
For 15 years, I wanted to program my own raytracer …


On a friend’s recommendation, I checked out a site featuring TypeScript, a new, free and open-source superset of JavaScript created by some developers at Microsoft.  It’s an interesting language for those of us who want JavaScript to look more like an object-oriented language such as C++.

But what really caught my attention was some sample code on the site: an actual, living, breathing raytracer.

OK, this doesn’t sound impressive to any of you, fine!  You probably rolled your own in high school, or in kindergarten for that matter.  But I’ve been struggling with some parts of it for the last 15 years because of my lack of math skills, on which I am slowly catching up.  It took me several comebacks when, on and off, I would try to solve some basic stuff about shooting the darn rays.  And I didn’t want to just copy some code (there is plenty of raytracer code out there for free); I wanted to understand what was going on.  Luckily, this one is simple enough for learning the basic nuts and bolts of raytracers, so I gave it a shot and ported it from TypeScript to C++.  Porting also meant getting my hands dirty with the math, because what was easily resolved in a TypeScript function didn’t necessarily have an equivalent C++ library call.  And TypeScript is still JavaScript, so I had to really tear apart some of those embedded callbacks we love to hate when dealing with this language for the first time.  Nevertheless, the port was successful and I am proud to show my first image with it (which, unsurprisingly, looks like the one from the TypeScript site).

After poking around the Internet, I found that this TypeScript version was probably a port from this one written in C# by yet another prominent Microsoft developer.

My code is around 800 lines, compared to roughly 200 and 300 lines for those versions respectively.  I could work to make the code smaller, but my intention isn’t to publish it or anything, only to use it as a learning tool for better understanding graphics.

So while I didn’t program it from scratch, I think it’s the next best thing for learning how to program a raytracer.  I suggest you check out that TypeScript code and make a port yourself.

The big question for me was “Why did it take me so long?” – besides having the attention span of a newborn puppy, jumping from project to project …

Part of the answer lies in a fundamental aspect of the raytracer: the camera.   I’ve noticed that tutorials on raytracers hardly ever discuss camera setup.  Camera vectors, yes: the forward, the up, the right, etc., but not how to arrange the rays’ starting directions before shooting them.  I was always trying to achieve some form of circular arrangement for my rays, naively following a sphere’s natural form, and that failed for some reason.  Imagine my surprise when I saw that this code didn’t really care about that.  The rays are based on a flat plane, sampled at equal intervals, before rotation is applied to reach the desired angle.  Which kinda makes sense, since it matches our “camera” view of a flat screen.

The results speak for themselves and I’m quite happy with it, because I can focus on the more pleasing aspects of the job: showing off decent renderings, dealing with lights, shapes, etc.  Lots of work and study ahead, but I never thought it was going to be easy.

But regardless of the results, the code does present some challenges in the area of the camera: there is no parameter for setting a precise field of view (FOV), which makes some of the calculations seem arbitrary – like using 1.5 as a factor for the cross-product of right and up, or the fact that the results can be further simplified because of an additional division by 2 in the raytracer’s screen operations.  Call it a gut feeling, but I ran some measurements on the angle between adjacent rays – theta (θ) in trig parlance – and it is not equal amongst all rays.  The current camera has a FOV of about 41.11 degrees.  On a 10×10 image, the angle between the two rays furthest from the center drops to about 91% of the angle between the two closest ones.  When I increased the number of pixels, the drop was more significant: down to 87% on an 800×800 image.  Is this a “normal” way to set up a camera?  The inevitable artifact of rendering to a flat screen?  I guess there’s always room for more personal research.

Lately, I’ve been playing with OpenGL 4.3.  This excellent tutorial on matrices in OpenGL really made my day.  I could probably apply some of that knowledge to raytracing.
