But what really caught my attention was this sample code on the site: an actual, living, breathing raytracer.
After poking around the Internet, I found that this TypeScript version is probably a port of this one, written in C# by yet another prominent Microsoft developer.
My code is around 800 lines, compared to the 200 and 300 lines of those versions, respectively. I could work to make the code smaller, but I don't intend to publish it or anything; it's only a learning tool for better understanding graphics.
So while I didn't program it from scratch, I think it's the next best thing for learning how to program a raytracer. I suggest you check out that TypeScript code and make a port yourself.
The big question for me was “Why did it take me so long?” – besides having the attention span of a newborn puppy, jumping from project to project…
Part of the answer lies in a fundamental aspect of the raytracer: the camera. I’ve noticed that raytracing tutorials hardly ever discuss camera setup. Camera vectors, yes: the forward, the up, the right, and so on, but not how to lay out the rays’ starting directions before shooting them. I was always trying to achieve some form of circular disposition for my rays, naively following a sphere’s natural form, and for some reason that kept failing. Imagine my surprise when I saw that this code didn’t care about that at all. It is flat based: each ray is sampled at equal intervals on a plane, and rotation is then applied to reach the desired angle. Which kind of makes sense, since it matches our “camera” view of a flat screen.
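The flat layout is easy to sketch. Here is a minimal version of that sampling in TypeScript; the names and signatures are my own, not the original port’s, but the idea is the same: each pixel maps to a point on a flat plane in front of the camera, and the ray direction is just forward plus the scaled right/up offsets, normalized.

```typescript
// Minimal sketch of flat image-plane ray generation (hypothetical
// names; not copied from the original port). Each pixel lands on an
// evenly spaced grid on a flat plane; no spherical layout needed.

interface Vec3 { x: number; y: number; z: number; }

const scale = (k: number, v: Vec3): Vec3 =>
  ({ x: k * v.x, y: k * v.y, z: k * v.z });
const add = (a: Vec3, b: Vec3): Vec3 =>
  ({ x: a.x + b.x, y: a.y + b.y, z: a.z + b.z });
const norm = (v: Vec3): Vec3 => {
  const len = Math.sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
  return scale(1 / len, v);
};

// forward, right and up are the camera's basis vectors. In the code I
// studied, right and up carry that 1.5 factor, which (together with
// the extra division by 2 below) is what fixes the field of view.
function rayDirection(
  px: number, py: number, width: number, height: number,
  forward: Vec3, right: Vec3, up: Vec3
): Vec3 {
  // Recenter pixel coordinates to roughly [-0.25, 0.25] on the plane
  // (note the extra /2, mirrored from the screen operations).
  const sx = (px - width / 2) / 2 / width;
  const sy = -(py - height / 2) / 2 / height; // flip y: screen y grows down
  return norm(add(forward, add(scale(sx, right), scale(sy, up))));
}

// The center pixel looks straight down the forward axis:
const center = rayDirection(
  50, 50, 100, 100,
  { x: 0, y: 0, z: 1 }, { x: 1.5, y: 0, z: 0 }, { x: 0, y: 1.5, z: 0 }
);
console.log(center); // → { x: 0, y: 0, z: 1 }
```

With those numbers, the rightmost ray tilts by atan(0.25 × 1.5) off the axis, which is where the roughly 41-degree field of view comes from.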
The results speak for themselves and I’m quite happy with them, because I can focus on the more pleasing aspects of the job: showing off decent renderings, dealing with lights, shapes, etc. Lots of work and study ahead, but I never thought it was going to be easy.
But regardless of the results, the code does present some challenges in the area of the camera: there is no parameter for setting a precise field of view (FOV), which makes some of the calculations seem arbitrary – like using 1.5 as a factor for the cross-product results that give the right and up vectors, or the fact that the results can be further simplified because of an additional division by 2 in the raytracer’s screen operations. Call it a gut feeling, but I ran some measurements on the angular distance between adjacent rays – or theta (θ) in trig parlance – and it is not equal across all rays. The current camera has a FOV of about 41.11 degrees. On a 10×10 image, the angle between the two rays furthest from the center is only about 91% of the angle between the two closest to it. When I increased the number of pixels, my tests showed a more significant drop, down to about 87% on an 800×800 image. Is this a “normal” way to set up a camera? The inevitable artifact of rendering to a flat screen? I guess there’s always room for more personal research.
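Those percentages can be reproduced with a few lines of arithmetic. This is my own measurement sketch, not code from the port: on a flat image plane whose half-width is tan(FOV/2) = 0.375 (the 0.25 recentering times the 1.5 factor), each pixel ray sits at an angle of atan(x), so the angular spacing between neighbors shrinks toward the edges.

```typescript
// My own back-of-the-envelope check of the uneven ray spacing
// (hypothetical helper names). halfWidth = 0.25 * 1.5 = 0.375,
// i.e. tan of half the ~41.11-degree FOV.

function pixelAngles(n: number, halfWidth: number): number[] {
  // Angle (in radians) of each pixel-center ray along one axis
  // of the flat image plane.
  const angles: number[] = [];
  const pixelWidth = (2 * halfWidth) / n;
  for (let i = 0; i < n; i++) {
    const x = -halfWidth + (i + 0.5) * pixelWidth;
    angles.push(Math.atan(x));
  }
  return angles;
}

function edgeToCenterRatio(n: number): number {
  // n must be even so the two middle rays straddle the axis.
  const a = pixelAngles(n, 0.375);
  const center = a[n / 2] - a[n / 2 - 1]; // the two rays nearest the axis
  const edge = a[n - 1] - a[n - 2];       // the two outermost rays
  return edge / center;
}

console.log(edgeToCenterRatio(10));  // ≈ 0.92
console.log(edgeToCenterRatio(800)); // ≈ 0.88
```

So the drop really is an artifact of sampling a flat plane: the atan mapping compresses angles near the borders, and the effect saturates around tan²(FOV/2) as the resolution grows.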
Lately, I’ve been playing with OpenGL 4.3. This excellent tutorial on matrices in OpenGL really made my day. I could probably apply some of that knowledge to raytracing.