November 16th is “Tutorials day”

No, it’s not.  But maybe I should declare it so!

“Tutorials Day” is when everyone is invited to read, watch, or listen to tutorials of their choice and do something useful or artistic with what they learned that day.

As usual, I was pretty busy neglecting the housework and laundry chores (no surprise), watching and reading many tutorials on OpenGL and Blender instead.  The goal for today was texturing.  While I disclosed my boredom with spinning cubes in my last post, truth be told, texturing a cube is hard enough for the amateur artist that I am.  And so I agonized over numerous tutorials on Blender texturing.  None of them really struck gold, but CGCookie had one that I particularly liked, although I found that learning half of it was enough for my goal.

[Images: the textured cube rendered in OpenGL; the cube in Blender]

The real challenge isn’t successfully following tutorials; it’s making sure this knowledge will scale as the number of cubes (or characters, or houses, or cars) starts to grow, if I want to make anything useful.  I think I’ll be up to the challenge on the OpenGL side of things.  But for Blender?  Getting a good artist on board might be a smarter move.

OpenGL 4+: the first steps

I got a little depressed when I felt the learning curve going from OpenGL 2 to more recent versions. NeHe’s tutorials are deprecated, and the current titles in the bookstores take too much time to get to the “real stuff”. Luckily, the person running this tutorial site (according to NeHe, his name is “Damien”) offers a wide range of lessons in “modern” OpenGL to kick-start your apprenticeship.  Not that the bigger books aren’t important, but as a hacker, I always learned by focusing my interest on building cool stuff first, then cracking open a book like the OpenGL SuperBible or the famous “Red Book” to understand the more technical details.

And it’s not just about OpenGL, but also how to interact with other popular software packages like Blender.  There’s a nice tutorial on how to program your own OBJ format loader.  This gives a huge boost of confidence in what you can achieve, and it lets you play and have fun with your own code instead of watching rotating cubes.
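In case you’re wondering what such a loader boils down to, here’s a minimal sketch of my understanding, not the tutorial’s code.  It only handles bare `v` and `f` records with plain triangle indices (the real format also has `v/vt/vn` faces, normals, texture coordinates and more), and the `Mesh`/`loadObj` names are mine:

```cpp
#include <istream>
#include <sstream>
#include <string>
#include <vector>

struct Vec3 { float x, y, z; };

struct Mesh {
    std::vector<Vec3> vertices;
    std::vector<unsigned> indices;  // three per triangle
};

// Parses the two record types needed to draw flat geometry:
// "v x y z" vertex positions and "f a b c" triangular faces.
// OBJ indices are 1-based; OpenGL wants them 0-based.
Mesh loadObj(std::istream& in) {
    Mesh mesh;
    std::string line;
    while (std::getline(in, line)) {
        std::istringstream s(line);
        std::string tag;
        s >> tag;
        if (tag == "v") {
            Vec3 v{};
            if (s >> v.x >> v.y >> v.z) mesh.vertices.push_back(v);
        } else if (tag == "f") {
            unsigned a = 0, b = 0, c = 0;
            if (s >> a >> b >> c) {
                mesh.indices.push_back(a - 1);
                mesh.indices.push_back(b - 1);
                mesh.indices.push_back(c - 1);
            }
        }  // "vt", "vn", comments, etc. are silently skipped here
    }
    return mesh;
}
```

Feed it a triangulated OBJ exported from Blender (with the extra attributes stripped) and you get vertex and index arrays ready to hand to `glBufferData`.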

I like the NASA 3D Resources page, where you can download models of rockets and other space vehicles. International Space Station? No problemo.  Convert to OBJ and voilà.


Next step: texturing.

For 15 years, I wanted to program my own raytracer …


On a friend’s recommendation, I checked out a site featuring a new, free, open-source JavaScript superset called TypeScript, created by some developers at Microsoft.  It’s an interesting language for those of us who want JavaScript to look more like an object-oriented language such as C++.

But what really caught my attention was a sample on the site that was an actual, living, breathing raytracer.

OK, this doesn’t sound impressive to any of you, fine!  You probably rolled your own in high school, or in kindergarten for that matter.  But I’ve been struggling with some parts of it for the last 15 years because of my lack of math skills, which I am slowly catching up on.  It took me several comebacks when, on and off, I would try to solve some basic stuff about shooting the darn rays.  And I didn’t want to just copy some code (there is plenty of raytracer code out there for free); I wanted to understand what was going on.  Luckily, this one is simple enough for learning the basic nuts and bolts of raytracers, so I gave it a shot and ported it from TypeScript to C++.  Porting also meant getting my hands dirty with the math, because what was easily resolved in a TypeScript function didn’t mean an equivalent C++ library call was available.  And TypeScript is still JavaScript, so I had to really tear apart some of those embedded callbacks we love to hate when dealing with this language for the first time.  Nevertheless, the port was successful, and I am proud to show my first image with it (which, unsurprisingly, looks like the one from the TypeScript site).
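For anyone else who has circled this problem for years: the heart of it is just a quadratic.  This isn’t the sample’s code, just the textbook ray–sphere intersection in C++, with my own `Vec3`/`hitSphere` names:

```cpp
#include <cmath>
#include <optional>

struct Vec3 {
    double x, y, z;
    Vec3 operator-(Vec3 o) const { return {x - o.x, y - o.y, z - o.z}; }
};
double dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Distance along the ray (origin + t*dir, dir normalized) to the first
// hit on a sphere, or nothing if the ray misses.  From
// |o + t*d - c|^2 = r^2 you get  t^2 + 2t*d.(o-c) + |o-c|^2 - r^2 = 0.
std::optional<double> hitSphere(Vec3 origin, Vec3 dir, Vec3 center, double r) {
    Vec3 oc = origin - center;
    double b = dot(oc, dir);             // half of the quadratic's b
    double c = dot(oc, oc) - r * r;
    double disc = b * b - c;
    if (disc < 0) return std::nullopt;   // ray misses the sphere
    double t = -b - std::sqrt(disc);     // nearest of the two roots
    if (t < 0) return std::nullopt;      // sphere is behind the camera
    return t;
}
```

Everything else in a minimal raytracer (shading, shadows, reflections) is built by shooting more rays through this same test.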

After poking around the Internet, I found that this TypeScript version was probably a port of this one, written in C# by yet another prominent Microsoft developer.

My code is around 800 lines, compared to the 200 and 300 lines of those versions respectively.  I could work to make it smaller, but I don’t intend to publish it or anything – only to use it as a learning tool for better understanding graphics.

So while I didn’t program it from scratch, I think it’s the next best thing for learning how to program a raytracer.  I suggest you check out that TypeScript code and make a port yourself.

The big question for me was “Why did it take me so long?” – besides having the attention span of a newborn puppy, jumping from project to project…

Part of the answer lies in a fundamental aspect of the raytracer: the camera.  I’ve noticed that tutorials on raytracers hardly ever discuss camera setup.  Camera vectors, yes – the front, the up, the right, etc. – but not how to align the rays in their starting positions before shooting them.  I was always trying to achieve some form of circular disposition for my rays, naively following a sphere’s natural form, and for some reason that failed.  Imagine my surprise when I saw that this code didn’t really care about that.  It is flat-based: each ray is sampled at equal spacing before a rotation is applied to reach the desired angle.  Which kinda makes sense, since it matches our “camera” view of a flat screen.
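To make that flat-based sampling concrete, here’s how I picture it in C++ (a sketch with my own names; the sample’s actual code differs in details like the half-pixel centering): every pixel gets an evenly spaced point on a flat plane one unit in front of the eye, and the camera basis simply rotates the whole grid.

```cpp
#include <cmath>

struct Vec3 { double x, y, z; };
Vec3 add(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
Vec3 scale(Vec3 v, double s) { return {v.x * s, v.y * s, v.z * s}; }
Vec3 normalize(Vec3 v) {
    double len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    return {v.x / len, v.y / len, v.z / len};
}

// Rays start on a flat, evenly spaced grid one unit in front of the
// eye; the camera's right/up/forward axes then rotate that grid toward
// the look direction.  No spherical layout anywhere.
Vec3 rayDir(int px, int py, int w, int h,
            Vec3 right, Vec3 up, Vec3 forward) {
    double sx = (px + 0.5) / w - 0.5;   // -0.5..0.5 across the screen
    double sy = 0.5 - (py + 0.5) / h;   // flipped so +y points up
    return normalize(add(add(scale(right, sx), scale(up, sy)), forward));
}
```

Scaling `right` and `up` is where a factor like that 1.5 sneaks in: it widens the grid and therefore the field of view.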

The results speak for themselves, and I’m quite happy because I can now focus on the more pleasing aspects of the job: showing off decent renderings, dealing with lights, shapes, etc.  Lots of work and study ahead, but I never thought it was going to be easy.

But regardless of the results, the code does present some challenges in the area of the camera: there is no parameter for setting a precise field of view (FOV), which makes some of the calculations seem arbitrary – like using 1.5 as a factor for the cross product of right and up, or the fact that the results can be further simplified because of an additional division by 2 in the raytracer’s screen operations.  Call it a gut feeling, but I ran some measurements on the angular distance between each ray – theta (θ) in trig parlance – and it is not equal amongst all rays.  The current camera has a FOV of about 41.11 degrees. I rendered a 10×10 image, and the angle between the two rays furthest from the center is only about 91% of the angle between the two closest ones.  When I increased the number of pixels, my tests showed a more significant drop, down to 87% on an 800×800 image.  Is this a “normal” way to set up a camera? An inevitable artifact of rendering to a flat screen? I guess there’s always room for more personal research.
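I redid a one-dimensional version of that measurement to convince myself.  Assuming the image plane sits one unit away with half-width tan(FOV/2) ≈ 0.375 (which is what 1.5 divided by 2 twice would give, and what yields that 41.11° figure), the angular step between neighboring rays does shrink toward the edges:

```cpp
#include <cmath>

// Angle between two rays hitting a flat image plane (at distance 1)
// at horizontal offsets x0 and x1: atan gives each ray's angle from
// the view axis.
double thetaBetween(double x0, double x1) {
    return std::atan(x1) - std::atan(x0);
}

// For an n-pixel row spanning half-width hw = tan(FOV/2), compare the
// angular step between neighbors at the edge of the screen with the
// step at the center.  The flat grid spacing itself is uniform.
double edgeToCenterRatio(int n, double hw) {
    double step = 2 * hw / n;
    double centerTheta = thetaBetween(-step / 2, step / 2);
    double edgeTheta = thetaBetween(hw - step, hw);
    return edgeTheta / centerTheta;
}
```

With hw = 0.375 this comes out to roughly 0.90 for a 10-pixel row and 0.88 for an 800-pixel one – the same ballpark as the 91% and 87% measured above, so the shrinking θ looks like nothing more than the flat-screen projection at work.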

Lately, I’ve been playing with OpenGL 4.3.  This excellent tutorial on matrices in OpenGL really made my day.  I could probably apply some of that knowledge to raytracing.
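A first bridge between the two worlds: the rotation part of an OpenGL view matrix and a raytracer camera are built from the same three orthonormal axes.  Here is a minimal lookAt-style construction of that shared basis (my own sketch, following the usual gluLookAt convention, not code from either project):

```cpp
#include <cmath>

struct Vec3 { double x, y, z; };
Vec3 sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
Vec3 cross(Vec3 a, Vec3 b) {
    return {a.y * b.z - a.z * b.y,
            a.z * b.x - a.x * b.z,
            a.x * b.y - a.y * b.x};
}
Vec3 normalize(Vec3 v) {
    double len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    return {v.x / len, v.y / len, v.z / len};
}

struct Basis { Vec3 forward, right, up; };

// The same three orthonormal axes drive both an OpenGL view matrix
// (its rotation rows are right, up, -forward) and a raytracer camera
// (they orient the flat grid of rays).
Basis lookAt(Vec3 eye, Vec3 target, Vec3 worldUp) {
    Basis b;
    b.forward = normalize(sub(target, eye));
    b.right = normalize(cross(b.forward, worldUp));
    b.up = cross(b.right, b.forward);   // already unit length
    return b;
}
```

One function, two renderers.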
