Engine start-up (aka “Behold: the octree”)

I decided to start working on things other than raw OpenGL and focus on building a few engine features just for fun.  Voxel-based games are popular these days, thanks to Minecraft, Ace of Spades and the upcoming Vox and Stonehearth, so I decided to look around for info.  The creator of Vox made this web site on the intricacies of voxel engine building.  But I decided to poke around, mixing and matching various approaches just to see if something interesting would come out.

First, it was necessary to see if the standard OpenGL commands I had wrapped in useful classes would hold.  I built a class focused solely on chunk rendering, with an automated routine that chipped away a block at every pass, like a voxelized Swiss cheese.  It was a naive first step but I was happy with the results.

cube00-thumb cube01-thumb cube02-thumb

As you can see in the images below, even with little more than 300 cubes, the number of vertices is abnormally high, even though I’m careful to remove those that are common to adjacent cubes.  If I used a bigger chunk, OpenGL would choke.  Now, part of chunk management is keeping your chunks at a reasonable size, but that’s more for the rebuilding phase.  A single chunk that brings your video card to its knees is not a good sign.  Even with smaller chunks, you still need a lot of them and they generate a lot of vertices.  A lot!

cube03-thumb cube04-thumb

Of course, in a normal voxel-based game, cubes don’t tend to be cut away in such a fashion.  In this demo, more vertices appear because the number of solitary cubes goes up, each rendering the full 36 vertices that make up a complete cube (6 faces × 2 triangles × 3 vertices).
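To make that concrete, here is a minimal sketch (the names and the tiny 4×4×4 chunk are mine, not the engine’s) of the hidden-face removal involved: a cube only needs vertices for a face when the neighbouring cell is empty.

```cpp
#include <cstdint>

// Toy sketch: count the vertices a small chunk actually needs when faces
// shared with a solid neighbour are skipped. A cube standing alone costs
// the full 36 vertices (6 faces x 2 triangles x 3 vertices).
const int N = 4;                       // tiny 4x4x4 chunk for illustration
bool solid[N][N][N];

bool isSolid(int x, int y, int z) {
    if (x < 0 || y < 0 || z < 0 || x >= N || y >= N || z >= N) return false;
    return solid[x][y][z];
}

int countVertices() {
    static const int dir[6][3] = { {1,0,0}, {-1,0,0}, {0,1,0},
                                   {0,-1,0}, {0,0,1}, {0,0,-1} };
    int verts = 0;
    for (int x = 0; x < N; ++x)
        for (int y = 0; y < N; ++y)
            for (int z = 0; z < N; ++z) {
                if (!solid[x][y][z]) continue;
                for (auto& d : dir)    // emit a face only toward empty space
                    if (!isSolid(x + d[0], y + d[1], z + d[2]))
                        verts += 6;    // 2 triangles x 3 vertices per face
            }
    return verts;
}
```

A fully solid 4×4×4 chunk needs only its 96 surface faces (576 vertices) instead of the naive 64 × 36 = 2304, which is the saving the paragraph above is chasing.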

But the problem with this approach is that I still send information about the vertices that face away from the camera, the ones behind the three visible sides of each cube.  Culling is usually the answer to this: it is the process of not drawing a polygon that faces away from the camera, and OpenGL can do this on demand.  But that doesn’t stop me from overflowing the vertex pipeline.  So why send so much information when half of it won’t be rendered anyway?

So I then rolled out my own culling process to prevent sending unwanted vertices, but it wasn’t a rousing success.  I tried a simple test: skip vertices when the chunk lies wholly on one side of an axis or the other.  Things got ugly when a chunk straddled the axis or was rotated … I ended up with extra or missing vertices.  A real mess.

I was looking for a way to subdivide the chunk and orient the culling on its individual sections when I ended up reading about octrees.  And then my brain started to expand.

An octree is a way to understand the context of a single point in space by subdividing the “world” into 8 blocks.  If the point is within one of these blocks, that block is subdivided into 8 blocks again, then again, and again, until the relevant information is matched (or not).  For example, to evaluate a possible collision between a bullet and a potential target, just subdivide your space toward the bullet and see if the target shares its space.

Below, you have a cubic “bullet” zooming into a fictitious – yet dangerous – world.  See how wireframed cubes are forming around it, defining a context in which other elements in the game may be analyzed.

octree00-thumb   octree01-thumb

By this “divide and conquer” approach, you can get information very quickly on very large spaces.  In the frame below, if our bullet were a meter long and wide (more like a giant cannonball than a tiny bullet), then the entire “world” this larger cube represents would be half a kilometre – 512 meters to be exact.  And yet it took little more than 9 sets of guesses (because 512 is 2^9) to zoom in on the bullet.
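The descent itself can be sketched in a few lines (a toy version with names of my own choosing, not my engine code): at each level, step the cell’s centre toward the octant that contains the point and halve the cell, until the cell is no bigger than the bullet.

```cpp
// Toy octree descent (illustrative, not engine code): starting from a cube
// `worldSize` across, centred at the origin, pick the octant containing
// point p, halve the cell, and repeat until the cell is no bigger than the
// target. Returns the number of subdivisions performed.
struct Vec3 { double x, y, z; };

int levelsToReach(Vec3 p, double worldSize, double targetSize) {
    double cx = 0, cy = 0, cz = 0;   // centre of the current cell
    double half = worldSize / 2;
    int levels = 0;
    while (half * 2 > targetSize) {
        double q = half / 2;
        cx += (p.x >= cx) ? q : -q;  // step toward the octant holding p
        cy += (p.y >= cy) ? q : -q;
        cz += (p.z >= cz) ? q : -q;
        half = q;
        ++levels;
    }
    return levels;
}
```

With a 512-metre world and a one-metre bullet, the loop runs exactly 9 times, matching the 2^9 observation above.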

octree02-thumb

All in all, an octree is a perfect tool for a voxel-based engine: cube positioning and orientation, occlusion culling, targeting … I’ll have to review my first attempt and see how octrees can be an integral part of the engine now.

2D over 3D

So I implemented this tutorial from my favourite place.  While the results were good, I started to hack around them in order to create A Better Font Management System™.  It went on for days …

The first thing I did was deal with the colour.  This is an old trick I learned with OpenGL: it’s easy to tint white fonts to any other colour simply by playing with the colour calls.  Say you have, like most games today, a chat system with different colours highlighting various functions: local chat, world chat, whispers, system messages …  Well, you’re not going to create a strip of coloured fonts for each of these functions.  Previously, it was all about calling glColor() before each draw call.  Now with shaders, it’s just a matter of adding these 2 lines to a simple fragment shader for textures.

uniform vec3 Filter;
(...)
color = texture2D( myTextureSampler, UV ) - (vec4(1.0, 1.0, 1.0, 0.0) - vec4(Filter, 0.0));

This may not look like much, but this is actually the first time I modified a shader by myself for an original solution.  There are other ways to do it, but this one is mine. 🙂
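Running the same arithmetic on the CPU shows why it works on white glyphs (a toy struct of my own, not engine code): subtracting (white − Filter) leaves exactly the filter colour wherever the texel was white, and the alpha channel is untouched.

```cpp
// Toy CPU version of the shader's subtraction (illustrative only):
// result = texel - (white - filter). A pure white texel becomes the
// filter colour; alpha is preserved so transparency still works.
struct RGBA { float r, g, b, a; };

RGBA applyFilter(RGBA texel, float fr, float fg, float fb) {
    return { texel.r - (1.0f - fr),
             texel.g - (1.0f - fg),
             texel.b - (1.0f - fb),
             texel.a };
}
```

A white pixel filtered with (1.0, 0.25, 0.25) comes out as a red tint, which is all a chat-colour system needs.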


fontcolors-thumb

The second thing to consider was the usual warning about foreign languages.  The tutorial repeats this warning, but within its scope it still uses the old ASCII-based grid trick for building the font map, which means it’s tailored for an Anglo-Saxon public.  To support another mapping (and there are plenty out there), you must create another map of those characters.  Another problem I find with a fixed ASCII grid is that you may not want to write text at all, but simply use numbers with commas, dots and colons, plus a few unusual dingbats for good humour.  In that case, all the other characters are wasted.  So instead of automagically mapping your UVs as mathematical offsets into a grid based on the ASCII code of your characters, it’s better to have a single strip of arbitrary characters, each isolated by a grid that remembers the Unicode number of each one.

Now Unicode isn’t that hard to learn. I found this article very useful to hack my own UTF-8 based system.
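For reference, the core of a UTF-8 decoder really is short.  This is my own sketch of the standard scheme (no error handling for malformed input): the lead byte announces how many continuation bytes follow, and each continuation byte contributes its low 6 bits.

```cpp
#include <cstdint>
#include <string>
#include <vector>

// Minimal UTF-8 decoder sketch (assumes well-formed input): the lead byte
// announces the sequence length; each continuation byte adds 6 bits.
std::vector<uint32_t> utf8ToCodepoints(const std::string& s) {
    std::vector<uint32_t> out;
    size_t i = 0;
    while (i < s.size()) {
        unsigned char b = static_cast<unsigned char>(s[i++]);
        uint32_t cp;
        int extra;
        if      (b < 0x80)         { cp = b;        extra = 0; } // 0xxxxxxx: ASCII
        else if ((b >> 5) == 0x06) { cp = b & 0x1F; extra = 1; } // 110xxxxx
        else if ((b >> 4) == 0x0E) { cp = b & 0x0F; extra = 2; } // 1110xxxx
        else                       { cp = b & 0x07; extra = 3; } // 11110xxx
        while (extra-- > 0 && i < s.size())
            cp = (cp << 6) | (static_cast<unsigned char>(s[i++]) & 0x3F);
        out.push_back(cp);
    }
    return out;
}
```

Decoding "Aé" yields the codepoints 0x41 and 0xE9, and a three-byte sequence like the hiragana あ comes out as 0x3042, ready to be used as a map key.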


jpfont-thumb

So then it’s just a matter of scanning the strip to isolate the rectangles surrounding each character, packing them (using a solution similar to what I presented in my last post) and, finally, keeping a separate file for the UVs and any additional info in a manageable format like XML or JSON.


json-thumb   packing-thumb
In this “game” (together with Oh, so beautiful Suzanne), I stored each rectangle’s position in an STL map with the Unicode number as the key.  Each time I scan a character, I look up the key and get the correct rectangle from the map.
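A hedged sketch of that lookup (the struct layout and the ‘?’ fallback are my own choices, not necessarily what the game does): rectangles from the packing step, keyed by codepoint in an STL map.

```cpp
#include <cstdint>
#include <map>

// Sketch of the glyph lookup (illustrative names): rectangles from the
// packing step, keyed by Unicode codepoint.
struct GlyphRect { int x, y, w, h; };

std::map<uint32_t, GlyphRect> glyphTable;

// Returns the rectangle for a codepoint; characters missing from the strip
// fall back to '?' (or an empty rect if even '?' is absent).
GlyphRect lookupGlyph(uint32_t codepoint) {
    auto it = glyphTable.find(codepoint);
    if (it != glyphTable.end()) return it->second;
    auto fb = glyphTable.find(0x3F);   // '?' as a stand-in glyph
    return fb != glyphTable.end() ? fb->second : GlyphRect{0, 0, 0, 0};
}
```

Since the strip holds only the characters you actually use, the map stays small, and the log-time lookup is nowhere near a bottleneck for text rendering.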

finalrender-thumb

I stumbled over a few blocks along the way.  I panicked a bit when I saw that the rendering was not quite pixel-perfect … Did something go wrong in the packing?  Were the characters packed too close?  I found that for successful 2D rendering, you must turn off filtering.  The fact that the tutorial used a TGA file instead of the usual PNG or DDS format may have been for the practical reason of isolating the filtering and mip-mapping routines that are so common in the other tutorials.  And I had naïvely assumed that I should write them as part of the loading process for any map.
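For the record, “turning off filtering” amounts to asking for nearest-neighbour sampling when the texture is set up.  These are the standard OpenGL calls (a configuration fragment, to be run with a bound texture in a live GL context):

```cpp
// Nearest-neighbour sampling: no mip-maps, no blending between texels,
// so a quad drawn at 1:1 scale stays pixel-perfect.
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
```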


pixelperfect-thumb

So that was fun.  I’m far from a complete solution, as there are many tricks to master: scaling, rotation, alignment, spacing … but there are also times when a font system isn’t necessary and pre-rendered text on a quad is the best way to display an original message.

Also, my current system renders fonts with FreeType (they provide an excellent tutorial here), so you can see a few not-so-smooth contours.  It provides a nice quick solution, but creating beautiful fonts is also an art – and my knowledge of FreeType is limited.  So retouching them with The Gimp improved things dramatically, as you can see in the “18.018” image above.

I’ll be taking a break from the standard OpenGL tutorials to work on some exciting new game-engine stuff.

See ya soon and Happy Hacking.

Packing it up

I got pretty busy last week trying to implement a tutorial on 2D text.  The experience was successful and I’ll be posting some results soon.  But the more I got into it, the more I saw possibilities to expand on it and I pulled out some old code from my OpenGL 2 library and started to geek out on an actual tool to map and pack 2D font textures (or any textures actually) into larger maps.

Discussing the “best” algorithm for packing is akin to Ukrainian mothers discussing their family’s borscht recipe: everybody thinks theirs is the original and the best (or so I’ve been told by Ukrainian acquaintances).

There are different ways of naming the problem (texture packing, the knapsack problem …), but pretty much everyone agrees that truly optimizing it is NP-hard, which means that finding a fast solution to pack all your textures in the most efficient way will take more time than you’re willing to spare to code, test and execute this beast.  Which is why lazy persons like myself use one that “just works” and move on.  One day, if the solution causes any trouble, then we put time and money into finding a better one (for a chance at a slightly better efficiency rate, say 3%, like some guy I read earlier on Stack Overflow).

Basically, I like this developer’s solution.  It’s simple to understand and code, and it gives great results, as you can see in the image below.  This was done with a set of 3 tile sizes ordered randomly; the recursive algorithm aborts when it meets its first tile that cannot be placed.  And I’m sure we could squeeze in a few more green ones.  This was a simple test and already it looks fine.  But in a real-life situation, tiles are first sorted from biggest to smallest before packing.  And for those that wouldn’t fit, it would be trivial to implement some sort of overflow tile.
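My reading of that approach, as a hedged sketch (the node layout is my own paraphrase, not the developer’s actual code): each placed tile splits the remaining free space into a strip to its right and a strip below it, and later insertions recurse through those strips until one fits.

```cpp
#include <memory>

// Sketch of the classic recursive packing node (my paraphrase): placing a
// rectangle in a free leaf splits the leftover space into a right strip
// and a bottom strip, and insertion recurses through the tree.
struct Node {
    int x, y, w, h;
    bool used = false;
    std::unique_ptr<Node> right, down;

    Node(int x_, int y_, int w_, int h_) : x(x_), y(y_), w(w_), h(h_) {}

    // Try to place a rw x rh rectangle; returns the node it landed in,
    // or nullptr when it cannot be placed anywhere in this subtree.
    Node* insert(int rw, int rh) {
        if (right) {                      // already split: try both strips
            if (Node* n = right->insert(rw, rh)) return n;
            return down->insert(rw, rh);
        }
        if (used || rw > w || rh > h) return nullptr;
        used = true;
        right = std::make_unique<Node>(x + rw, y, w - rw, rh);
        down  = std::make_unique<Node>(x, y + rh, w, h - rh);
        w = rw; h = rh;                   // this node now holds exactly the rect
        return this;
    }
};
```

Feeding it the tiles sorted from biggest to smallest, as described above, is what keeps the small ones filling the strips left over by the big ones.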

See you next time when we discuss 2D text and see how all this fits together.

pack
