Tag Archives: Rendering

Mesh Loader

I’ve started to work on fleshing out a proper set of mesh classes for handling mesh instancing, animation and whatnot. I scrapped my old .obj loader and wrote a new one that’s more consistent with the current codebase. Here are a few shots of the space scene running with different meshes:
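For illustration, the core of an .obj parser looks something like this; a trimmed-down sketch with hypothetical names that only handles vertex positions and triangle faces, not the actual loader:

    #include <cstdio>
    #include <vector>

    struct Vec3 { float x, y, z; };

    // Minimal .obj parser: vertex positions and triangle faces only.
    // A real loader also needs normals, texcoords, quads and materials.
    bool loadObj(const char* path, std::vector<Vec3>& verts,
                 std::vector<unsigned>& indices)
    {
        FILE* file = std::fopen(path, "r");
        if (!file) return false;

        char line[256];
        while (std::fgets(line, sizeof(line), file)) {
            Vec3 v;
            unsigned a, b, c;
            if (std::sscanf(line, "v %f %f %f", &v.x, &v.y, &v.z) == 3) {
                verts.push_back(v);
            } else if (std::sscanf(line, "f %u %u %u", &a, &b, &c) == 3) {
                // .obj indices are 1-based.
                indices.push_back(a - 1);
                indices.push_back(b - 1);
                indices.push_back(c - 1);
            }
        }
        std::fclose(file);
        return true;
    }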

Given that I do my modeling in Blender 3D, I felt obligated to include Suzanne, a.k.a. the “Blender monkey”, as one of my test objects. I haven’t set up trimesh collisions for Bullet yet, so Suzanne is stuck with a (rather poorly sized) bounding box. The tori use a cylinder. I’m really happy with how the lighting is working on smoother objects though, and with the scene’s performance in general.
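For reference, the Bullet trimesh setup I still need to do would look roughly like this; a sketch assuming the vertex and index arrays from the loader above, and only valid for static geometry:

    #include <btBulletDynamicsCommon.h>
    #include <vector>

    // Build a static triangle-mesh collision shape from loaded mesh data.
    // 'Vec3' is the same position struct used in the loader sketch.
    btCollisionShape* makeTrimeshShape(const std::vector<Vec3>& verts,
                                       const std::vector<unsigned>& indices)
    {
        btTriangleMesh* mesh = new btTriangleMesh();
        for (size_t i = 0; i + 2 < indices.size(); i += 3) {
            const Vec3& a = verts[indices[i]];
            const Vec3& b = verts[indices[i + 1]];
            const Vec3& c = verts[indices[i + 2]];
            mesh->addTriangle(btVector3(a.x, a.y, a.z),
                              btVector3(b.x, b.y, b.z),
                              btVector3(c.x, c.y, c.z));
        }
        // Only usable for static (mass 0) bodies; dynamic meshes need a
        // convex hull or btGImpactMeshShape instead.
        return new btBvhTriangleMeshShape(mesh, true /*quantized AABBs*/);
    }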

T


New Light Manager and Deferred Rendering

I haven’t had as much time to work on Ion over the last few days; however, I put in a few solid hours of coding today and finished the first version of my light manager. It’s only hooked into the deferred renderer at the moment, since I still need to finish the code that’ll allow the forward renderer to pull a limited number of lights based on some input factors. At the moment lights can have a custom position, color, intensity and a variety of different fall-off equations. I’d like to add support for lens flares and cubemap masks later on, to add some extra visuals.
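In sketch form, a light record and its fall-off evaluation might look like this (illustrative names only, not the actual manager code):

    // Per-light data tracked by the light manager (illustrative only).
    enum LightFalloff { FALLOFF_NONE, FALLOFF_LINEAR, FALLOFF_QUADRATIC };

    struct Light {
        float position[3];
        float color[3];      // RGB, 0-1
        float intensity;
        float radius;        // distance at which the contribution hits zero
        LightFalloff falloff;
    };

    // Evaluate a light's contribution at a given distance.
    float attenuate(const Light& light, float dist)
    {
        float t = dist / light.radius;
        if (t >= 1.0f) return 0.0f;
        switch (light.falloff) {
            case FALLOFF_LINEAR:    return light.intensity * (1.0f - t);
            case FALLOFF_QUADRATIC: return light.intensity * (1.0f - t) * (1.0f - t);
            default:                return light.intensity;
        }
    }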

For the deferred renderer, the light values are stored in a pixel buffer which is passed into the final deferred shader. Unfortunately I’m doing something wrong with glTexSubImage2D and it isn’t working properly, so I haven’t been able to add light animation to the system quite yet.
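The intended approach, roughly sketched (not my exact code, and the packing layout is an assumption): allocate a small float texture for the light data, then refresh it with glTexSubImage2D whenever lights change.

    #include <OpenGL/gl.h> // Mac headers; use GLEW or similar elsewhere

    const int kMaxLights = 64;
    GLuint lightTex = 0;

    // One row per light, three RGBA float texels per row:
    // position, color, and (intensity, falloff, radius) parameters.
    void createLightTexture()
    {
        glGenTextures(1, &lightTex);
        glBindTexture(GL_TEXTURE_2D, lightTex);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA32F_ARB, 3, kMaxLights, 0,
                     GL_RGBA, GL_FLOAT, 0);
    }

    // Re-upload when light values change. 'packed' holds lightCount rows
    // of 3 RGBA float texels. Note that the format/type arguments describe
    // the client data, and the updated region must fit inside the texture
    // allocated by glTexImage2D; both are easy ways to get this call wrong.
    void updateLightTexture(const float* packed, int lightCount)
    {
        glBindTexture(GL_TEXTURE_2D, lightTex);
        glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 3, lightCount,
                        GL_RGBA, GL_FLOAT, packed);
    }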

The scene setup I’m using to test things out has seven lights in it: a white light at the center, four colored lights at the four sides of the cube stack, and two small, bright lights (pink and cyan) directly next to the stack. I set up the scene in a 0-gravity space environment as well, so the effects of the lighting would be more obvious. Here are a handful of screenshots of the test scene:

I also recorded a video; however, QuickTime dropped the FPS down a bit. In the video each click adds another 6x6x6 group of physics cubes; at the end of the recording there are several thousand floating around. At one point there’s a bit of visual lag, since I added several groups in quick succession and the physics and render loops are tied together. Anyways, here’s the video: http://vimeo.com/28528048.

That’s it!

T


Dabbling with Deferred

In my last post about the new rendering system, I mentioned that one of my tests involved some basic deferred rendering. I only went as far as splitting the color, position and normals into separate targets, so today I decided to actually do something with them. I looked into some basic lighting equations and wrote a new shader to combine the render output. There are still some artifacts and incorrect spots, as I’m not well versed in the topic, but I’m satisfied with my progress.
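The combine pass boils down to something like the fragment shader below (a simplified sketch: uniform names are made up, normals are assumed packed into the 0-1 range, and positions are assumed stored in world space):

    // Fragment shader for the combine pass: Lambert diffuse per light with
    // simple distance attenuation, summed over the five hardcoded lights.
    static const char* kCombineFrag =
        "uniform sampler2D uAlbedo;\n"
        "uniform sampler2D uPosition;\n"
        "uniform sampler2D uNormal;\n"
        "uniform vec3 uLightPos[5];\n"
        "uniform vec3 uLightColor[5];\n"
        "void main() {\n"
        "    vec2 uv     = gl_TexCoord[0].st;\n"
        "    vec3 albedo = texture2D(uAlbedo, uv).rgb;\n"
        "    vec3 pos    = texture2D(uPosition, uv).xyz;\n"
        "    vec3 n      = normalize(texture2D(uNormal, uv).xyz * 2.0 - 1.0);\n"
        "    vec3 result = vec3(0.0);\n"
        "    for (int i = 0; i < 5; ++i) {\n"
        "        vec3  toLight = uLightPos[i] - pos;\n"
        "        float dist    = length(toLight);\n"
        "        float ndotl   = max(dot(n, toLight / dist), 0.0);\n"
        "        float atten   = 1.0 / (1.0 + 0.25 * dist * dist);\n"
        "        result += albedo * uLightColor[i] * ndotl * atten;\n"
        "    }\n"
        "    gl_FragColor = vec4(result, 1.0);\n"
        "}\n";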

I ended up including five lights in the test scene: red/magenta, blue, purple, yellow and green. I don’t have a proper lighting system in place, so they were simply hardcoded. Here are the results, from the Mac client of course:

This code won’t run on the iPhone, since there isn’t MRT support on it. It is possible to run a form of deferred rendering on the device using a variant of Wolfgang Engel’s method, as pointed out to me by a developer from Supermono Studios. Their team managed to produce a pretty impressive demo of the tech; a video of it can be seen on YouTube. This is another rendering technique I’d like to try at some point, but before that I need to finish the render manager and move the code over into the iPhone build.

T


Render Manager

I started working on implementing a proper render manager today. The scene graph will handle culling and checks to make sure an object should be rendered, at which point it will submit the object to the render manager’s queue. The queue is broken into several groups, such as Sky, Terrain, Mesh, Translucent and Post, based on what the object is. The Sky group, which will contain the skybox itself and any star fields or sun objects, will always be rendered first. Terrain is rendered second, then meshes, then translucent objects, and finally any post effects are applied.
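In sketch form, the queue might look like this (illustrative names; the real manager carries more state):

    #include <vector>

    // Render groups, drawn in this fixed order each frame.
    enum RenderGroup {
        GROUP_SKY = 0,     // skybox, star fields, sun objects
        GROUP_TERRAIN,
        GROUP_MESH,
        GROUP_TRANSLUCENT,
        GROUP_POST,        // post effects
        GROUP_COUNT
    };

    class Renderable {
    public:
        virtual ~Renderable() {}
        virtual void render() = 0;
    };

    class RenderManager {
    public:
        // Called by the scene graph once culling/visibility checks pass.
        void submit(RenderGroup group, Renderable* obj) {
            queues[group].push_back(obj);
        }

        // Flush each group in order, then clear for the next frame.
        void renderAll() {
            for (int g = 0; g < GROUP_COUNT; ++g) {
                for (size_t i = 0; i < queues[g].size(); ++i)
                    queues[g][i]->render();
                queues[g].clear();
            }
        }

    private:
        std::vector<Renderable*> queues[GROUP_COUNT];
    };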

The basic setup is complete, and I’ve moved all my existing renderable objects over to the new system. The system is designed to be extensible, and new groups can be slotted in to suit the needs of the application. To test this out, I added in support for some basic deferred-style rendering that stores depth, color and normals from the scene in a G-buffer and then displays them in the post render group:
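The G-buffer setup behind this is roughly the following, a sketch using the EXT framebuffer object path with placeholder sizes and formats:

    #include <OpenGL/gl.h> // Mac headers; use GLEW or similar elsewhere

    GLuint fbo = 0, targets[3], depthTex = 0;

    // Three color targets (color, position, normal) plus a depth texture,
    // all attached to one FBO so the scene is written in a single pass.
    void createGBuffer(int width, int height)
    {
        glGenFramebuffersEXT(1, &fbo);
        glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fbo);

        glGenTextures(3, targets);
        for (int i = 0; i < 3; ++i) {
            glBindTexture(GL_TEXTURE_2D, targets[i]);
            glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA16F_ARB, width, height, 0,
                         GL_RGBA, GL_FLOAT, 0);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
            glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT,
                                      GL_COLOR_ATTACHMENT0_EXT + i,
                                      GL_TEXTURE_2D, targets[i], 0);
        }

        // Depth attachment.
        glGenTextures(1, &depthTex);
        glBindTexture(GL_TEXTURE_2D, depthTex);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT24, width, height, 0,
                     GL_DEPTH_COMPONENT, GL_FLOAT, 0);
        glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_DEPTH_ATTACHMENT_EXT,
                                  GL_TEXTURE_2D, depthTex, 0);

        // Route fragment shader outputs to all three color attachments.
        GLenum buffers[3] = { GL_COLOR_ATTACHMENT0_EXT,
                              GL_COLOR_ATTACHMENT1_EXT,
                              GL_COLOR_ATTACHMENT2_EXT };
        glDrawBuffers(3, buffers);
    }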

Deferred rendering is a feature I’d like to support on capable platforms, so I’m definitely going to be working on this more in the future.

T

