I’ve started to work on fleshing out a proper set of mesh classes for handling mesh instancing, animation and whatnot. I scrapped my old .obj loader and wrote a new one that’s better aligned with the current codebase. Here are a few shots of the space scene running with different meshes:
Given that I do my modeling in Blender 3D, I felt obligated to include Suzanne, a.k.a. the “Blender monkey,” as one of my test objects. I haven’t set up trimesh collisions for Bullet yet, so Suzanne is stuck with a (rather poorly sized) bounding box. The tori use a cylinder. I’m really happy with how the lighting is working on smoother objects though, and with the scene’s performance in general.
I haven’t had as much time to work on Ion over the last few days, however I put in a few solid hours of coding today and finished the first version of my light manager. It’s only hooked into the deferred renderer at the moment, since I still need to finish the code that’ll allow the forward renderer to pull a limited number of lights based on some input factors. At the moment lights can have a custom position, color, intensity and a variety of different fall-off equations. I’d like to add support for lens flares and cubemap masks later on, to add some extra visuals.
For the deferred renderer, the light values are stored in a pixel buffer which is passed into the final deferred shader. Unfortunately I’m doing something wrong with glTexSubImage2D and it isn’t working properly, so I haven’t been able to add light animation to the system quite yet.
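As a sketch of the kind of packing involved (the struct and function names here are hypothetical, not Ion’s actual code): each light can occupy two RGBA32F texels, position + intensity in the first and color + fall-off selector in the second, and the resulting float array is what gets handed to glTexSubImage2D each frame.

```cpp
#include <vector>
#include <cassert>

// Hypothetical light record; field names are illustrative only.
struct Light {
    float pos[3];
    float intensity;
    float color[3];
    float falloff;   // selects one of the fall-off equations in the shader
};

// Pack lights into a tight RGBA float buffer: two texels per light.
// The result is what would be uploaded with something like
//   glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 2 * lights.size(), 1,
//                   GL_RGBA, GL_FLOAT, buffer.data());
// Common gotchas with that call: the format/type must match the texture's
// internal format, and GL_UNPACK_ALIGNMENT must suit the row size.
std::vector<float> packLights(const std::vector<Light>& lights) {
    std::vector<float> buffer;
    buffer.reserve(lights.size() * 8);
    for (const Light& l : lights) {
        buffer.insert(buffer.end(), {l.pos[0], l.pos[1], l.pos[2], l.intensity});
        buffer.insert(buffer.end(), {l.color[0], l.color[1], l.color[2], l.falloff});
    }
    return buffer;
}
```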
The scene setup I’m using to test things out has 7 lights in it: a white light at the center, four colored lights at the four sides of the cube stack, and two small, bright lights (pink and cyan) directly next to the stack. I set up the scene in a 0-gravity space environment as well, so the effects of the lighting would be more obvious. Here are a handful of screenshots of the test scene:
I also recorded a video, however QuickTime dropped the FPS down a bit. In the video each click adds another 6x6x6 group of physics cubes; at the end of the recording there are several thousand floating around. At one point there’s a bit of visual lag since I added several groups in quick succession, and the physics and render loop are tied together. Anyways, here’s the video: http://vimeo.com/28528048.
I started working on implementing a proper render manager today. The scene graph will handle culling and checks to make sure an object should be rendered, at which point it will submit the object to the render manager’s queue. The queue is broken into several groups, such as Sky, Terrain, Mesh, Translucent and Post, based on what the object is. The Sky group, which will contain the skybox itself and any star fields or sun objects, will always be rendered first. Terrain is rendered second, then meshes, then translucent objects and then finally any post effects are applied.
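The ordering described above can be sketched as a queue of tagged items sorted by group before drawing (names here are illustrative, not the engine’s actual classes):

```cpp
#include <vector>
#include <algorithm>
#include <cassert>

// Groups in draw order: Sky always first, Post always last.
enum RenderGroup { Sky = 0, Terrain, Mesh, Translucent, Post };

struct RenderItem {
    RenderGroup group;
    int objectId;  // stand-in for a pointer to a renderable object
};

// Sort the submitted (already-culled) objects by group. A *stable* sort
// preserves submission order within each group, which matters for things
// like translucent objects that were submitted back-to-front.
void sortQueue(std::vector<RenderItem>& queue) {
    std::stable_sort(queue.begin(), queue.end(),
                     [](const RenderItem& a, const RenderItem& b) {
                         return a.group < b.group;
                     });
}
```

New groups slot in by simply extending the enum between existing values.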
The basic setup is complete, and I’ve moved all my existing renderable objects over to the new system. The system is designed to be extensible and new groups can be slotted in to suit the needs of the application. To test this out, I added in support for some basic deferred-style rendering that stores depth, color and normals from the scene to a gbuffer and then displays them in the post render group:
Deferred rendering is a feature I’d like to support on capable platforms, so I’m definitely going to be working this more in the future.
I finished porting my engine to Mac earlier this morning. Only a small amount of work was required as most of the engine code is already platform agnostic. There were a few iOS-specific calls in the resource manager, such as use of UIImage, but changing them to OS X equivalents wasn’t a big deal. Those areas are in the bug tracker for refactoring though, especially for when the time comes to start putting out Windows versions. The other change was to use GLUT instead of the iPhone’s EAGL to actually display the OpenGL context.
To test things out, I ran an updated Server Build and connected to it with my iPhone, Mac and iPad simulator:
Running the whole simulation locally is also possible, and fully functional. Just one more screenshot for good measure, this time with the physics running on the client:
From here I plan to start working on a Windows client and server build as well. Stay tuned!
I’ve been doing a lot of optimizations recently, including tracking down a few memory leaks and increasing VertexBuffer performance. To put it numerically, I was only able to simulate ~64 physics cubes on the iPhone pre-optimization, but now I can have close to 300 active cubes and still maintain a solid frame rate. At one point the bottleneck was actually my rendering code, but I fixed the issue and now Bullet’s speed is the limiting factor. The Simulator still outperforms the device a bit in this case, but that’s to be expected.
With the scene chugging along quite well, I decided to add accelerometer input to the mix. I’m using the accelerometer to set the gravity vector for the scene, so flipping the phone will flip the gravity. The only issue so far is that Bullet will not “wake up” sleeping physics objects when the gravity vector changes, so only active objects are affected by the accelerometer. It’s still pretty cool, so I decided to put up a video:
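A minimal sketch of the accelerometer-to-gravity mapping (the function and struct names are hypothetical): normalize the raw reading so only the tilt direction matters, then scale to standard gravity. After feeding the result to btDynamicsWorld::setGravity(), the sleeping-body issue can be worked around by iterating the world’s rigid bodies and calling btRigidBody::activate(true) on each, since a gravity change alone doesn’t reactivate deactivated bodies.

```cpp
#include <cmath>
#include <cassert>

struct Vec3 { float x, y, z; };

// Map a raw accelerometer reading (in g's) to a world gravity vector.
Vec3 gravityFromAccelerometer(Vec3 accel, float g = 9.8f) {
    float len = std::sqrt(accel.x * accel.x +
                          accel.y * accel.y +
                          accel.z * accel.z);
    if (len < 1e-6f) return {0.0f, -g, 0.0f};  // fall back to plain "down"
    // Only the direction of tilt matters; scale to standard gravity.
    return {accel.x / len * g, accel.y / len * g, accel.z / len * g};
}
```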
I’ve also uploaded a few videos from the Simulator showing the progress of the physics performance:
http://vimeo.com/27829407 – Poor performance
http://vimeo.com/27872952 – First pass at optimization
http://vimeo.com/27921243 – Fixed the rendering bug. 😉
Also, on another note I’ve decided to drop the “OpenGL ES Adventure” from my engine posts. It’s no longer just a rendering engine project, so I don’t think it’s suitable anymore. I still haven’t decided on a name for it yet, but when I do that’ll probably start popping up in the post titles instead.
I started working on writing a frustum culler (a thing that makes sure only objects in view are rendered) yesterday. I figured the hardest part would be the math for setting up the frustum shape, as it does require some linear algebra foo. It turned out to be much easier than I thought, as the ViewProjection matrix happens to have everything needed already in it. The frustum culling checks themselves were also pretty straightforward.
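The extraction alluded to here is the standard trick of combining the fourth row of the ViewProjection matrix with each of the other rows; a sketch, assuming OpenGL-style column-major storage:

```cpp
#include <array>
#include <cassert>

// Plane as (a, b, c, d) with the inside half-space satisfying
// a*x + b*y + c*z + d >= 0.
struct Plane { float a, b, c, d; };

// Extract the six frustum planes from a combined ViewProjection matrix,
// stored column-major (m[col * 4 + row]). Left/right come from row 4 +/- row 1,
// bottom/top from row 4 +/- row 2, near/far from row 4 +/- row 3.
std::array<Plane, 6> extractFrustum(const float m[16]) {
    auto row = [&](int r, int c) { return m[c * 4 + r]; };
    std::array<Plane, 6> p;
    for (int i = 0; i < 3; ++i) {
        p[i * 2]     = { row(3,0) + row(i,0), row(3,1) + row(i,1),
                         row(3,2) + row(i,2), row(3,3) + row(i,3) };
        p[i * 2 + 1] = { row(3,0) - row(i,0), row(3,1) - row(i,1),
                         row(3,2) - row(i,2), row(3,3) - row(i,3) };
    }
    return p;
}

// A point is inside the frustum if it sits on the positive side of all planes.
// (For sphere tests, compare each distance against -radius instead of 0.)
bool pointInFrustum(const std::array<Plane, 6>& planes,
                    float x, float y, float z) {
    for (const Plane& pl : planes)
        if (pl.a * x + pl.b * y + pl.c * z + pl.d < 0.0f) return false;
    return true;
}
```

Normalizing the planes is only needed when the actual distances matter (e.g. sphere tests); for pure point-in/out checks the signs are enough.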
Everything was good until I ran the iPhone app. My scene of 64 physics cubes would initially load correctly, but as soon as I started to pan the camera groups of cubes would be randomly culled off the screen. This was at around 10:00pm. Since my ViewProjection matrix had worked flawlessly for all other rendering tasks, I assumed the issue must have been with the way the frustum was being generated. By around 12:00am I had tried every possible combination of creating my frustum, including treating the matrix as left/right-handed, row/column major and turning on and off normalization of the output planes. Nada. Rendering the planes in Graphing Calculator, minus the far plane, looked right:
So if the frustum construction code is right, then the ViewProjection matrix must somehow be wrong? Or maybe the frustum checks? Enter several more hours of no success, and an acceptance of defeat at 2:30 AM.
This morning I realized that my cubes’ position variables were not being set. The physics update code was copying the new positions directly into their mTransformMatrix variables and leaving the mPosition value at its starting position. The culler uses the position variable to determine where the object is. Dammit.
Needless to say, the frustum culler is now working. In fact, it’s the frustum culler from 10:00pm last night that works correctly. 😡
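The kind of fix involved is trivial once spotted: pull the translation back out of the transform whenever physics writes one (a sketch; the class shape here is hypothetical, only the member names come from the post):

```cpp
#include <cassert>

// In a column-major 4x4 matrix, the translation lives in elements 12-14.
struct SceneObject {
    float mTransformMatrix[16];
    float mPosition[3];

    void setTransform(const float m[16]) {
        for (int i = 0; i < 16; ++i) mTransformMatrix[i] = m[i];
        // The missing step: keep the cached position in sync so the
        // frustum culler sees where the object actually is.
        mPosition[0] = m[12];
        mPosition[1] = m[13];
        mPosition[2] = m[14];
    }
};
```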
I’ve been a bit slack lately with blog updates, mainly because I’ve spent the last few days relaxing/recovering from my wisdom tooth surgery. The resulting discomfort made it surprisingly hard to stay focused long enough to do any programming. I’m almost back to normal now though and was finally able to concentrate on getting some work done today. I haven’t started on my UI system yet as there’s still some planning I’d like to do, so I decided to play around with the Bullet Physics library to see how it would work with the iPhone.
I’m quite fond of game physics, as my first large programming project was writing a complete PhysX implementation for Torque Game Engine Advanced. Unfortunately PhysX is closed-source, and since nVidia doesn’t offer an iPhone version it’s not an option. There are a few other good libraries out there, e.g. Newton and ODE, but I went with Bullet for no real compelling reason. I’m still not completely sold on it being the best solution though, as the API is quite complex and there’s a ton of stuff I won’t need. Something lightweight like Tokamak or True Axis might be better, especially for mobile development. But I digress, back to Bullet!
I wanted to compile Bullet as a Framework to make it easy to include in the Xcode project. Unfortunately Bullet is only set up to compile with MS Visual Studio by default, so I had to do some fiddling with (and learning of) CMake to generate an Xcode project to compile the Frameworks. Of course, once they were compiled I discovered that the target CPU architecture was wrong and the Frameworks wouldn’t work with the iPhone. I’ve actually yet to figure out how to compile a Framework targeted for the iPhone, so for the time being I’ve just dumped the necessary Bullet source files right into my GL Engine project. It’s not the solution I’d hoped for, but it did allow me to finally get going and actually use the damn thing. 😛
I haven’t done anything overly fancy yet, but Bullet did integrate into my engine without a lot of trouble. I wrote a simple singleton Physics class, which is essentially an implementation of the Bullet HelloWorld program. SceneObjects can then plug into that class and add/manage a Bullet rigid body. Right now I’ve only used it to create a ground plane and a few spheres though.
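A skeleton of that singleton shape (hypothetical names; the real class would own Bullet’s objects, i.e. a btDefaultCollisionConfiguration, btCollisionDispatcher, btDbvtBroadphase, btSequentialImpulseConstraintSolver and the btDiscreteDynamicsWorld, mirroring Bullet’s HelloWorld sample):

```cpp
#include <cassert>

class Physics {
public:
    // Meyers-style singleton: constructed once, on first use.
    static Physics& instance() {
        static Physics sInstance;
        return sInstance;
    }
    // SceneObjects would register/unregister their btRigidBody here and the
    // main loop would call step() each frame, which in the real class wraps
    // world->stepSimulation(dt).
    void step(float dt) { mTime += dt; }
    float time() const { return mTime; }

private:
    Physics() : mTime(0.0f) {}
    Physics(const Physics&);             // non-copyable
    Physics& operator=(const Physics&);  // non-assignable
    float mTime;
};
```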
I intend to work with physics quite a bit more in the future, and possibly even use a physics library to handle collision detection in the engine. That’s all for now though.