
Optimizations and Physics

I’ve been doing a lot of optimization work recently, including tracking down a few memory leaks and improving VertexBuffer performance. To put it numerically: pre-optimization I could only simulate ~64 physics cubes on the iPhone, but now I can have close to 300 active cubes and still maintain a solid frame rate. At one point the bottleneck was actually my rendering code, but I fixed that issue and now Bullet’s speed is the limiting factor. The Simulator still outperforms the device a bit in this case, but that’s to be expected.

With the scene chugging along quite well, I decided to add accelerometer input to the mix. I’m using the accelerometer to set the gravity vector for the scene, so flipping the phone will flip the gravity. The only issue so far is that Bullet will not “wake up” sleeping physics objects when the gravity vector changes, so only active objects are affected by the accelerometer. It’s still pretty cool, so I decided to put up a video:
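For reference, feeding the accelerometer into Bullet looks roughly like this. This is a minimal sketch, not my engine’s actual code: the Physics class and method name are placeholders, and the loop shows one possible workaround for the sleeping-body issue by manually reactivating every rigid body whenever gravity changes:

```cpp
#include <btBulletDynamicsCommon.h>

// Hypothetical sketch: called from the accelerometer callback; "mWorld"
// is assumed to be the btDiscreteDynamicsWorld owned by a Physics class.
void Physics::setGravityFromAccelerometer(float x, float y, float z)
{
    // Map the device acceleration to a world-space gravity vector.
    mWorld->setGravity(btVector3(x, y, z) * 9.8f);

    // Bullet does not wake sleeping bodies on a gravity change, so one
    // workaround is to reactivate everything manually; otherwise only
    // already-active objects respond to the new gravity.
    for (int i = mWorld->getNumCollisionObjects() - 1; i >= 0; --i) {
        btCollisionObject* obj = mWorld->getCollisionObjectArray()[i];
        if (btRigidBody* body = btRigidBody::upcast(obj))
            body->activate(true);
    }
}
```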


I’ve also uploaded a few videos from the Simulator showing the progress of the physics performance:

http://vimeo.com/27829407 – Poor performance

http://vimeo.com/27872952 – First pass at optimization

http://vimeo.com/27921243 – Fixed the rendering bug. 😉

Also, on another note I’ve decided to drop the “OpenGL ES Adventure” from my engine posts. It’s no longer just a rendering engine project, so I don’t think it’s suitable anymore. I still haven’t decided on a name for it yet, but when I do that’ll probably start popping up in the post titles instead.


An OpenGL ES Adventure – Part 6

I’ve been a bit slack lately with blog updates, mainly because I’ve spent the last few days relaxing/recovering from my wisdom tooth surgery. The resulting discomfort made it surprisingly hard to stay focused long enough to do any programming. I’m almost back to normal now though and was finally able to concentrate on getting some work done today. I haven’t started on my UI system yet as there’s still some planning I’d like to do, so I decided to play around with the Bullet Physics library to see how it would work with the iPhone.

I’m quite fond of game physics, as my first large programming project was writing a complete PhysX implementation for Torque Game Engine Advanced. Unfortunately PhysX is closed-source, and since nVidia doesn’t offer an iPhone version it’s not an option. There are a few other good libraries out there, e.g. Newton and ODE, but I went with Bullet for no particularly compelling reason. I’m still not completely sold on it being the best solution, as the API is quite complex and there’s a ton of functionality I won’t need. Something lightweight like Tokamak or True Axis might be better, especially for mobile development. But I digress, back to Bullet!

I wanted to compile Bullet as a Framework to make it easy to include in the Xcode project. Unfortunately Bullet is only set up to compile with MS Visual Studio by default, so I had to do some fiddling with (and learning of) CMake to generate an Xcode project that could build the Frameworks. Of course, once they were compiled I discovered that the target CPU architecture was wrong and the Frameworks wouldn’t work with the iPhone. I’ve actually yet to figure out how to compile a Framework targeted for the iPhone, so for the time being I’ve just dumped the necessary Bullet source files right into my GL Engine project. It’s not the solution I’d hoped for, but it did allow me to finally get going and actually use the damn thing. 😛

I haven’t done anything overly fancy yet, but Bullet did integrate into my engine without a lot of trouble. I wrote a simple singleton Physics class, which is essentially an implementation of the Bullet HelloWorld program. SceneObjects can then plug into that class and add/manage a Bullet rigid body. Right now I’ve only used it to create a ground plane and a few spheres though.
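The skeleton of that singleton is essentially Bullet’s HelloWorld setup wrapped in a class. This is a sketch under that assumption; the class and method names are illustrative placeholders, not my engine’s actual API:

```cpp
#include <btBulletDynamicsCommon.h>

// Minimal singleton wrapper around Bullet's HelloWorld world setup.
// SceneObjects would grab instance() to add/manage their rigid bodies.
class Physics {
public:
    static Physics& instance() {
        static Physics sInstance;   // created on first use
        return sInstance;
    }

    btDiscreteDynamicsWorld* world() { return mWorld; }

    // Advance the simulation; 10 is the usual max-substeps default.
    void step(float dt) { mWorld->stepSimulation(dt, 10); }

private:
    Physics() {
        mConfig     = new btDefaultCollisionConfiguration();
        mDispatcher = new btCollisionDispatcher(mConfig);
        mBroadphase = new btDbvtBroadphase();
        mSolver     = new btSequentialImpulseConstraintSolver();
        mWorld      = new btDiscreteDynamicsWorld(mDispatcher, mBroadphase,
                                                  mSolver, mConfig);
        mWorld->setGravity(btVector3(0, -9.8f, 0));
    }
    ~Physics() {
        // Tear down in reverse order of creation.
        delete mWorld;
        delete mSolver;
        delete mBroadphase;
        delete mDispatcher;
        delete mConfig;
    }

    btDefaultCollisionConfiguration*     mConfig;
    btCollisionDispatcher*               mDispatcher;
    btBroadphaseInterface*               mBroadphase;
    btSequentialImpulseConstraintSolver* mSolver;
    btDiscreteDynamicsWorld*             mWorld;
};
```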

I intend to work with physics quite a bit more in the future, and possibly even use a physics library to handle collision detection in the engine. That’s all for now though.


An OpenGL ES Adventure – Part 5

I started working on an input layer today. At the moment it’s more or less tied to the iPhone’s UIKit/UIGestureRecognizers, but eventually I plan to write a pretty robust system that will allow any input type (single tap, pan gesture, pinch, mouse click, keyboard, etc) to be mapped to a specific action in the engine. The available event types will, of course, be configured based on the target platform.

To test the initial progress on the input layer, I made some small changes to my CameraObject class to allow changes in yaw and pitch. Attached below is a short video showing these new additions, with single-tap events paired to a SceneObject:


For my next post, I hope to have some basic UI work done as well, so I can test out some of the interface ideas I posted about earlier. That’s all for now though.


An OpenGL ES Adventure – Part 4

I started work on a simple texture manager today, to add a bit of life to meshes. To test it I wrote a basic skybox class that takes 6 images and renders them to a cube. The attached screenshots are from an iPhone, since anti-aliasing (which I also implemented) doesn’t seem to work on the Simulator:

I’m also nearly finished moving all of my GL calls into a new Abstract Graphics Layer (AGL). The purpose of this is to make the code more modular, allowing for additional graphics libraries to be added without touching the rest of the engine code. Instead of using the following code:

glGenBuffers(1, &mBufferID);                       // create the VBO
glBindBuffer(GL_ARRAY_BUFFER, mBufferID);          // bind it for setup
glBufferData(GL_ARRAY_BUFFER, sizeof(mVertexArray), mVertexArray, GL_STATIC_DRAW); // upload vertex data
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, sizeof(VertexPN), 0); // position attribute
glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, sizeof(VertexPN), (GLvoid*) (sizeof(F32) * 3)); // normal attribute
glDrawArrays(GL_TRIANGLE_STRIP, 0, mVertexCount);  // submit the draw

I can do the following:

mVertexBuffer = ResourceManager::getVertexBuffer("Sphere_50", eBufferStatic);

The resource manager will automatically return a pointer to the correct VertexBuffer subclass based on global configuration; in this case it returns a GLVertexBuffer.
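The shape of that abstraction is roughly the following. VertexBuffer, GLVertexBuffer and the manager appear in the post; everything else here (method names, the backend enum, the factory function) is my own hypothetical filling-in of the pattern:

```cpp
#include <OpenGLES/ES2/gl.h>
#include <cstddef>

// Abstract interface: the rest of the engine only ever sees this type.
class VertexBuffer {
public:
    virtual ~VertexBuffer() {}
    virtual void upload(const void* data, size_t size) = 0;
};

// OpenGL ES implementation; another backend could slot in later
// without touching the rest of the engine code.
class GLVertexBuffer : public VertexBuffer {
public:
    void upload(const void* data, size_t size) override {
        glGenBuffers(1, &mBufferID);
        glBindBuffer(GL_ARRAY_BUFFER, mBufferID);
        glBufferData(GL_ARRAY_BUFFER, size, data, GL_STATIC_DRAW);
    }
private:
    GLuint mBufferID;
};

enum GraphicsAPI { eOpenGLES /*, other backends later */ };
extern GraphicsAPI gGraphicsAPI;  // set once from global configuration

// The manager picks the concrete subclass behind the abstract interface.
VertexBuffer* createVertexBuffer() {
    switch (gGraphicsAPI) {
        case eOpenGLES: return new GLVertexBuffer();
    }
    return 0;
}
```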

Finally, given that my codebase is starting to get fairly large, I’ve decided to start using MantisBT to maintain to-do lists and keep track of bugs. I’m running a local WAMP server on my desktop, so I can quickly access the bug tracker any time I find something that I need to come back to later. Although it’s team-oriented, Mantis has already been very helpful in keeping on top of things even for a single developer.

That’s all for the time being, stay tuned!


An OpenGL ES Adventure – Part 3

Resource management is key regardless of what platform you’re developing with, and is especially important on a mobile platform like the iPhone where memory is limited. Consequently, I’ve been working on a resource manager for my engine.

The resource manager is designed to handle not only file-based resources, such as images, shaders, models and other art assets, but also GL resources such as vertex buffers and compiled shaders.

Resources are split into two different class trees: InternalResource objects and ExternalResource objects. When a resource is loaded for the first time a key-value pair is stored in a dictionary, where the key is a unique identifier for that specific resource. If the same resource is needed again, it is loaded from the dictionary. For example, the first time a mesh file named “FancyCar.obj” is loaded, it is parsed by an ObjMeshParser, which produces a vertex/index buffer pair for rendering. The vertex buffer is stored under the name “int_vbuff_fancycarobj.” Any subsequent attempts to load that .obj file return the existing resources rather than re-parsing the file.

On another note, as I mentioned in the last post I’m looking for an ideal model format. The Obj format is quick and easy to use, however it lacks support for animations. For the time being it’s sufficient, but I would like to move to something more flexible later on. I did look into the .blend format, but it includes too much extra data such as cameras, lights and rendering settings. Collada does sound like it would work well, but I’ve never used it or done any XML parsing before. The other option would be to design a model format that’s compatible with my engine. This would make it easier to have precise control over the data that’s included in the model, and would also allow for storing compatible material properties directly in the files. It would require additional coding and research though, so I’ll have to think on this one.


An OpenGL ES Adventure – Part 2

As I mentioned in my previous post on this topic, I’ve been working on a scene graph, render manager and material system for my GL engine. Some of the stuff has induced a bit of head-scratching, but on the whole things are coming together pretty well. I thought I’d share a bit about what I have in place now, and a few things I hope to complete soon.

My current scene graph implementation is probably not very optimal, but it works for what I need now. It’s essentially just a hierarchy of SceneNode objects and their subclasses. Each SceneNode can have an arbitrary number of children, which are stored in a custom dynamic array class. A SceneNode is the highest-level class; it’s not a renderable object, but is used for grouping objects. It’ll be able to act as a “folder” in the scene graph. The SceneObject, which is a direct subclass of SceneNode, is the base class for all renderable objects. Sky, water, meshes, etc. will all be in the class tree of SceneObject.

The second SceneNode subclass I have written at the moment is a SceneCell, which is the physical equivalent of a SceneNode. It groups SceneObjects into renderable cells, which allows for quick culling. Cell_0 is always visible, as it contains basic environment objects like the sky, terrain, large bodies of water, etc. From that point on, a maximum of 4 cells will be visible at any time depending on what’s in view. The cell the camera is in will obviously always be rendered, along with any adjacent cells that are visible. The contents of the cells will be further culled based on the view. The maximum occurs at the corner case: for example, if the camera is in the yellow cell, then the green, red and blue cells will have potential renderable objects in them:

When the scene is traversed, if an object is deemed fit for drawing it is added to the appropriate group in the RenderManager. This set of classes will handle materials, batching and the actual GL calls to render the frame. This code is really in its infancy at the moment.

Anyways, that’s all for now. Hopefully I’ll have some more attractive in-engine screenshots for the next blog post on this topic. I still have to write a mesh loader, since I’m just using const arrays of vertices for all my objects at the moment. I’m considering using Blender’s .blend format for all my models, since it contains all the data I’ll need. I’ll have to look into that in a bit more detail though. I’ve also heard Collada is a good format.


An OpenGL ES Adventure – Part 1

I decided to take a stab at writing an iPhone game engine in OpenGL ES, with the intent of gaining a better understanding of GL and graphics programming in general. I’ve already done a bit of work with DirectX and general 3D graphics programming via Torque Game Engine Advanced, but this is my first attempt at writing a rendering engine from scratch. I’ve been working on this for around two weeks now, including doing a bit of research into GL itself and reading some 3D math primers. I should also mention that my end goal is to support both GL ES 1.0 and 2.0, although at the moment I’m only working with ES 2.0.

As it is now, I’ve completed a basic math library and started working on scene management and materials. The Matrix class was largely based on the one included in the Oolong Engine for iOS, as my knowledge of matrices is limited. I’ve also started to abstract as many of the rendering calls as possible to make it easier to add ES 1.0 support down the road.

There aren’t a lot of visuals to show right now as I’m still at the “single spinning object” stage as far as my scene goes, but here we are:

Both objects are running a basic lighting shader with a single light in the scene. That’s really about all for now. I plan on posting up some of my engine structure documents later, once I move them into something digital (they’re notes on paper at the moment).

