I always preach that you should use the right tool for the right job. 2D is great if you want to control every last pixel on the screen, but it falls far short of the flexibility of a 3D engine. Plenty of games out there take advantage of both technologies. They generally do this by using 3D Studio Max or even hand-painted art to create static backgrounds, and use 3D to draw dynamic objects and characters.
This is one of my all-time favorite games. Its mix of 2D and 3D display technology is overshadowed only by the brilliant design, especially the dialogue. If you haven't played it, you should add it to your to-do list.
Look at Figure 2.8 and you can tell immediately that the sets were created and pre-rendered in something like Max or perhaps Softimage. This game came out in 1998, when it was impossible to render these environments in real time with such accurate lighting effects. The characters achieved a very interesting look with very few polygons, which enabled Grim Fandango to run on computers without hardware acceleration.
Figure 2.8: Screenshots of Grim Fandango by LucasArts.
The 3D characters could walk behind objects in the pre-rendered sets. This could be achieved as easily as with layered sprites, but with today's hardware a pre-rendered depth buffer is a better choice. Modern art tools like 3D Studio Max give artists the option of rendering the depth buffer along with the RGB channels, so generating this data is a piece of cake.
The characters reacted to things in their environment. The main character, Manny Calavera, would turn his head to look at interesting objects, and he would stop walking when he ran into a solid object. LucasArts probably created a custom tool to create and maintain this data, but anyone doing the same thing today could simply use 3D Studio Max. Dummy objects and user data can go a long way toward providing everything your game needs to make Manny take a longing glance at his would-be girlfriend, Mercedes Colomar.
Hybrid 2D and 3D technologies can cause quite a bit of trouble for video drivers. Most drivers are written with the assumption that graphics applications will be all 2D or all 3D and nothing in between. Mixing 2D and 3D calls forces the card to flush any pending operations, such as drawing a set of polygons, which causes performance to drop significantly.
Graphics performance is optimized when the CPU and the GPU are both working at full speed 100% of the time. Imagine two railroad workers hammering in a spike, each striking in alternating rhythm with the other. If their timing is perfect, they can cut the time it takes to perform the task in half. If their timing is exactly wrong, their work comes to a complete halt, since they constantly hit each other instead of their target. The same is true for optimizing your graphics performance.
An excellent graphics programmer can write the graphics pipeline to minimize stalling. The CPU and GPU each work at nearly 100% efficiency on their own tasks: as soon as the GPU finishes drawing something, the CPU has already provided it with the next batch, and it never skips a beat. This beautiful scenario is utterly destroyed if you mix 2D and 3D graphics calls, which you'll need to do if 2D and 3D objects sort on top of each other on the same screen. Some graphics hardware and video drivers misbehave if you attempt this with hardware acceleration. You could see corrupt graphics, terrible performance, and even the dreaded blue screen of death.
There are a few solutions to this problem, given a few assumptions in your game design.
Draw everything in 3D: A 2D sprite is just a degenerate case of an arbitrary 3D polygon, one that happens to lie parallel to the screen and skips the perspective transformation.
Draw 2D and 3D in different passes: Draw all your 2D sprites at once, then your 3D polygons, and blend the two buffers.
The only problem with moving to a completely 3D engine is that your game will require a 3D video card. This might be fine if you are making the latest iteration of Unreal Tournament, but it's probably not such a good idea if you are making Microsoft Casino.
A Tale from the Pixel Mines
Microsoft clearly didn't understand this concept when they proposed the API for DirectX 8, which dropped the DirectDraw API completely in favor of a "unified" graphics API. When I used DirectX 8 for the first time and saw the error message in my compiler, "Warning: Undefined symbol BltFast," I thought that they must have renamed it. I searched for the method in the DirectX help. Imagine my horror when I found that not only was BltFast missing, but so was the entire DirectDraw interface! I sent an email to a friend of mine on the DirectX team asking about the missing API. The conversation went something like this:
Mr.Mike: So, let me make sure I understand this completely. You nuked DirectDraw from DX8?
DirectX Buddy: Yeah! Isn't it great?
Mr.Mike: How do I draw my game backgrounds?
DirectX Buddy: Oh that's easy. Just draw them as 3D polygons.
Mr.Mike: Won't that be terribly slow?
DirectX Buddy: Not at all. Hardware acceleration makes drawing polygons really fast.
Mr.Mike: Yeah...if I make the min spec require a 3D video card.
DirectX Buddy: Everyone loves 3D video cards! They're everywhere! No one uses 2D only video cards anymore.
Mr.Mike: Oh yeah? Did you happen to notice that four of the top ten PC games last Christmas were 2D games like Hoyle Casino, Who Wants to Be a Millionaire, and The Sims?
DirectX Buddy: Well..., uh......
...and the coup de grâce...
Mr.Mike: Did you realize that the game I'm currently working on, Microsoft Casino, has a min spec that doesn't include 3D hardware?
DirectX Buddy: I'll get back to you.
It turned out that DirectDraw was miraculously back in DirectX 9. Go figure.
Drawing 2D and 3D in different passes actually works quite well. If your 3D polygons all sort as a single layer relative to the 2D sprites, you can render all the 3D polygons to an off-screen buffer and blit the entire thing as one sprite. If you need to intersort 2D sprites and 3D polygons, use a depth buffer: associate a depth value with each layer of 2D sprites, and as each sprite is drawn, make the corresponding entries in the depth buffer. Use that depth buffer when you send the 3D polygons to the GPU, and the polygons will sort correctly with the 2D sprites, just as if the sprites had been drawn as polygons.