Ever since I started playing video games on my beloved Atari 2600 Jr., I have never been content to simply play video games. Playing them is fine and all, but with almost every game I have ever played, I know I've thought at some point, "how does that work?" How do the animations connect with the actions I've orchestrated via the controller or mouse and keyboard? What went through the art director's mind when an awesome visual sequence was created--or perhaps even a bad one?

Since inquiring minds demand to know, I decided to get some answers and share them with my fellow Shackers as well. I had a chat with Mohammad Davoudian, CEO and Creative Director at Brain Zoo Studios, a studio that has handled projects for companies such as Activision, LucasArts, Microsoft, Midway Games, and the Jim Henson Company, to learn more about how video game animation works.
Shack: What kind of background, educational or otherwise, would one need in order to attain a position as an animator at a video game company?
Mohammad Davoudian: Well, Brain Zoo Studios is an animation studio with a long history of animating trailers and cut scenes for the video game industry. But I think across the board, talent is talent, whatever field you are in, and that's what employers are going to be looking for. Raw talent is what we look for more than anything else at Brain Zoo - anyone can recreate models or follow a lighting technique, but adding your own input and thinking about new ways of doing things is the most valuable trait any animator can have.
Shack: On average, how many different animations are there in a single video game these days?
Mohammad Davoudian: That definitely depends on the genre or game. For example, we've done projects like Steel Horizon from Konami that only required half a minute of animation for the opening scene, and on the other hand we've worked on titles like Darkwatch and Freaky Flyers that required different styles and different kinds of scenes, from in-game cut scenes to full-fledged trailers you'd see in a movie theater.
Shack: What are the benefits of using something like motion capturing ("MOCAP") over more "traditional" animation techniques? After all, most technology seems to have progressed to the point that MOCAP isn't really necessary any longer to achieve high degrees of realism.
Mohammad Davoudian: That's a good question, but we definitely still use MOCAP. (Mocap, for those that don't know, is when we strategically place digital sensors on an object, like a human, and "record" their movements, either using companies like House of Moves or with data that comes directly from the client. From there we feed that data into software that creates a 3D wireframe image of the object, and from there, we add the necessary layers and details to make it into what we want the end result to look like.)

MOCAP gets you about seventy to eighty percent of the way to your final product; it provides the nuances that animators need in movement, so it saves a lot of time filling in the details. However, you still have to clean the data and re-time it in order to make it fit the exact position/timing you want/need.
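To make the "re-timing" step concrete, here is a minimal sketch of the idea: captured motion arrives as dense, evenly spaced samples, and fitting it to a scene means stretching or compressing the clip in time and resampling it. This is an illustrative toy (one scalar channel, linear resampling), not Brain Zoo's actual pipeline; real mocap tools work on full skeletons with many channels.

```python
def retime(samples, new_length):
    """Resample evenly spaced mocap samples to new_length frames,
    stretching or compressing the motion in time.

    samples: list of scalar values (one channel of mocap data).
    new_length: desired number of output frames (>= 1).
    """
    if new_length == 1 or len(samples) == 1:
        return [samples[0]]
    out = []
    for i in range(new_length):
        # Map the output frame back to a fractional index in the source clip.
        t = i * (len(samples) - 1) / (new_length - 1)
        lo = int(t)
        hi = min(lo + 1, len(samples) - 1)
        frac = t - lo
        # Linear blend between the two nearest source samples.
        out.append(samples[lo] * (1 - frac) + samples[hi] * frac)
    return out
```

For example, `retime([0.0, 1.0, 2.0], 5)` stretches a three-frame motion over five frames, keeping the same start and end poses.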
Shack: On average, what sort of work goes into creating a single animated character?
Mohammad Davoudian: Well, the first thing to start with is great character design. Look at Jericho from Darkwatch: you can tell immediately what he's thinking, which makes it easier to conceptualize his personality and behavior, and what you animate stems from there. The actual action that takes place is a combination of laying the movement down and painting the high-res model while concurrently storyboarding the actions.

Then, once we decide what the audio is going to be (talking, angry, etc.), we create a 3D character model in a 3D package (Maya, for instance). The polygon count (which determines how detailed your character will look) is decided based on how detailed the character needs to be--a close-up shot or a feature film character will require many more polygons than a handheld or smaller screen. As for the actual illustration, there are two different ways to illustrate: the first is the standard way, in Maya or a similar 3D package. But there's also ZBrush, which allows us to "paint on" detail to the character. Which one we pick will depend on which part we're animating.

From there you create the models and textures: starting with your wireframe image, you use a 3D program to add the texturing, shading, and so on. This is where we add cloth, metals, skin, or other applicable textures. As this is taking place, another department of the studio is setting up the "rigging," which will in a sense "drive" the models and environments we've built. This means we set the motion path, set the mouth path for the words or sounds the character is making, and so on. So it's all being put together at the same time, in different parts: while the high-res model is being completed in one part of the studio, another animator will take the low-resolution model and plug it into any MOCAP data or key framing, depending on the situation of the character.
At the end, it all renders out together based on the data we've plugged in: the high resolution model will meet the "rigging" and merge to make it one seamless moving image.
Shack: What about a single level environment?
Mohammad Davoudian: It's essentially the same process--you need to start with great design and end with an animator who understands the project and can get it done as efficiently as possible with great results.
Shack: Do the aforementioned steps differ from one form of technology (e.g., key frame and MOCAP) to another? If so, how?
Mohammad Davoudian: It's essentially the same process; it just comes down to re-timing the MOCAP data versus refining the guesswork of the key framing. With key framing, it's setting distinct animation points (point A to point B) and letting the computer fill in the work from there. The details have to be added in by the animator to make sure it all looks right, because sometimes computers miss a step or misinterpret what exactly should happen in the movement. With MOCAP, you get all of the details, but have to spend the time re-timing it for what you want in the end. Either way, it's just how you want to spend your time.
Shack: How do environments "connect?" For example, let's say I travel from room A to B in a given level. What goes into making this a seamless process?
Mohammad Davoudian: Well, in animation it's pre-rendered, so it's a little different. But the next-gen consoles have the power to render in the game. Eventually everything is going to be real-time, and you're able to create these giant environments using environment "fog" (files that load as your avatar approaches each change in environment). What you're seeing now is sort of akin to the start of "adaptive cinematics," where the cut scenes differ depending on the information that's being read by the engine about the status of the game/character from within the game. That way you're seeing a cut scene at one point in one game that may be different each time depending on the situation (low life points, high life points, etc.). Hence, "adaptive" cinematics.
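The environment-"fog" idea--assets for the next area loading as the avatar approaches a boundary--can be sketched as a simple distance check. This is a hypothetical toy (one axis, a made-up `LOAD_RADIUS`, synchronous flags standing in for asynchronous asset loads), not how any particular engine implements streaming.

```python
LOAD_RADIUS = 50.0  # start loading a zone when the avatar is this close

class Zone:
    """One streamable chunk of the level (e.g. a room's geometry/textures)."""
    def __init__(self, name, center_x):
        self.name = name
        self.center_x = center_x
        self.loaded = False

def update_streaming(zones, avatar_x):
    """Load zones near the avatar and unload the ones left behind,
    so room A connects to room B without a visible loading break."""
    for zone in zones:
        near = abs(zone.center_x - avatar_x) <= LOAD_RADIUS
        if near and not zone.loaded:
            zone.loaded = True   # in a real engine: kick off an async load
        elif not near and zone.loaded:
            zone.loaded = False  # free memory for zones far away
    return [z.name for z in zones if z.loaded]
```

Called every frame, this keeps the zone ahead of the player resident in memory before they reach it, which is what makes the room-to-room transition feel seamless.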
Shack: I'm always impressed with the type of graphical power that can be sucked out of older console hardware such as the PlayStation 2--especially when this same game is scheduled to release on more powerful platforms such as the Xbox 360 or PC. What goes into making a game graphically stunning for older and newer hardware?
Mohammad Davoudian: It depends on whether what we're doing applies to multiple platforms. If, for instance, it's in real-time, everything is created for the lowest common denominator--obviously a 360 has more power than a PS2. So when you develop animation for the specs of a PS2, you end up adding more on top of that for the more developed platform. You can't work backwards like that in animation without grossly affecting the cost.

With Steel Horizon's intro, it was pre-rendered, so it was a QuickTime film rendered both for PSP and large enough for a movie theater screen: it was just a matter of rendering it out differently for each platform (movie screen and PSP).
Shack: These days, it's quite hard to tell the difference between an in-game cutscene and a pre-rendered cinematic, given that technology has gotten so advanced. Which is the preferred method from an animation standpoint? Please elaborate.
Mohammad Davoudian: Our preferred method is pre-rendered because you have more control and can get the best from your characters (although the gamers may prefer real-time adaptive cinematics). With real-time cut scenes, you're limited to the technology you're working with. Plus, when it's pre-rendered, you can add so many effects in the animation for cloth, hair, etc. Those features are on a limited level right now with existing engine technology.
Shack: Given the high financial costs of producing and developing a video game, there must be some shortcuts animators can use to keep the process short and speedy. Tell us about some of these, if you can, please.
Mohammad Davoudian: The process for real-time vs. pre-rendered animation is the same; it's how the game engine reinterprets the animation curve that determines how much animation is lost (when in real time). Because it's filtered out, it gets down-res'd and you can lose a lot of detail.
Shack: Is there still room in today's 3D-driven world for 2D games and animation? Why or why not?
Mohammad Davoudian: Well, if you take a look at 3D animation today, it's thriving. Everyone said 3D animation would kill 2D and stop-motion (Tim Burton's films). As it applies to games, people haven't done 2D in a while, but I don't think it's a dead format; not everything needs to be 3D--everything would look the same that way.

We did the Lemony Snicket and Iron Phoenix cinematics, and those were done in 2D animation. The client wanted something stylistic, on multiple planes: more storybook than high-end 3D animation. It was more about the art, and we made it like a movie comic, which ended up working very well, especially on Lemony Snicket, because it's known for its great art (as well as story).
Shack: Thanks for your time! Anything else you'd like to add?
Mohammad Davoudian: Well, due to the massive changeover to next-gen, we're seeing an imbalance as it pertains to work in two industries: we have film animators coming over to games who don't understand the differences in resolutions, plus what works best on each of the different platforms. Then you have game industry people coming over to films who don't understand the change to higher resolutions. That makes finding a trustworthy studio hard to do sometimes. We do both, and have done both well for more than ten years.

I think that what we're seeing with the advance of games and the meshing of both industries is a lot of confusion about how certain projects should be produced. What makes a good animation studio is the ability to distinctly serve those different markets in the correct way. I think we do a good job at that; we have a great staff comprised of both film and games people, and I think that's a huge factor in our success.