Is Realtime Real? Part 1

Feb 1, 2005 12:00 PM, By S. D. Katz

Realtime Rendering for Games and Movies




Half-Life 2 is among the latest generation of video game titles that, largely because of breakthroughs in graphics board technology, provide a visual effects experience increasingly comparable to motion pictures.

They come at you from every side, and just as quickly your fully automatic weapon cuts them down. In the distance you see the muzzle flash and hear the late report of the mortar, and then come burning heat, darkness, and screaming. When you finally open your eyes for the last time, a crimson rain of disorganized limbs and torsos lands around you. Then you pass out.

Game Over

In the early '90s, when CGI was still new to the motion picture industry, futurists predicted that someday all rendering would be realtime and interactive virtual environments would replace movies. Fourteen years later, we're still not there, but neither are we talking hypothetically.

2004 marked the transition from a gaming era of blocky models and low-resolution surfaces to an era of detailed and richly textured experience. Consider two specific examples: Half-Life 2 (shipped in late 2004) and S.T.A.L.K.E.R. Shadow of Chernobyl (due to ship spring 2005, at press time). Both titles are among the first of a new generation of realistic games that are beginning to edge into the realm of motion picture visual effects. And that, for many Millimeter readers, changes the way we think about creating illusions.

This transition is largely attributable to the latest generation of graphics boards, which, aided by the main processor on your graphics workstation, enable more sophisticated realtime rendering. The realtime hardware players behind this revolution are ATI, 3Dlabs, and Nvidia; on the software side, the enablers are shading languages such as Microsoft's DirectX HLSL, the OpenGL Shading Language (GLSL) of the open OpenGL standard, and Nvidia's Cg. Developers Epic (makers of the Unreal engine) and Valve used this new hardware and complementary shading software in their next-generation game engines, and the rest of the industry is not far behind.

Programmability

We have come to expect raw graphics power to increase every few months, but beginning about three years ago, graphics boards also became smarter. Or, at least, they let artists work smarter by giving programmers access to the chip. In the past, GPUs (graphics processing units) were hard-coded in assembly language and limited to a fixed, relatively simple set of graphics routines. Artists had only a few rendering options compared to what Lightwave and 3ds Max artists have had at their fingertips in their materials palettes for years. Programmable hardware is the qualitative change that has taken place in gaming hardware and software, allowing artists to approximate the look of Mental Ray and Pixar's RenderMan with only a few compromises.
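To make the contrast concrete, here is a minimal sketch of the kind of custom effect programmability makes possible: a short HLSL pixel shader (a hypothetical example of ours, not code from any shipping game) that tints a surface toward a rim color where it curves away from the camera, an effect the old fixed-function pipeline simply had no switch for.

    // Hypothetical HLSL (DirectX 9, Shader Model 2.0) pixel shader:
    // a rim-lighting effect the fixed-function pipeline could not express.
    float4 BaseColor : register(c0);   // surface color, set by the application
    float4 RimColor  : register(c1);   // tint applied at grazing angles

    float4 RimPS(float3 normal  : TEXCOORD0,    // interpolated surface normal
                 float3 viewDir : TEXCOORD1) : COLOR
    {
        float3 N = normalize(normal);
        float3 V = normalize(viewDir);
        // Rim term: strongest where the surface turns away from the eye.
        float rim = pow(1.0 - saturate(dot(N, V)), 3.0);
        return lerp(BaseColor, RimColor, rim);
    }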

S.T.A.L.K.E.R. Shadow of Chernobyl features a square-mile landscape with detailed buildings and fully dimensional trees, shrubs, and ground cover, a high level of detail that even mid-priced game cards can handle.

Hardware Threesome

There are really only three hardware players in the realtime space: ATI, 3Dlabs, and Nvidia. All want a piece of the lucrative game industry, but Nvidia has clearly staked out the most ambitious vision with its Cg language and Gelato, a full-blown, hardware-assisted rendering solution that competes with Mental Ray and RenderMan and that debuted at NAB last year. Aimed at full-on motion picture visual effects final renders, Gelato is a remarkable development from a leading hardware maker betting on the proposition that game rendering and high-level motion picture effects rendering will merge.

Shading Languages

To understand what is changing in the virtual reality world, you have to look at the code responsible for the look of a photoreal game. Today there are two basic shading-language leaders: Microsoft's DirectX 9 HLSL (High Level Shading Language) and the OpenGL Shading Language (GLSL) of OpenGL 2.0. Nvidia's Cg is an auxiliary language developed in collaboration with Microsoft. It is compatible with HLSL and the OpenGL API and is intended to allow greater programming flexibility when writing to those APIs, as well as to platforms such as Windows, Linux, Mac OS X, and game consoles such as Sony's PlayStation. All three solutions have one thing in common: they are intended to write to programmable graphics hardware. Cg's capability goes beyond mere shading; the language can also be used to perform compositing and physical simulations.
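To give a sense of what this code looks like, here is a minimal sketch of per-pixel diffuse lighting written in HLSL; the Cg version would be nearly identical line for line, which is exactly the compatibility described above. The parameter names are ours, supplied to the shader by the host application.

    // Minimal HLSL pixel shader sketch: per-pixel diffuse (Lambert) lighting.
    float3 LightDir   : register(c0);   // direction the light is pointing
    float4 LightColor : register(c1);   // light color and intensity

    sampler DiffuseMap : register(s0);  // the surface's color texture

    float4 DiffusePS(float2 uv     : TEXCOORD0,
                     float3 normal : TEXCOORD1) : COLOR
    {
        float3 N = normalize(normal);
        float  nDotL = saturate(dot(N, -LightDir));        // Lambert term
        return tex2D(DiffuseMap, uv) * LightColor * nDotL; // modulate texture
    }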

Why So Real?

In 2004 shading languages leveraged the ability of hardware to process more information, but what exactly accounts for the impending leap forward in realism? Today, a complete gaming experience must provide photorealistic lighting and surfaces, large geometry data sets, and the ability to simulate dynamics. Graphics cards do this by processing two kinds of shader information: pixel and vertex. Vertex calculations allow for the deformation of models (for instance, the flexible elbow joint of a character), while pixel calculations control such things as color, ambience, reflectance, and other surface properties. The latest generation of cards has increased the quantity of vertex and pixel data that can be processed while simultaneously adding new options for both types of shaders. Most important, all the new shaders can be customized by artists, which accounts for much of the photorealism in next-generation games.
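The elbow-joint example above is classically handled by vertex skinning. The fragment below is a simplified two-bone HLSL vertex shader sketch of the idea; the matrix and weight names are illustrative, supplied each frame by the game engine.

    // Sketch of two-bone vertex skinning: the vertex-shader side of the
    // flexible elbow joint. Each vertex is blended between two bones.
    float4x4 WorldViewProj;   // combined transform to screen space
    float4x3 Bones[2];        // upper-arm and forearm transforms

    struct VSIn
    {
        float4 pos     : POSITION;
        float2 weights : BLENDWEIGHT;  // influence of each bone, summing to 1
    };

    float4 SkinVS(VSIn v) : POSITION
    {
        // Blend the vertex between the two bone transforms, then project.
        float3 skinned = mul(v.pos, Bones[0]) * v.weights.x
                       + mul(v.pos, Bones[1]) * v.weights.y;
        return mul(float4(skinned, 1.0), WorldViewProj);
    }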

Consider Dawn, Nvidia's game character developed to demonstrate the power of custom shaders. The custom realtime shaders written to achieve subtle variations in her skin and hair produce results that begin to compete with RenderMan and Nvidia's own Gelato software.

The graphics card also handles simulation of physical dynamics, typically falling objects, explosions, simple fluid dynamics (waves), wind, and other forces. While some of these behaviors are motion scripts that are invoked by the player's game choices (usually blowing something up), more sophisticated game engines simulate reactions on the fly.
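As a rough illustration of how a wave effect lands on the graphics card, here is a hypothetical HLSL vertex shader that displaces a flat water mesh with traveling sine waves; the time and wave constants are names of our own, fed in by the application each frame.

    // Sketch of GPU-side wave motion: displace a flat water mesh with
    // two crossed traveling sine waves for a cheap, plausible surface.
    float4x4 WorldViewProj;
    float    Time;        // seconds since the effect started
    float    Amplitude;   // wave height
    float    Frequency;   // waves per world unit

    float4 WaveVS(float4 pos : POSITION) : POSITION
    {
        pos.y += Amplitude * sin(Frequency * pos.x + Time)
               + 0.5 * Amplitude * sin(2.0 * Frequency * pos.z + 1.3 * Time);
        return mul(pos, WorldViewProj);
    }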

In the past, most of the photorealism achieved in games was accomplished with cheats. Typically, such complex effects as volumetric lighting, soft shadows, caustics, and other phenomena were pre-rendered and applied as texture maps. While cheats are still employed in even the most advanced games, many lighting effects are now calculated in realtime.
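The lightmap is the archetypal cheat. In a sketch like the one below (sampler names are ours), all the expensive lighting is computed offline and baked into a texture; at runtime the pixel shader pays for only one extra texture fetch and a multiply.

    // Sketch of the classic lightmap cheat: multiply the surface texture
    // by lighting that was pre-rendered into a second texture offline.
    sampler DiffuseMap : register(s0);  // surface color
    sampler LightMap   : register(s1);  // baked lighting

    float4 LightmapPS(float2 uv  : TEXCOORD0,   // surface coordinates
                      float2 luv : TEXCOORD1)   // lightmap coordinates
        : COLOR
    {
        return tex2D(DiffuseMap, uv) * tex2D(LightMap, luv);
    }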

Even relatively inexpensive game cards can now render high-level pixel shaders and massive data sets in realtime. A good example is the new S.T.A.L.K.E.R. game from GSC Game World, which features a square-mile landscape (Chernobyl) with detailed buildings and fully dimensional trees, shrubs, and ground cover. Typically, this sort of complexity is faked with texture maps, but GSC has modeled thousands of even the smallest objects. Today's cards have limits, but even a mid-priced game card ($149) can play back S.T.A.L.K.E.R., something that was unthinkable only a couple of years ago.

DirectX 9.0 and HLSL

Introduced by Microsoft in 1995, DirectX is a suite of APIs (application programming interfaces) for Windows. These APIs give programmers access to hardware features such as 3D graphics acceleration, input devices, and audio effects.

HLSL (High Level Shading Language) is Microsoft's programming language for the GPU in DirectX 9.0. It works only in Windows; as is typical of Microsoft innovations, it gets high marks for quality but makes developers nervous by shutting out other platforms. At the moment, HLSL and OpenGL's GLSL are the dominant shading languages in the game industry.

OpenGL

Developed by SGI in 1992, OpenGL was the only game in town for realtime rendering for most of the '90s and the realtime language our industry cut its teeth on. It remains the main cross-platform language for realtime graphics, running on Linux, Mac OS X, Windows, and UNIX, and it is deeply embedded in the graphics industry. OpenGL is an open standard controlled by the OpenGL Architecture Review Board (ARB). Voting members include 3Dlabs, Apple, ATI, Dell, Evans & Sutherland, HP, IBM, Intel, Matrox, Nvidia, SGI, and Sun.

Over the past several years, individual companies were able to write their own extensions to OpenGL, advancing the toolset and adding new capabilities. Unfortunately, each hardware company created custom code unique to its products. Consequently, rather than sharing open routines, companies were continually reinventing the wheel. This is the reason DirectX was able to gain market share so quickly in a world OpenGL had formerly dominated. Over the past two years, developers have tried to unify the development of OpenGL, a mission spearheaded by 3Dlabs. OpenGL is still in wide use on all platforms and most graphics cards, and the ARB is attempting to maximize the development of extensions through closer cooperation among its members.

Nvidia Cg

Cg is a shading language compatible with DirectX HLSL and OpenGL. Although you will still need to know how to code, Cg is much easier for artist/programmers to use than low-level shader assembly, whether DirectX's or that of certain OpenGL extensions. Also significant: Nvidia launched FX Composer 1.0 in late 2002, a simpler tool offering an integrated environment with realtime preview for more visual, artist-friendly shader development. Currently in version 1.6, FX Composer works in HLSL and comes with 120 shader examples to get you started.

RT/shader

There are artists, and there are programmers. A few people do both, but there are not enough of them to fill all the tech jobs in the visual effects industry. This need inspired RT/shader from RTzen. RT/shader is a more complete artist's tool than FX Composer, and it all but eliminates the need to write code. With RT/shader, an artist wires together such components as brightness, reflection, and transparency in a node-based tree, or wire graph. Preview is in realtime, and the results of your “look” are compiled into HLSL or exported as an .fx file, which is compatible with Cg and HLSL (see the sketch after this paragraph). RT/shader creates not only high-level shaders as described above, but also low-level shaders built from visually represented code blocks. The product is tightly integrated with Maya and 3ds Max, and it can import geometry from other file formats, including .3ds, .ase, and .x.
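For reference, here is the skeleton of a minimal .fx effect file of the sort such tools exchange, sketched by us: the shaders plus the technique/pass block that tells an engine how to apply them. The entry-point names are our own.

    // Minimal .fx effect file sketch: shaders plus a technique/pass block.
    float4x4 WorldViewProj;

    float4 SimpleVS(float4 pos : POSITION) : POSITION
    {
        return mul(pos, WorldViewProj);
    }

    float4 SimplePS() : COLOR
    {
        return float4(1.0, 0.5, 0.0, 1.0);  // flat orange placeholder
    }

    technique Simple
    {
        pass P0
        {
            VertexShader = compile vs_2_0 SimpleVS();
            PixelShader  = compile ps_2_0 SimplePS();
        }
    }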

Tricks of the Trade

You might ask why ILM and Weta Digital are still rendering frames on massive render farms if they could do everything in realtime. Well, yes, Quake looked spanking, but there are still lots of cheats going on. Aliasing, bit depth, high-level surfaces, and sheer complexity are some of the areas where games are not quite up to motion picture spec. Explosions and other effects are often a combination of real 3D flying debris and prerendered movies, but you may not notice this while avoiding chain saws, battleaxes, claws, and flamethrowers. Viewers tend to be distracted when dismemberment is imminent.

The Future

Games providing broadcast-level realism appeared throughout 2004. This has caused a buzz outside the gaming industry; traditional content creators are seeing virtual worlds that are dangerously close to the visual effects they produce for TV shows and motion pictures. From visual effects artists to game hackers, the opportunity to use game engines for motion picture previz, final renders for broadcast, and realtime moviemaking is on a lot of people's minds.


In next month's installment, we'll check out how Hollywood is beginning to take a serious look at game technologies and how those who don't take Hollywood seriously are creating movies in realtime.
