The PC industry is finally making a push toward a “holy grail” rendering technique that makes computer-generated imagery in movies appear so much more lifelike than the graphics in games. At GDC 2018 on Monday, Microsoft introduced a new “DirectX Raytracing” (DXR) feature for Windows 10’s DirectX 12 graphics API.

To coincide with the announcement, Nvidia announced “RTX technology” for enhanced DXR support in upcoming Volta graphics cards, as well as new ray tracing tools for its GameWorks library that can help developers deploy the technology faster. Likewise, AMD said it’s “collaborating with Microsoft to help define, refine and support the future of DirectX 12 and ray tracing.” And top gaming engines like Unity, Unreal, and Frostbite are already planning to integrate DirectX Raytracing.

This is a big deal, in other words. Let’s dig into why real-time ray tracing matters, what’s needed to use DXR features, and why DirectX 12’s ray tracing support might make the games of tomorrow look better than ever. It’s a complex subject that we’ll cover at a very high level. But basically, better lighting, shadows, and reflections mean better graphics overall.

“The result is going to be really stunning,” says Tony Tamasi, Nvidia’s senior VP of content and technology.

Ray tracing vs. rasterization

Ray tracing mimics how lighting works in the real world. Objects are illuminated by 3D light sources, with rays bouncing around before reaching your eyes (or the camera, in games). Light might be reflected by other objects, or look different after passing through water, or be blocked by another object completely and create a shadow. The objects the rays bounce off even affect the final color you see, just like in real life.
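The bouncing-ray idea can be sketched in a few lines of code. This is a toy illustration, not DXR or any shipping renderer: it casts a single primary ray against one sphere and brightens the hit point by its angle to a light (the function names and the simple Lambertian shading choice are assumptions for the sketch, not something from the article).

```python
import math

def ray_sphere_hit(origin, direction, center, radius):
    """Return the distance along the ray to the nearest sphere hit, or None."""
    # Solve |origin + t*direction - center|^2 = radius^2 for t
    # (direction is assumed normalized, so the quadratic's 'a' term is 1).
    oc = [o - c for o, c in zip(origin, center)]
    b = 2.0 * sum(d * o for d, o in zip(direction, oc))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4.0 * c
    if disc < 0:
        return None  # the ray misses the sphere entirely
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t > 0 else None

def shade(origin, direction, center, radius, light_dir):
    """Trace one ray into the scene and shade the hit by the light angle."""
    t = ray_sphere_hit(origin, direction, center, radius)
    if t is None:
        return 0.0  # background: no object along this ray
    hit = [o + t * d for o, d in zip(origin, direction)]
    normal = [(h - c) / radius for h, c in zip(hit, center)]
    # Lambertian shading: brightness falls off with the angle to the light.
    return max(0.0, sum(n * l for n, l in zip(normal, light_dir)))
```

A real ray tracer repeats this per pixel and recursively spawns reflection, refraction, and shadow rays at each hit, which is exactly where the computational cost explodes.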

Ray tracing can deliver very high-quality images. Just look at the Avengers movies! But there's a reason the technique has been largely limited to film: ray tracing is very computationally expensive.


Real-time ray tracing has long been the holy grail for the graphics industry.

Films can spend as long as they need to render a scene; until now, consumer PCs simply couldn’t handle real-time ray tracing in games. Because of that, games are built with a technique called rasterization. Rasterization essentially converts a game’s 3D models into pixels on your 2D screen, then applies color information. Each pixel is colored independently, applying textures and shading with techniques like shadow mapping and screen-space reflection.
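The projection-then-fill pipeline can also be sketched. Again this is a minimal toy, not Direct3D: it perspective-projects a triangle's 3D vertices onto a pixel grid, then uses edge functions to decide which pixels the triangle covers (the pinhole-camera projection and edge-function test are standard techniques, but every name here is my own for the sketch).

```python
def project(vertex, width, height, focal=1.0):
    """Perspective-project a 3D point onto a 2D pixel grid (pinhole camera)."""
    x, y, z = vertex
    # Depth gets collapsed into screen coordinates here -- this is the
    # lossy 3D-to-2D step the Microsoft quote below refers to.
    sx = int((focal * x / z + 0.5) * width)
    sy = int((focal * y / z + 0.5) * height)
    return sx, sy

def edge(a, b, p):
    """Signed-area test: which side of the edge a->b is point p on?"""
    return (p[0] - a[0]) * (b[1] - a[1]) - (p[1] - a[1]) * (b[0] - a[0])

def rasterize_triangle(v0, v1, v2, width, height):
    """Return the set of pixels covered by one projected triangle."""
    a, b, c = (project(v, width, height) for v in (v0, v1, v2))
    covered = set()
    for y in range(height):
        for x in range(width):
            p = (x + 0.5, y + 0.5)  # sample at the pixel center
            w0, w1, w2 = edge(b, c, p), edge(c, a, p), edge(a, b, p)
            # Inside if the pixel sits on the same side of all three edges.
            if (w0 >= 0 and w1 >= 0 and w2 >= 0) or \
               (w0 <= 0 and w1 <= 0 and w2 <= 0):
                covered.add((x, y))
    return covered
```

Note that nothing in this loop knows where light comes from: color, shadows, and reflections all have to be layered on afterward with separate techniques, which is the trade-off the next paragraph describes.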

Rasterization is much faster than ray tracing—hence its use in real-time games. But rasterization has its drawbacks. Because it needs techniques like shadow mapping to simulate visual effects caused by normal light behavior, such as scattering and refracting, it takes a lot of time and expertise to make sure images look realistic and lighting behaves as expected. “There’s also an upper limit to how realistic a rasterized game can get, since information about 3D objects gets lost every time they get projected onto your 2D screen,” Microsoft says.
