NVIDIA On DLSS, Ray-Tracing, and the Past, Present, and Future of Gaming
Post by KostaAndreadis @ 03:31pm 16/07/21
Sitting down with Tony Tamasi, Senior Vice President of Tech Marketing at NVIDIA, to talk about DLSS, ray-tracing, AI, innovation, and where the future is headed.


NVIDIA’s history in the PC graphics space goes all the way back to the dawn of what we simply call the ‘graphics card’. It was NVIDIA that coined the term GPU in 1999 with the very first GeForce product, the GeForce 256, a milestone that kicked off decades of innovation.

Fast forward to today and you can see the results of that innovation in NVIDIA’s latest graphics architecture, Ampere, the underlying technology that powers everything from the GeForce RTX 3060 through to the GeForce RTX 3080 Ti. It’s not only the latest evolutionary step for the company, but one firmly planted with an eye towards the future.

AI, if you will.

DLSS, or Deep Learning Super Sampling, is an AI-driven, Tensor Core-powered slice of tech that gives players the option to run games with a sizable performance bump and no discernible difference in visual fidelity. Ever since the DLSS 2.0 implementation arrived in Remedy’s Control, it has been one of the most talked about tools in the PC development space. From Call of Duty to Rockstar’s Red Dead Redemption II, DLSS has only continued to gain momentum.
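To make the performance angle concrete, here’s a minimal sketch, in plain Python rather than anything from NVIDIA’s SDK, of why rendering at a lower internal resolution and reconstructing the final image saves so much GPU work. The 2560x1440 internal figure assumes DLSS’s typical Quality mode scaling at a 4K output; the reconstruction itself is the part the Tensor Core-driven network handles.

```python
# Toy illustration of the DLSS idea: shade far fewer pixels per frame, then let a
# reconstruction step (a neural network in DLSS, nothing at all here) rebuild the
# output resolution. Purely conceptual -- not NVIDIA's implementation.

def shaded_pixels(width: int, height: int) -> int:
    """Pixels the GPU must shade each frame at a given render resolution."""
    return width * height

native_4k = shaded_pixels(3840, 2160)
dlss_internal = shaded_pixels(2560, 1440)   # typical Quality-mode internal resolution at 4K

print(f"Native 4K pixels shaded: {native_4k:,}")
print(f"DLSS internal pixels:    {dlss_internal:,}")
print(f"Shading work saved:      {1 - dlss_internal / native_4k:.0%}")
```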


Tony Tamasi, Senior VP of Tech Marketing at NVIDIA
Going back to its very beginning, DLSS was created to solve a very real-world problem: real-time ray-tracing performance. But it was also born from the AI computing boom and the potential that deep-learning presented in the games space. If nothing else, it speaks to a tradition in line with NVIDIA’s long history of innovation, from programmable shaders to CUDA Cores to the recent rise of NVIDIA Reflex.

Stuff that has made the games we’ve all played in the last decade look and feel as good as they have.

Sitting down with Tony Tamasi, Senior Vice President of Tech Marketing at NVIDIA and an industry veteran whose career includes notable roles at 3dfx Interactive, Silicon Graphics, and Apple Computer, we spoke at length about DLSS, ray-tracing, AI, innovation, and where the future is headed.

Ray-Tracing and the Dawn of DLSS




“When we made that leap to ray-tracing, without DLSS it was impractical,” Tony Tamasi tells me. “There just wasn't enough performance to really do what people wanted. You could turn maybe a single ray-tracing effect on, but at that point it’s not that big of a difference. Whereas when you look at something like Control, when you turn everything on it looks radically different.”


“When we made that leap to ray-tracing, without DLSS it was impractical. There just wasn't enough performance to really do what people wanted."



That note about turning everything on was born from NVIDIA’s long-standing relationship with game developers -- it knew there was a desire for high-quality realistic in-game shadows, reflections, and things like Global Illumination (GI). All things that would make the interactive worlds we visit day-to-day feel more realistic and immersive. And it’s only when you combine multiple ray-tracing effects that the, well, effect begins to look like a true glimpse into the future.

Case in point, Cyberpunk 2077’s setting of Night City looks incredible with all RTX effects enabled.



“But, turning everything on means it all runs radically slower,” Tony Tamasi continues. “We needed another kind of generational leap, and that's what DLSS gives you. An architectural leap is oftentimes one and a half to two times the performance. DLSS brings another generation in terms of capability. It makes the impractical, practical. Without DLSS, I'm not sure that we'd have the momentum behind ray-tracing that we have today.”
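As a rough back-of-the-envelope illustration of that framing, a hypothetical architectural gain and a hypothetical DLSS uplift multiply rather than add. The numbers below are illustrative only, not benchmarks.

```python
# Illustrative only: an architectural leap of "one and a half to two times" stacked
# with a DLSS uplift. Real gains vary per game, resolution, and quality mode.

base_fps = 30          # hypothetical heavily ray-traced scene on older hardware
arch_gain = 1.75       # midpoint of the quoted 1.5x-2x architectural leap
dlss_gain = 1.8        # hypothetical DLSS uplift

print(f"New architecture alone: {base_fps * arch_gain:.0f} fps")
print(f"Architecture plus DLSS: {base_fps * arch_gain * dlss_gain:.0f} fps")
```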

DLSS 1.0, Turing, and Those First Teething Issues




Turing, the architecture that powers the GeForce 20 Series (RTX 2060, RTX 2070, and RTX 2080), arrived back in 2018, before real-time ray-tracing had even started to appear in games. It introduced RT Cores and Tensor Cores, dedicated hardware created to support complex ray-tracing and AI-based applications. In terms of what they do, the RT Core is the easier of the two to understand: dedicated hardware that helps trace rays of light bouncing around a scene.
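For a sense of what “tracing a ray” actually involves, here’s a minimal Python sketch of a single ray-sphere intersection test. Real scenes run enormous numbers of similar ray/triangle and bounding-box tests every frame, which is precisely the workload RT Cores accelerate in hardware; the single sphere below is invented purely for illustration.

```python
import math

# A minimal sketch of what "tracing a ray" means: test whether a ray fired from
# the camera hits an object (a sphere here, for simplicity). Real renderers do
# this against millions of triangles via bounding-volume hierarchies, which is
# exactly the work RT Cores are built to accelerate in hardware.

def ray_hits_sphere(origin, direction, center, radius):
    """Return the distance to the nearest hit point, or None if the ray misses."""
    oc = [o - c for o, c in zip(origin, center)]          # ray origin relative to sphere centre
    a = sum(d * d for d in direction)
    b = 2.0 * sum(o * d for o, d in zip(oc, direction))
    c = sum(o * o for o in oc) - radius * radius
    discriminant = b * b - 4 * a * c
    if discriminant < 0:
        return None                                        # ray misses the sphere entirely
    t = (-b - math.sqrt(discriminant)) / (2 * a)
    return t if t > 0 else None                            # hit must be in front of the origin

# One ray from the origin, straight down the z axis, toward a sphere 5 units away.
hit = ray_hits_sphere(origin=(0, 0, 0), direction=(0, 0, 1), center=(0, 0, 5), radius=1.0)
print("hit distance:", hit)                                # ~4.0 -- the front surface of the sphere
```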


“An architectural leap is oftentimes one and a half to two times the performance. DLSS brings another generation in terms of capability. It makes the impractical, practical."



As for Tensor Cores, that’s trickier, mainly on account of deep-learning still being seen as this strange, almost unknowable thing. According to Tony Tamasi, a Tensor Core is “a very high-performance matrix math engine that gives you, roughly speaking, an order of magnitude increase in performance in terms of neural-network processing”. Not that we’re fully across what that means, but it sounds cool. And really, it’s what makes ray-tracing practical in games like Control, Cyberpunk 2077, and Minecraft.
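That “matrix math” is less mysterious than it sounds. A neural-network layer boils down to one big matrix multiply plus a simple non-linearity, as in the short NumPy sketch below; Tensor Cores exist to run huge numbers of exactly these multiplies at high speed and reduced precision. The layer sizes here are arbitrary.

```python
import numpy as np

# The "matrix math engine" description made concrete: the core operation of a
# neural-network layer is a matrix multiply plus a bias, followed by a simple
# non-linearity. Tensor Cores are hardware built to chew through batches of
# exactly these multiplies very quickly.

rng = np.random.default_rng(0)

inputs  = rng.standard_normal((1, 256))        # one input sample, 256 features
weights = rng.standard_normal((256, 128))      # learned layer weights
bias    = rng.standard_normal(128)

layer_output = np.maximum(inputs @ weights + bias, 0.0)   # matrix multiply + ReLU
print(layer_output.shape)                                  # (1, 128)
```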

“Even with that big leap forward [in terms of RT Cores] Turing wasn’t fast enough. You can't turn all of those effects on and run a game in 4k at 240 frames-per-second. You can't do that with Ampere either. But that's okay. We've kind of just started with this next-generation of graphics.”


“At NVIDIA we try to solve end-to-end problems, we don't just throw things out there,” Tony says. “We built software, DLSS, that we knew would solve a real problem. It used AI, and the neural network to be able to synthesise high quality graphics and increase performance. We couldn't have done that without the Tensor Core in hardware.”

Related: Minecraft with RTX – Beauty in Blocks




But, like with anything new, there were teething issues. Battlefield V, the first game to go RTX On, brought spectacular real-time ray-traced reflections at a cost. Running at a resolution higher than 1080p during those early days wasn’t really an option, and when the first version of DLSS was turned on, the image quality suffered.

“We don't like it when new technology doesn’t go gangbusters out of the gate, but we've gone through this before,” Tony adds. “There are always teething pains and doing something new is always hard. The first version of DLSS was meant to solve the ray-tracing performance problem, but it wasn't good enough. So, we kept working at it and we solved the major problems.”


“The first version required training on a per-game basis,” Tony explains. “And we were willing to make this investment into this enormous data centre, one that would be capturing games, and train and train and train. As it turns out it wasn’t really a computation issue but one of getting game data to train on. That was hard because of the way games are built. They're not what's called ‘deterministic’. You can't get the same exact result running through a game twice because subtle things are different.”


“The first version of DLSS was meant to solve the ray-tracing performance problem, but it wasn't good enough. So, we kept working at it and we solved the major problems."



That led to some pretty drastic measures: NVIDIA scrapped the game-capture approach and decided to re-write DLSS from scratch. “Deciding to rebuild DLSS entirely off synthetic training meant we could make it deterministic,” Tony recalls. “We built a series of giant synthetic images and synthetic sequences to train DLSS 2.0, and we re-engineered the network. Most of all we kept at it, and the quality now approaches that of native. And in some cases, superior to that of native.”
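As a rough sketch of what “training on synthetic, deterministic data” looks like in practice, the toy PyTorch loop below procedurally generates high-resolution targets, downsamples them into low-resolution inputs, and teaches a tiny network to restore the detail. DLSS 2.0 itself is far more sophisticated (it also consumes motion vectors and runs on Tensor Cores), and every name below is our own invention; the point is only that synthetic data makes each training run repeatable.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy super-resolution trainer on procedurally generated, deterministic data.
# Not NVIDIA's network or pipeline -- purely an illustration of the concept.

torch.manual_seed(0)  # deterministic: the same data and the same result every run

def synthetic_batch(n=16, size=64):
    """Procedural 'high-res' targets: smooth gradients plus a sprinkle of hard edges."""
    x = torch.rand(n, 1, size, size)
    hi = F.avg_pool2d(x, 5, stride=1, padding=2)           # smooth base image
    hi = hi + (x > 0.98).float()                            # sharp detail to recover
    lo = F.interpolate(hi, scale_factor=0.5, mode="bilinear", align_corners=False)
    return lo, hi

upscaler = nn.Sequential(                                   # tiny stand-in network
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, 3, padding=1),
)
optimiser = torch.optim.Adam(upscaler.parameters(), lr=1e-3)

for step in range(200):
    lo, hi = synthetic_batch()
    upscaled = F.interpolate(lo, scale_factor=2, mode="bilinear", align_corners=False)
    pred = upscaler(upscaled)                               # network adds back detail
    loss = F.mse_loss(pred, hi)
    optimiser.zero_grad()
    loss.backward()
    optimiser.step()
    if step % 50 == 0:
        print(f"step {step:3d}  reconstruction loss {loss.item():.5f}")
```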

Even so, NVIDIA is far from finished. “There’s more software engineers writing code than there are hardware engineers writing code in NVIDIA,” Tony adds. “DLSS and technologies like DLSS are critically important to NVIDIA. We have large numbers of researchers and engineers continuously working on them. DLSS has gotten better, but it still isn't as good as we'd like it to be.”

Of CUDA Cores And Pushing the Industry Forward




With the arrival of the PlayStation 5 and Xbox Series X, dedicated hardware-based ray-tracing is now, seemingly, the norm. And with that most of the big AAA titles currently in development will feature some form of ray-tracing. So, in hindsight it makes sense that NVIDIA introduced RT Cores in the Turing generation.


“DLSS and technologies like DLSS are critically important to NVIDIA. We have large numbers of researchers and engineers continuously working on them. DLSS has gotten better, but it still isn't as good as we'd like it to be.”



A move that is in line with NVIDIA’s history. “CUDA was another enormously large bet that NVIDIA took,” Tony Tamasi says. “Back in the day, prior to the invention of CUDA and general-purpose GPU compute, people had been using graphics processors to do computing via graphics-APIs. That was called GP-GPU, using OpenGL or DirectX to do compute things. We knew that there was this huge domain of computational problems beyond graphics, heading into scientific computing and image processing and medical imaging and chemistry.”


“We invented CUDA and designed it around high-performance computing,” Tony continues. “And we could have put CUDA on a single high-end GPU and launched it into a data centre. Let that be the high-performance thing. But we made the choice to put compute capability and CUDA in every GPU we made. Initially, that was an enormously large investment. Adding a cost to every GPU that you make, where until the applications and benefits materialise, NVIDIA is spending billions of dollars to get that market going.”

Related: NVIDIA Reflex Brings a Competitive Edge to Rainbow Six Siege and Apex Legends




“When we launched Turing, we were the first ones with RT Cores and Tensor Cores in hardware, the first to do that kind of stuff,” Tony adds. “We instinctively knew that was the right way to go. We've been talking to hundreds of developers and they have been giving us feedback for years. And now you can see the rest of the ecosystem joining the party on the ray-tracing front, the new consoles have hardware accelerated ray-tracing. AMD has, with its Navi 2X series. I would expect that Intel will come up with accelerated ray-tracing at some point.”


As for Tensor Cores, with DLSS and RTX Broadcast, both games and content creators are enjoying the benefits of AI right now. So, it might only be a matter of time before other companies start including dedicated AI hardware in their products. “Honestly, I expect everyone is going to have some form of AI acceleration,” Tony says. “I think it's inevitable.”

“It’s like how we made the decision to embed CUDA capability long before there was a gaming application for it,” Tony explains. “When CUDA launched, there were no applications. Now there's like 10,000. If you think about AI being ‘Software 2.0’ it just seems natural that people are going to find applications for this in real-time gaming. DLSS is really the tip of the spear, so to speak, for that. It's probably the first interesting application of neural networks for real-time graphics.”

The Future of RTX




It’s not often that you take stock and realise how far the industry has come in such a relatively short time. But it’s not that difficult: look at the visuals in Super Mario 64 and then those in Super Mario Galaxy or Super Mario Odyssey, the original Halo compared to Halo Infinite, or Sony’s God of War on the PlayStation 2 next to God of War on PS4. Advances take time, but generational leaps offer monumental differences when viewed a decade or so apart.


“We've been talking to hundreds of developers and they have been giving us feedback for years. And now you can see the rest of the ecosystem joining the party on the ray-tracing front.”



“I've been fortunate enough to be around the graphics industry for quite a while,” Tony tells me. “From the dawn of raster-based fixed functions and then programmable shading. And now, here comes ray-tracing. I think it is the next thing from a graphics perspective, but it's going to take a while. When we talk about rasterization and shaders, each of those lasted about a decade, roughly speaking, before they started to mature.”


“I expect ray-tracing is going to take us a decade or more before it's going to get mature,” Tony continues. “While we've kind of got the fundamental building blocks to do a lot of those cool things, there's a lot more performance and a lot more capability needed.”

Related: NVIDIA GeForce RTX 3080 Ti Review




For ray-tracing, the difference and the benefit go beyond more realistic visuals; it also lies in letting developers build their worlds more easily than before. But when it comes to exactly that, developers have gotten really good at faking it: Red Dead Redemption II, Horizon Zero Dawn, God of War, and The Last of Us Part II are all games that don’t feature ray-traced shadows, reflections, or any other form of ray-traced lighting.


“The double whammy effect is that typically when you're learning that new thing, you're at the point when some other technology is mature,” Tony explains. “With ray-tracing, programmable shading is a decade old or more and people have gotten clever at it. There's lots of neat tricks, you can make screen-space reflections look good if you work around some of its issues. So, not only are you learning this new technology and new architecture and ramping up, but you're competing against the pinnacle of what’s come before.”

According to NVIDIA, that’s not a problem; it plans to spend the next decade improving performance and bringing more features into the picture. With the launch of Ampere, ray-traced motion blur became possible. It hasn’t appeared in a game yet, but like reflections, shadows, and global illumination, it’s only a matter of time.

Working With Developers




In the PC space, hardware partnerships are not uncommon; if anything, they’re the norm. When you fire up a game and see an NVIDIA or AMD logo, it means more than simple compatibility.


“I expect ray-tracing is going to take us a decade or more before it's going to get mature. While we've kind of got the fundamental building blocks to do a lot of those cool things, there's a lot more performance and a lot more capability needed.”



“There are dozens of game engines that we've worked to integrate technologies in, and with hundreds of developers too,” Tony explains. “And that might mean that we put NVIDIA engineers either onsite, or working with their engineers and developers to help integrate code into their games. That's true with a lot of technologies from NVIDIA.”


“NVIDIA Reflex is the same, we've tried to put Reflex in as many games and in as many game engines that can take advantage of it,” Tony says of NVIDIA’s latency reduction tool, which is now available in just about every competitive shooter on the market, from Fortnite to Apex Legends to Call of Duty and more. “We try to get our technologies as broadly deployed in as many engines as possible, so that as many developers and studios can take advantage of them as possible.”

Related: Cyberpunk 2077 - How DLSS and Ray Tracing Present a Vision of the Future





In the case of ray-tracing and DLSS this also extends to engine developers, where it’s a two-way street in terms of shared knowledge. “We worked with Epic from the very beginning to get ray-tracing into Unreal, for example,” Tony adds. “Epic was super excited too; they were pulling it out of us as much as we were trying to put it in.”


“NVIDIA has always had a pretty special relationship with game and engine developers, and game developers and NVIDIA both want to advance the state of the art,” Tony says. “They've told us for years that they'd love to see more advanced rendering. The nice thing about ray-tracing is that it's based on by and large the laws of physics, with regards to light. That allows developers to solve a bunch of hacks, honestly, that raster-based graphics have had to do.”

On the DLSS front NVIDIA recently developed and launched plugins for the popular Unreal and Unity engines, to ensure that access was available to all studios - no matter how large or small they are. The goal there, of course, is to ensure that enabling beneficial technologies like DLSS is as easy as possible. “A lot of these smaller indie studios have zero or a single graphics programmer,” Tony adds. “Now you can add DLSS with zero to one graphics programmer. And that’s exactly what you want. That's going to open the flood gates in terms of the number of development studios that can take advantage of DLSS.”
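What those plugins automate is the per-frame hand-off of data the upscaler needs: a jittered low-resolution colour buffer, motion vectors, depth, and the camera’s sub-pixel jitter. The sketch below uses invented function and type names rather than NVIDIA’s actual NGX/Streamline API, purely to show the shape of that hand-off.

```python
from dataclasses import dataclass
from typing import Any, Tuple

# Hypothetical illustration of the per-frame data a DLSS 2.x integration supplies.
# Every name here (FrameInputs, upscale_frame, the fake buffers) is invented and is
# NOT NVIDIA's API -- the real Unreal/Unity plugins exist precisely so studios
# don't have to wire these buffers up by hand.

@dataclass
class FrameInputs:
    color_low_res: Any                    # jittered, low-resolution colour buffer
    motion_vectors: Any                   # per-pixel motion relative to the previous frame
    depth: Any                            # scene depth buffer
    jitter_offset: Tuple[float, float]    # sub-pixel camera jitter used this frame
    exposure: float                       # scene exposure, so tone handling stays correct

def upscale_frame(inputs: FrameInputs, output_resolution: Tuple[int, int]) -> str:
    """Stand-in for the upscaler call; returns a description instead of an image."""
    return f"reconstructed {output_resolution[0]}x{output_resolution[1]} frame"

# A made-up frame, assembled the way an engine integration would each frame.
frame = FrameInputs(
    color_low_res="2560x1440 colour",     # e.g. Quality mode at a 4K output
    motion_vectors="2560x1440 motion",
    depth="2560x1440 depth",
    jitter_offset=(0.25, -0.125),
    exposure=1.0,
)
print(upscale_frame(frame, output_resolution=(3840, 2160)))
```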

The Future of Games as We Know Them




If ray-tracing is only going to get better, with the journey on that front only just begun, one wonders what else might be in store. NVIDIA firmly believes that deep-learning will lead to new avenues for both development and gameplay. The burgeoning field, like ray-tracing, is still in its infancy when it comes to games, but we’re already seeing some pretty exciting things: from NVIDIA’s own GameGAN, which was able to create a version of Pac-Man just by watching it being played (someone has since used the same technique to get an AI to recreate Grand Theft Auto), to EA’s groundbreaking use of machine-learning to create realistic animation and responsive player movement in the upcoming FIFA 22.


“We try to get our technologies as broadly deployed in as many engines as possible, so that as many developers and studios can take advantage of them as possible.”



“One of the things that happened a little faster than we thought was artificial intelligence,” Tony tells me. “Looking at any deep-learning system or data centre at Google or Facebook or Apple, inside those warehouses full of computers are a bunch of NVIDIA GPUs running CUDA to do neural network processing. That led to an ecosystem of tools and applications and libraries designed around this new generation of computing -- Software 2.0 with Artificial Intelligence. And now we're starting to bring that to graphics and gaming.”


“AI is going to kind of unleash a whole new generation of advances, some of them we haven't even thought about yet,” Tony continues. “DLSS can basically double your performance, but artificial intelligence can be used for a huge range of things. For content creation an artist can paint a sample of a biome, and instead of placing trees and grass for however many square kilometres of terrain, neural-networks can fill all that in. We can use neural networks to synthesise geometry from photogrammetry. We can use neural networks to test games so that you no longer have people testing where you could get stuck or fall through the world. A neural network can have that behaviour.”
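The automated-testing idea is easy to sketch, too. The toy below is not an NVIDIA tool, and it swaps the learning part for a random walker: an agent wanders a made-up level and records anywhere it falls through the floor, the class of bug a trained exploration policy could hunt far more efficiently.

```python
import random

# Toy sketch of "use neural networks to test games", with the learning part
# replaced by a random walker. The level, the collision query, and the broken
# region beyond x = 50 are all invented for illustration.

def ground_height(x: float, z: float) -> float:
    """Stand-in for the engine's collision query; collision is missing past x = 50."""
    return -10.0 if x > 50 else 1.0

def random_walk_test(steps: int = 10_000, seed: int = 42):
    rng = random.Random(seed)
    x = z = 0.0
    fall_throughs = []
    for _ in range(steps):
        x += rng.uniform(-0.5, 1.5)          # drift forward so the agent covers ground
        z += rng.uniform(-1.0, 1.0)
        if ground_height(x, z) < 0.0:        # the agent just fell out of the world
            fall_throughs.append((round(x, 1), round(z, 1)))
    return fall_throughs

bugs = random_walk_test()
print(f"Found {len(bugs)} fall-through positions; first at {bugs[0] if bugs else 'none'}")
```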

“I'm kind of looking forward to the next phase where you start to see AI brought into games where they can impact gameplay,” Tony says. “Games have had AI for a while in the sense that they've had scripted behaviour for NPCs. That’s still hand-coded behaviour: if you attack me, I attack you back. What if a boss learned? What if characters had behaviours that changed? What if a boss fight wasn’t this carefully scripted dance, but rather the boss learns how to fight the player based on their attacks?”
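To make that idea concrete, here is a toy sketch, not anything NVIDIA has shipped, of a boss that keeps a running count of the player’s attack choices and shifts its defence toward whatever the player leans on most. A genuine learning agent would use a trained policy; the counting logic below only illustrates the behaviour-that-changes idea.

```python
import random
from collections import Counter

# Toy "adaptive boss": it tracks which attacks the player favours and biases its
# defence toward the most common one. A neural-network agent would learn a policy
# from play data; this counter-based version is illustration only.

class AdaptiveBoss:
    def __init__(self):
        self.seen_attacks = Counter()

    def observe(self, player_attack: str):
        self.seen_attacks[player_attack] += 1

    def choose_defence(self) -> str:
        if not self.seen_attacks:
            return random.choice(["block_melee", "dodge_ranged", "dispel_magic"])
        favourite = self.seen_attacks.most_common(1)[0][0]
        counters = {"melee": "block_melee", "ranged": "dodge_ranged", "magic": "dispel_magic"}
        return counters[favourite]

boss = AdaptiveBoss()
for attack in ["melee", "melee", "ranged", "melee"]:   # the player keeps spamming melee
    boss.observe(attack)
print(boss.choose_defence())                           # -> "block_melee"
```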


When it comes to enemy AI, or “playing against the computer”, this is something we’ve seen in many forms. Chess is the relatable example: neural networks weren’t required for computers to reach the point where they could beat the world’s very best players. And when it comes to adapting and learning based on player interaction, NVIDIA’s vision isn’t one of games simply getting harder. Like with ray-tracing, it’s about creating immersion: believable, dynamic environments.


“What if characters had behaviours that changed? What if a boss fight wasn’t this carefully scripted dance, but rather the boss learns how to fight the player based on their attacks?"



And nothing says dynamic like the world of online competitive or co-operative gaming, from World of Warcraft to Call of Duty Warzone to the likes of Satisfactory and Valheim.

“One of the reasons competitive games have such longevity is that it's human versus human, player versus player,” Tony concludes. “Players inherently adapt, they're smarter, they're unpredictable. What if you had neural networks behind games that were indistinguishable from humans? You would create a whole new generation or class of games, stuff that we can't even really envision. Games that were truly dynamic and truly alive. Of course, that would require changes in the way we build games, engines, and even how we approach game design. But the capability for that, the groundwork, has been laid. And that’s exciting.”



