NVIDIA and Lenovo at SIGGRAPH: Video Games and Reality Could Be Forever Changed

This month, NVIDIA’s CEO Jensen Huang again delivered the keynote at ACM SIGGRAPH (it’s worth watching), the premier conference for computer graphics and interactive techniques. Lenovo was there as well, announcing impressive new hardware offerings, many of which take advantage of NVIDIA’s technology. But the major news was that Pixar’s technology for creating metaverse objects, Universal Scene Description (USD), is moving to a standard stewarded by the newly formed Alliance for OpenUSD (AOUSD), which should be a game changer.

NVIDIA is setting the market on fire with its AI success as it works aggressively to make existing entertainment technology obsolete and move us into the age of the metaverse and AI. This morning, I was on a panel discussion on Industry 5.0. The panel spent the end of the session talking about the metaverse, AI and electronic immortality, and I talked about NVIDIA’s latest USD announcements and Huang’s ACM SIGGRAPH keynote.

Let’s talk about the future of electronic games, entertainment and the digitization of reality. 

Gaming

I’m a gamer, but one of the issues with video gaming is that your avatar must be created anew for every game and rarely looks anything like you. This creates two problems. First, since your identity and appearance change with every game, it’s much harder to identify with the character, and harder for others to connect the character to the person they previously played with, assuming you want them to. The other problem is that scenes often must be created from scratch even when they emulate real places.

USD holds the promise of a consistent avatar that may or may not look anything like you but would look the same in every future game. That way, even if the avatar looks nothing like you, the people you play with will connect you to it, allowing you to build a more pervasive presence across all the games you play.

In addition, once elements in a game are created, they can be shared by others. This shortens time-to-market and increases the overall realism of the game. Highly detailed objects are expensive to create, so giving them broad utility, even when they are 3D scanned from real objects, means they can be reused to build more realistic environments more cheaply.
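To make that reuse concrete, here is a minimal sketch using OpenUSD’s Python bindings (the pxr module). It composes a game scene by referencing a shared avatar asset rather than rebuilding it; the file names and prim paths (my_avatar.usd, /World/Player1) are illustrative assumptions, not anything NVIDIA or AOUSD has shipped.

```python
# Minimal sketch: reusing a shared USD asset via a composition reference.
# Assumes OpenUSD's Python bindings (pip install usd-core) and a hypothetical
# pre-built asset file "my_avatar.usd" holding the avatar's geometry.
from pxr import Usd, UsdGeom, Gf

# Create a new stage (scene file) for this particular game.
stage = Usd.Stage.CreateNew("game_scene.usda")
UsdGeom.Xform.Define(stage, "/World")

# Bring the shared avatar in by reference instead of recreating it per game.
avatar = UsdGeom.Xform.Define(stage, "/World/Player1")
avatar.GetPrim().GetReferences().AddReference("my_avatar.usd")

# Place the referenced avatar in this scene with a local transform.
avatar.AddTranslateOp().Set(Gf.Vec3d(0.0, 0.0, 5.0))

stage.GetRootLayer().Save()
```

Because every title references the same file rather than copying it, an update to my_avatar.usd would propagate everywhere it is used, which is exactly the kind of consistency and reuse AOUSD aims to standardize.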

Entertainment

In the U.S., we are currently experiencing a writers’ and actors’ strike. Because studios are no longer incurring the costs of creating content, they are enjoying huge savings. However, their revenue will slowly decline as existing content is consumed and no new content is created.

But the studios are looking at using AI and the metaverse to create content inexpensively, much as they create less expensive content today by producing it in other countries to avoid union backlash. 

The result should be less expensive content created with fewer and fewer humans involved, allowing studios to ramp content back up without taking as significant a hit as they would if they agreed to the unions’ demands. In short, AI and the metaverse should allow studios to benefit from the strikes.

This is not just good news for the studios. Writers and actors could create their own content with this technology. A number of actors have already done this. This would allow a far larger number of creators to create unique projects that they want to do, rather than having to do projects they don’t care for to maintain their ranking and income. 

Regardless of how this ends, entertainment will forever be changed. 

Digitized reality and digital immortality

The problem with the metaverse is the lack of immersion. Whether we are talking about gaming, working or learning remotely, experiencing simulations or creating digital immortality, immersion is lacking. But if you can create more realistic objects and blend reality into the metaverse with high visual accuracy, you can create more realistic virtual offices, schools and remote locations (exploring Mars without going there, as Microsoft once showcased with HoloLens), and you can create an environment in which you could place a digital twin of a human. At least initially, digital immortality will need to be created in the metaverse, because our ability to transition a living entity from the metaverse to the real world is decades away.

We talked about this today at the World Talent Economy Forum, and while there was some disagreement over whether such a digital immortal could be distinguished from the person it was based on, the potential of this transformative technology was broadly supported. As we combine NVIDIA’s Omniverse, human digital twins and these new USD-based elements, we have the potential to create not only digital immortals but also simulations that can visually explore past and future events in real time, and provide a place for these increasingly accurate human digital twins to reside.

The question to noodle on is: If you could create a virtual world that was your ideal world with a human digital twin that was your customized ideal companion, would you ever leave it? 

Wrapping up

With the advancements announced by companies like NVIDIA and Lenovo at ACM SIGGRAPH, we have the potential to change gaming, entertainment and even our reality over the planning horizon. While full immersion, where the metaverse could be indistinguishable from the real world to a user, is still some time off, our ability to create simulations, more consistent gaming experiences and entertainment that is both more realistic and less expensive has never advanced so fast. 

This year at ACM SIGGRAPH, I saw the future and it looked amazing.