GTC is NVIDIA’s premier conference. NVIDIA is at the forefront of several coming waves, ranging from autonomous machines (robots and transportation vehicles) to metaverse creation and digital twins, and it is one of the major drivers of ever more capable AIs. Oh, and NVIDIA’s gaming focus remains strong, so it’s also one of the few firms making our video games much more realistic. What makes the company stand out to me is that it uses its own technology throughout the conference keynote, so the keynote is not only a showcase of announcements but a demonstration of how the announced products can be used.
Let’s talk about some of the highlights from this year’s keynote.
This was a little depressing for me because I just installed an RTX 3080 in my VR test system. At GTC, NVIDIA announced the 4080 and 4090, which are significantly more powerful than my 3080. Image quality is enhanced, they are a better platform for developing metaverse elements, and the visual capability of these cards is nothing short of amazing. At around $900 for the 4080 and around $1,500 for the 4090, they aren’t insanely expensive.
Games like Flight Simulator have a level of realism that is impressive and a little upsetting, upsetting because I don’t have either card, yet.
NVIDIA’s Omniverse platform is the leading development platform for the applied metaverse. Updates now embrace the entire product lifecycle, from design to manufacturing. It’s effectively a full 3D production pipeline that can be shared across an organization, enabling teams, which are often geographically dispersed, to collaborate on the creation of products, buildings, advanced systems like robots, and entertainment media.
This appears to be revolutionizing not only the design and creation of things but media as well, allowing the TV and movie creators of the future to develop high-quality content with a fraction of the budget and staff that would otherwise be required.
Digital twins are a key part of Omniverse that allow the initial modeling and the eventual highly automated management of these systems, systems that encompass advanced factories, smart buildings and cities, and, I expect, battlefields.
GTC showcased an implementation that used AR glasses to let employees on the ground blend Omniverse metaverse elements with real buildings, helping them navigate those buildings and do repairs at scale with these next-generation tools. As I write this, a large hurricane is driving toward the East Coast, and I can imagine a future implementation of Omniverse where first responders see an overlay of what was, plus a metaverse reconstruction, built from drone footage, of what is, in order to identify likely places where people may need rescue. And, after the event, it could let those who have lost their towns visualize what once was, so they don’t feel they lost where they grew up as a result of the disaster.
NVIDIA showcased its GDN (Graphics Delivery Network) that, along with the Omniverse Cloud on AWS, will bring this technology to the world. Backed by NVIDIA’s technology, including uniquely configured and highly focused servers and workstations, this is how many of us will envision the future before it is created. GTC showed how Omniverse was used to create the latest Rimac supercar, a car that will redefine high performance.
Thor Takes Over Autonomous Cars, And NVIDIA Does Robots
NVIDIA Drive Thor just replaced the company’s prior autonomous driving platform and utterly outperforms it. The platform can run QNX, Linux, and Android simultaneously and covers all of the car’s processing needs, from driving to entertainment. Much like cloud implementations, these functions can be kept virtually separate and secure. Drive Sim, NVIDIA’s training simulator for autonomous driving, has been significantly enhanced to create simulation scenarios on a global scale. Using a Neural Reconstruction Engine, simulations can be infinitely modified to take into account even the most unlikely events, like a sudden snowstorm in Florida (hey, it could happen). Watching these simulations is amazing because they increasingly look like real roads and cities, and I wonder how long it will be until this tool is used to present evidence in courtrooms or to convince city administrators to fix endemic traffic and safety issues.
It’s already being used to design better automotive cockpit controls and vehicle entertainment systems. How often is the same technology provider engaged in all parts of the creation of something as complex as an autonomous car? This integration into all parts of the creation and operation of future cars should lead to faster advances, higher quality products, and far fewer mistakes like the old Pontiac Aztek.
NVIDIA Drive Orin is the brain of NVIDIA’s autonomous vehicle effort, with NVIDIA Jetson being the variant targeted at autonomous robots. The NVIDIA Jetson Orin Nano, based on the Orin SoC and supporting the Isaac robotics platform, was also announced (given the product names, there must be a lot of sci-fi fans at NVIDIA). The most interesting part is the application of this technology in medical instruments that are software-defined and powered by AI. This should significantly improve the quality of diagnostics from these systems, particularly those that use computer vision, and increasingly enable surgical robots that are more precise, make fewer mistakes, and are more reliable, particularly in remote areas where qualified specialists aren’t available. AMRs, or autonomous mobile robots, will revolutionize last-mile delivery and warehouse operations and provide help for the disabled.
NVIDIA Triton is at the heart of much of the world’s HPC efforts. When it comes to finding patterns and relationships, Triton is, according to NVIDIA, the preferred product, using deep learning and the leading frameworks. The implementation that caught my attention is real-time image processing for live video. For those of us who stream, this means we’ll always have perfect lighting and look our best, even if we didn’t get much sleep and can’t afford a makeup artist (or they didn’t show up).
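To make the streaming example concrete, here is a minimal, hypothetical sketch of the kind of per-frame lighting correction such a pipeline might perform. This is not NVIDIA’s actual Triton implementation; it is a plain NumPy gamma-correction pass, one of the simplest ways to lift a dim webcam frame, and the function name `brighten` is mine.

```python
import numpy as np

def brighten(frame: np.ndarray, gamma: float = 2.0) -> np.ndarray:
    """Gamma-correct an 8-bit image: lifts dark regions while
    leaving bright regions mostly intact (sketch, not a product API)."""
    normalized = frame.astype(np.float32) / 255.0      # scale to [0, 1]
    corrected = np.power(normalized, 1.0 / gamma)      # gamma > 1 brightens
    return (corrected * 255.0).clip(0, 255).astype(np.uint8)

if __name__ == "__main__":
    # Simulate one dim 480x640 video frame instead of opening a webcam.
    dim_frame = np.full((480, 640, 3), 40, dtype=np.uint8)
    lit_frame = brighten(dim_frame)
    print(dim_frame.mean(), lit_frame.mean())
```

A real product would run a far more sophisticated model, on the GPU, at 30 or 60 frames per second, but the shape of the problem is the same: a function applied to every frame between the camera and the encoder.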
NVIDIA’s cuQuantum, in use by leading quantum developers including IBM, provides improved development resources as the market moves ever closer to quantum supremacy. NVIDIA also launched a long list of related tools to address the coming waves of HPC-level AI and quantum computing, again showcasing that it remains on the cutting edge of development tools and platforms for these disruptive computing waves, such as the language models behind ever more capable conversational computers and personal AI assistants that will increasingly redefine our lives. Chatbots are becoming more aware, digital assistants more capable, and digital companions, particularly for those working in isolation or those of us who have outlived family and friends, are advancing at an impressive rate.
One of the more interesting announcements was NVIDIA BioNeMo, targeted at hugely reducing the time needed to create future antiviral responses and more effective cures and treatments for the diseases that plague us.
Recommenders are the utilities that push customers to the products and services they should be most interested in. When they work properly, they massively increase conversion rates and revenues because, rather than trying to convince someone to buy something they don’t want, they push people toward products they already like. NVIDIA’s Grace Hopper platform is increasingly favored for these systems, offering far greater potential accuracy and scalability. Interestingly, the Grace CPU, which is unique to NVIDIA, is at the heart of this solution and, I expect, will eventually break out of this implementation to do other interesting things in the future.
Toward the end of the keynote, we were treated to interactive avatars. These can be cartoonish or photorealistic and can increasingly interact with users as if they were human. They can shift languages on demand while still looking like they are speaking directly to you, and they respond to an ever-increasing breadth of questions with facial and hand expressions that eventually will be indistinguishable from those of real people. Deloitte was announced as the NVIDIA customer aggressively bringing this technology to market.
Fully simulated worlds, ever more capable AIs, recommenders that work, advances in HPC computers and quantum computing training, massive improvements in autonomous vehicles and robots, and a host of tools that will redefine how computers interact with us. Woof. It was like watching the next generation of the computer industry. I doubt any of us have more than the beginning of an understanding of how significantly this will change how we work, how we live, and how we interact with both the real world and the metaverse that will increasingly be indistinguishable from it. At GTC this year I again saw the future and, to be honest, I’m a bit overwhelmed. The future is coming for us, but at NVIDIA it is already here, and the rest of us are struggling to catch up. If you want to see the future, watch the GTC keynote this year.