Last week, Nvidia announced its Omniverse Avatar Cloud Engine (ACE). This comprises “a collection of cloud-based AI models and services for developers to easily build, customize, and deploy engaging and interactive avatars.” It’s essentially a character creator for the chatbots of the future. Remind us, how is this a good thing?
Well, there are some excellent aspects to this — mostly the ones that don’t involve inserting digital avatars into your life unnecessarily. A startup could use Nvidia’s Omniverse ACE to create a sign language interpreter as an overlay for YouTube videos, for example. And because Nvidia’s tech is cloud-based, that startup wouldn’t need to fork out a huge amount of cash for powerful hardware.
The face of Siri
Nvidia’s Omniverse ACE is based on existing software and models, all bundled together here. Omniverse handles the company’s AI-powered animation. Metropolis (we’re dropping the ‘Nvidia’) deals with computer vision tasks like object recognition. Merlin, NeMo Megatron (yes, that’s what it’s called), and Riva handle recommenders, natural language models, and AI speech respectively.
“Our industry has been on a decades-long journey teaching computers to communicate and carry out complex tasks with ease that humans take for granted,” said Rev Lebaredian, vice president of Omniverse and simulation technology at Nvidia. “Nvidia ACE brings this within reach. ACE combines many sophisticated AI technologies, allowing developers to create digital assistants that are on a path to pass the Turing test.”
Sure, digital assistants are cool. Giving Google’s Assistant or Siri a face could be on the cards. But the first word in the name is what has caused our eyes to narrow: the unspoken reason Nvidia has made Omniverse ACE available is, presumably, the industry’s various metaverse attempts.
Thankfully, we’ve still got a way to go before one of Nvidia’s avatars can pass the Turing test. Who knows, maybe there’s someone out there who can actually use the tech for something useful.