Nvidia wants to build the virtual world of Omniverse. The world’s most powerful AI supercomputer, Eos, will help

Facebook would like to lock its users into the virtual reality of its Metaverse, but more and more global players are looking in a slightly different direction.

In the shadow of the giant social network, a technology is being born that intends to create a digital copy of practically everything – from simulating physics, chemistry and biochemistry at the lowest level to a digital twin of the whole planet on a macroscopic scale, called Earth-2.

See what Omniverse can do:

Of course, we are talking about the Nvidia Omniverse ecosystem, which aims to be a kind of interconnected internet platform for universal simulation and VR – a simulation that runs on a powerful local server, but just as well somewhere in the cloud.

The first glimpses a few years ago suggested only a basic collaboration service with which a bunch of graphic designers working from home could design and draw the odd vase, but last year large corporations such as Ericsson and BMW joined in as well.

In Omniverse, for example, the climate digital twin of the planet, Earth-2, is already running. Or rather twins: there are many of them running in different configurations, including a tornado simulation.

Instead of vases, however, they began to visualize and simulate somewhat more complex things: for example, the propagation of a mobile signal through the virtual buildings of no less virtual cities, or virtual car factories and warehouses in which robots roam.

Why render only games when we can render factories

Believable physics, ray tracing – we know all this from computer games, and one day at Nvidia they simply said: if game studios can build credible CGI worlds for their blockbusters, why couldn’t any corporation build an equally credible virtual and persistent proving ground?

Amazon Robotics uses Omniverse to simulate its giant warehouses

Why? Because a photorealistic and physically believable simulation of a new Amazon warehouse is something no classic plan on paper or in CAD can match. Designers can try out all conceivable stress scenarios in advance and only then, somewhere in Poland for example, actually break ground.

Omniverse as a universal simulation platform

But this is just the beginning. Key engineering software vendors – such as Bentley – are gradually starting to support Omniverse, so already today you can easily transfer finished models, from a single bolt to an entire house, into its standardized VR.

In Omniverse, for example, BMW and Siemens run their simulations, including wind farms. All this thanks to the believable physics models that we have so far known mainly from computer games.

Then all you have to do is connect machine learning to Omniverse, and around a virtual villa somewhere in the virtual suburbs of a virtual metropolis, your prototype of an equally virtual robotic vacuum cleaner will start driving, for example.

And since we are in VR, where time can flow as much faster as the GPU and CPU farm somewhere in a data center or in your corporate supercomputer allows, a hypothetical model of a new BobíkLux vacuum cleaner will be finished in a fraction of the time – simply because in Omniverse it can try to vacuum 8,356,452 variants of virtual rooms with 882,725 variants of carpets, tiles and vinyl floors.

The virtualized fabless of tomorrow

If today the likes of ARM, Nordic and many others develop chips without owning a single integrated-circuit factory (foundries in Southeast Asia take care of the actual manufacturing), Omniverse can extend this fabless principle to much more complex scenarios.

The Omniverse simulation and VR platform is slowly but surely building its ecosystem

In short, once your vacuum cleaner meets all expectations, you send its design files through Omniverse somewhere to Asia, and in a few weeks you receive a box with a prototype that works on the first try.

Omniverse will need extreme computing power if it is one day to really work and not end up as a handful of pleasing case studies from a few brands. Fortunately, Nvidia doesn’t make pots but damn powerful chips, so its boss, Jensen Huang, was able to impress the audience with a whole range of new data-center iron at Tuesday’s opening GTC keynote.

Jensen Huang’s opening keynote at GTC:

4 PFLOPS H100 GPU

The foundation is the new H100 GPU for machine learning and neural networks, built on the Hopper architecture. The chip packs 80 billion transistors, through which 4.9 terabytes of AI data flow every second – all those virtual vases, cities and robotic vacuum cleaners from the world of Omniverse. All at a computing speed of up to 4 PFLOPS – 4 quadrillion AI operations per second with 8-bit floating-point numbers.

New graphics / AI chip for use in data centers

A few years ago, entire supercomputers achieved similar performance in classical mathematical calculations (not AI); now a single specialized GPU can do it. Of course, this is still not enough for a comprehensive Omniverse simulation.

The H100 GPU in its complete board design

DGX H100 cube with 32 PFLOPS

What if we combined several H100s in one server module? Now that’s an idea! Suddenly, a new generation of Nvidia supercomputer appears before our eyes: the DGX H100. It contains 8 H100 GPUs, counts 640 billion transistors, delivers 32 PFLOPS of AI performance, and offers 640 GB of memory with a peak memory throughput of 24 TB/s.
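The DGX figures are essentially eight H100 spec sheets added together. Here is a minimal Python sketch of that back-of-the-envelope arithmetic – the per-GPU memory size and the roughly 3 TB/s of HBM bandwidth per GPU are assumptions for illustration, since the article only quotes the aggregate numbers:

```python
# Back-of-the-envelope check: a DGX H100 is roughly 8x the H100 spec sheet.
H100 = {
    "transistors": 80e9,   # 80 billion transistors per GPU
    "fp8_pflops": 4,       # ~4 PFLOPS of 8-bit AI throughput per GPU
    "memory_gb": 80,       # assumed per-GPU HBM capacity (article quotes only the 640 GB total)
    "mem_bw_tbps": 3,      # assumed per-GPU HBM bandwidth (article quotes only the 24 TB/s total)
}
GPUS_PER_DGX = 8

dgx = {key: value * GPUS_PER_DGX for key, value in H100.items()}

print(f"transistors: {dgx['transistors'] / 1e9:.0f} billion")  # ~640 billion
print(f"AI compute:  {dgx['fp8_pflops']:.0f} PFLOPS")          # ~32 PFLOPS
print(f"memory:      {dgx['memory_gb']:.0f} GB")               # ~640 GB
print(f"memory BW:   {dgx['mem_bw_tbps']:.0f} TB/s")           # ~24 TB/s
```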

Put 8 H100 GPUs in one box and you have a DGX H100 supercomputer module

Nvidia DGX SuperPOD with 1 EFLOPS

You can draw and simulate quite a few vases and other knick-knacks on something like that, but don’t you have anything bigger, Mr. Jensen? But of course he does!

Connect several DGX H100s and create a DGX POD. Connect several PODs and you will get a SuperPOD

Stack the DGX cubes on top of each other, connect them with fast optics and voilà, the DGX SuperPOD supercomputer materializes before our eyes, with 20 TB of memory and 1 EFLOPS of performance for machine learning and neural networks. That is, if you please, one quintillion AI operations per second.

Nvidia Eos with 18.4 EFLOPS

And what if… Yes, that’s right, what if we put several SuperPODs next to each other? Then we finally get, with a little exaggeration, the brain of everything and a new reference machine for artificial intelligence – Nvidia Eos.

It will be the most powerful AI supercomputer in the world, with a respectable performance of 18.4 EFLOPS – 18.4 quintillion AI operations per second – and in these tasks it will outperform the fastest general-purpose supercomputer in the world, the Japanese Fugaku, four times over.

And this is the final result: the most powerful AI supercomputer in the world, Nvidia Eos. So far it exists only on paper, but engineers will make it real later this year

Eos will be built during this year and will consist of 576 DGX H100 cubes with a total of 4,608 H100 GPUs. The whole system is tied together by a fast interconnect with a respectable data throughput of 230 TB/s. Converted, that comes to some 1.8 petabits per second.

Just for perspective: last year, an estimated 786 terabits per second flowed through the global internet. Nvidia Eos can push through more than double that, which is already a pretty good foundation.
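Both the performance and the bandwidth claims follow from the quoted figures with simple arithmetic; a quick sketch of the check (786 Tb/s is the article’s own estimate of global internet traffic):

```python
# Sanity check of the Eos figures quoted above.
DGX_SYSTEMS = 576        # DGX H100 cubes in Eos
GPUS_PER_DGX = 8         # H100 GPUs per DGX
PFLOPS_PER_GPU = 4       # 8-bit AI throughput per H100

gpus = DGX_SYSTEMS * GPUS_PER_DGX
ai_eflops = gpus * PFLOPS_PER_GPU / 1000              # PFLOPS -> EFLOPS

fabric_tb_per_s = 230                                 # quoted interconnect throughput (terabytes/s)
fabric_pbit_per_s = fabric_tb_per_s * 8 / 1000        # terabytes/s -> petabits/s
internet_tbit_per_s = 786                             # estimated global internet traffic (terabits/s)

print(f"{gpus} GPUs, {ai_eflops:.1f} EFLOPS of AI compute")      # 4608 GPUs, 18.4 EFLOPS
print(f"fabric: {fabric_pbit_per_s:.2f} Pb/s, "
      f"{fabric_tb_per_s * 8 / internet_tbit_per_s:.1f}x the internet estimate")  # 1.84 Pb/s, 2.3x
```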

SuperPOD coming to your data center soon (AWS, Azure, Google, Oracle, and more)

According to Nvidia, the new supercomputer will become a reference machine for further AI development; however, Omniverse will also need smaller dedicated virtualization platforms, even for small business worlds. It is precisely these digital twins that the other new server architecture, OVX, will focus on, again in server, Pod and SuperPod configurations. Lockheed Martin, for example, is already simulating its own Omniverse on it.

NVLink C2C allows you to assemble custom processors

Nvidia revealed much more during the first day of the conference. The server GPUs are complemented by the further improved Grace CPU with up to 144 Arm Neoverse cores (Armv9) connected through the NVLink-C2C interface.

The Grace CPU Superchip with 144 Arm Neoverse cores

Nvidia will also open up NVLink for connecting specialized third-party chips to build custom processors. NVLink-C2C has great potential for this: it is 25 times more energy-efficient and 90 times more area-efficient than fifth-generation PCIe.

The NVLink-C2C (chip-to-chip, die-to-die) interface makes it possible to build specialized processors from a set of sub-cores / IP blocks, including third-party ones

So however Omniverse turns out, of all those -verses that have been popping up everywhere in recent years, it is by far the most promising.

Behind it stands a company that, on the one hand, has built the hardware for it and, on the other, has already managed to sell it to its first corporate customers – which is by far the most important thing.



