NuNet
03-06

Reinforcement learning, motion diffusion models, and onboard perception all run across different compute environments. Training these policies happens on GPU clusters, but the character itself runs inference on embedded hardware in real time, navigating guests and generating motion autonomously. That edge-to-cloud coordination (simulation and training in the data center, real-time inference on the robot, data flowing between the two) is a compute orchestration challenge that scales with every character you deploy. It's the problem #NuNet is solving: a protocol that coordinates workloads across heterogeneous infrastructure, from cloud GPUs to edge devices. The next wave of physical AI isn't bottlenecked by GPU power; it's bottlenecked by the infrastructure layer that ties it all together.
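The orchestration idea described above can be sketched as a simple scheduler that routes each workload to cloud GPUs or an edge device based on its hardware and latency constraints. This is a minimal illustrative sketch; the class and function names are hypothetical and do not reflect NuNet's actual protocol or API.

```python
from dataclasses import dataclass

@dataclass
class Workload:
    name: str
    needs_gpu_cluster: bool   # e.g. policy training needs the data center
    max_latency_ms: float     # real-time bound for on-robot inference

@dataclass
class Node:
    name: str
    kind: str                 # "cloud" or "edge"
    round_trip_ms: float      # network latency between node and robot

def schedule(workload: Workload, nodes: list[Node]) -> Node:
    """Pick the first node that satisfies the workload's constraints."""
    for node in nodes:
        if workload.needs_gpu_cluster and node.kind != "cloud":
            continue  # training must stay on the cluster
        if node.round_trip_ms > workload.max_latency_ms:
            continue  # too far away to meet the real-time bound
        return node
    raise RuntimeError(f"no node can run {workload.name}")

nodes = [
    Node("gpu-cluster", "cloud", round_trip_ms=80.0),
    Node("onboard-soc", "edge", round_trip_ms=1.0),
]

training = Workload("policy-training", needs_gpu_cluster=True, max_latency_ms=1e9)
inference = Workload("motion-inference", needs_gpu_cluster=False, max_latency_ms=10.0)

print(schedule(training, nodes).name)   # training lands on the cloud cluster
print(schedule(inference, nodes).name)  # real-time inference lands on the edge SoC
```

Under this toy policy, training lands on the GPU cluster while the latency-bound inference workload is forced onto the onboard hardware, mirroring the split described in the post.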

NVIDIA Robotics
@NVIDIARobotics
03-06
Ever wondered how Disney's characters make it from the screen to reality? 🎬 Behind the magic is physical AI:
• Newton & Kamino: Leveraging the open-source, GPU-accelerated Newton framework and Disney's Kamino solver for artist-centric reinforcement learning.
• Expressive
From Twitter