Omniverse APIs, USD format
Three NVIDIA RTX A6000 GPUs, plus servers, to run content in our distributed prototyping space (prototype)
Three NVIDIA RTX A6000 GPUs, plus servers, for the sphere (implementation)
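The USD format mentioned above is the scene description Omniverse consumes. A minimal sketch in USD's human-readable ASCII encoding (.usda) gives a sense of it; the prim names and values here are illustrative, not taken from the project:

```usda
#usda 1.0
(
    defaultPrim = "World"
    upAxis = "Y"
)

def Xform "World"
{
    def Sphere "Ball"
    {
        double radius = 0.5
        double3 xformOp:translate = (0, 1, 0)
        uniform token[] xformOpOrder = ["xformOp:translate"]
    }
}
```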
Rendering machines as building blocks
Architectural design
Large-scale projection prototyping
Manufacturing
Full surround content authoring
Scientific interactive visualization
Situation room prototyping
Immersive Collaboration:
Open source software
Distributed projection with Omniverse in many other spaces.
Simplified development of distributed applications
AlloLib
Laptop prototyping → deploy on the distributed system without modification.
Built-in Interactivity, Sound and Multimedia
Omnistereo rendering and software warping and blending in Omniverse
Pushing the boundary for low-latency interactivity in Omniverse
Dynamic offloading for performance bottlenecks: trading computation for network bandwidth
Best-effort broadcasting and failure recovery for connectionless communication.
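Omnistereo, listed above, renders a 360° stereo pair by giving each viewing azimuth its own eye pair, offset tangentially along a circle of half the interpupillary distance around the viewer. A minimal sketch of that per-azimuth eye placement (the function name and the 6.5 cm IPD default are illustrative assumptions, not project code):

```python
import math

def omnistereo_eyes(center, azimuth, ipd=0.065):
    """Return (left_eye, right_eye) positions for a given viewing azimuth.

    Forward at azimuth 0 is -Z; eyes are offset along the 'right' vector,
    tangent to a circle of radius ipd/2 centered on the viewer.
    """
    cx, cy, cz = center
    # Right vector = forward x up, with forward = (sin a, 0, -cos a), up = (0, 1, 0).
    rx, rz = math.cos(azimuth), math.sin(azimuth)
    half = ipd / 2.0
    left = (cx - half * rx, cy, cz - half * rz)
    right = (cx + half * rx, cy, cz + half * rz)
    return left, right

# Facing -Z (azimuth 0): the eyes are offset purely along +/- X.
l, r = omnistereo_eyes((0.0, 1.6, 0.0), 0.0)
```

Evaluating this per screen column (rather than once per frame) is what distinguishes omnistereo from conventional two-camera stereo.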
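The offloading trade-off above amounts to: when the network is the bottleneck, ship compact inputs and recompute remotely; when remote compute is the bottleneck, ship the precomputed result. A toy decision rule under an assumed cost model (function name and cost terms are illustrative, not the project's scheduler):

```python
import zlib

def choose_payload(raw: bytes, bandwidth_bps: float, remote_compute_s: float):
    """Pick the cheaper strategy: ship the raw result, or ship a compressed
    payload and pay remote CPU time to reconstruct it.

    Toy cost model: transfer time of raw bytes vs. transfer time of the
    compressed bytes plus the remote compute time. Returns (mode, payload).
    """
    packed = zlib.compress(raw)
    t_raw = len(raw) * 8 / bandwidth_bps
    t_packed = len(packed) * 8 / bandwidth_bps + remote_compute_s
    if t_packed < t_raw:
        return "compute-remote", packed
    return "precomputed", raw
```

For example, a highly compressible 100 kB buffer over a 1 Mbit/s link favors remote computation, while tiny payloads on a fast link favor sending the precomputed result.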
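For the best-effort, connectionless broadcasting above (e.g. UDP-style state updates), a receiver can detect loss from sequence-number gaps and recover by simply adopting the newest frame instead of stalling for retransmission. A minimal receiver-side sketch (the class and field names are assumptions for illustration):

```python
class BestEffortReceiver:
    """Track sequence numbers on a connectionless stream; count gaps and
    always expose the newest frame so rendering never blocks on a resend."""

    def __init__(self):
        self.last_seq = None   # highest sequence number seen so far
        self.latest = None     # payload of the newest frame
        self.lost = 0          # frames skipped due to loss

    def on_frame(self, seq: int, payload: bytes):
        if self.last_seq is not None:
            if seq <= self.last_seq:
                return                          # duplicate or reordered: drop stale frame
            self.lost += seq - self.last_seq - 1  # gap => frames were lost
        self.last_seq = seq
        self.latest = payload                   # recovery = just use newest state

rx = BestEffortReceiver()
for seq, data in [(1, b"a"), (2, b"b"), (5, b"e"), (4, b"d")]:
    rx.on_frame(seq, data)
```

Here frames 3 and 4 never arrive in order: the receiver records two lost frames, drops the late frame 4 as stale, and keeps rendering from frame 5.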
Research
Multi-user head and gaze tracking.
Procedural audio from USD scenes.
Key faculty, staff, and graduate student researchers associated with the project: Professor JoAnn Kuchera-Morin, Dr. Kon Hyong Kim, and Dr. Timothy Woods
This project was funded by a grant from the Omniverse division of NVIDIA.