What You Need to Know About Universal Scene Description — From One of Its Founding Developers
<p>Can the metaverse be a better ’verse? As the hype around the metaverse as a metaphor for all things futuristic coalesces into concrete efforts to enable the 3D evolution of the internet, it’s a good moment to reassess how both humans and machine-learning systems can reason about the scale and complexity of the data models and pipelines needed to represent virtual worlds in full fidelity.</p>
<p>A concept from the 1990s, the term <a href="https://blogs.nvidia.com/blog/2021/08/10/what-is-the-metaverse/" rel="noopener ugc nofollow" target="_blank">metaverse</a> was coined by science fiction author Neal Stephenson, who envisioned a set of connected virtual worlds that extend the physical world. Like the 2D web, the metaverse has both consumer and industrial uses. Its 3D worlds will usher in a new era of design and simulation, unlocking new possibilities for AI and global industries.</p>
<p>So how is the metaverse <a href="https://medium.com/@nvidiaomniverse/plumbing-for-the-metaverse-with-universal-scene-description-usd-856a863d9b12" rel="noopener">evolving from speculative fiction to reality</a>? The metaverse will require a common standard for describing, composing, populating, and simulating 3D worlds, and for collaborating within them. And as with the 2D web, the success of this 3D spatial overlay of the internet will depend on universal interoperability, governed by open standards and protocols.</p>
<p><a href="https://medium.com/@nvidiaomniverse/what-you-need-to-know-about-universal-scene-description-from-one-of-its-founding-developers-12625e99389a"><strong>Read the full article</strong></a></p>