The idea is to use Unity Metacast as a real-time 3D sports platform for creating and delivering interactive content directly to consumers.
The basketball demo shows coaching and analysis tools that can help coaches explain mistakes or call plays: 3D digital overlays, cuts and shots, camera-path generation, and commentary. The rugby demo shows volumetric production tools that integrate with an existing broadcast stack.
You can freeze the action and view it from any angle by spinning the point of view around it. The MMA example shows a consumer second-screen experience with livestreams, replays viewed from any angle, fan engagement, and fan-produced content. The demos cover MMA, rugby, and basketball, each showcasing a different use case for volumetric content in the sports world.

Unity focuses on making volumetric data look good, turning it into compelling experiences, distributing those experiences worldwide, and making them perform well on the 20+ platforms and devices Unity supports. Unity's value in the volumetric pipeline lies in providing tooling and workflows that enable the creation and consumption of volumetric content, regardless of the capture provider or the distribution mechanism. It works with partners such as Microsoft, Canon, and other volumetric capture providers, who supply the raw volumetric data; Metacast doesn't get into the actual work of hardware or capture systems. Unity takes over from there with an entire pipeline: ingestion into Unity, content development, delivery, rendering, and interactive experience consumption on various devices.

Unity Metacast uses volumetric technology, which encompasses capturing, viewing, and interacting with the real world, from moving people to static objects, in 3D. This content can then be viewed from any angle, at any moment in time, giving audiences the ability to see every bead of sweat, blow, takedown, and submission, as if they were going toe-to-toe on the famous Octagon canvas themselves. "We see the first opportunity with UFC, which is perfect for this as it's a controlled environment in the Octagon. We do the rendering and the editing, and then we're able to compute that in real time, push it up into the cloud, and send it to the broadcasters. This is about five million voxels that we're pushing down."
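The freeze-and-orbit viewing described above comes down to sweeping a camera around a fixed target point while the replay is paused. A minimal sketch of that camera math in plain Python (function and parameter names are illustrative, not from Metacast; inside Unity itself this would more likely be done with something like `Transform.RotateAround`):

```python
import math

def orbit_camera(target, radius, azimuth_deg, elevation_deg):
    """Return a camera position orbiting `target` (x, y, z).

    azimuth_deg sweeps horizontally around the target;
    elevation_deg tilts the view above or below it.
    """
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    x = target[0] + radius * math.cos(el) * math.sin(az)
    y = target[1] + radius * math.sin(el)
    z = target[2] + radius * math.cos(el) * math.cos(az)
    return (x, y, z)

# Sweep the azimuth to spin around a frozen moment at the origin.
positions = [orbit_camera((0.0, 0.0, 0.0), 5.0, a, 15.0)
             for a in range(0, 360, 30)]
```

Every generated position sits at the same distance from the target, so the camera circles the frozen action rather than drifting toward or away from it.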
"Sports fans love to analyze the action at every level of detail," said Moore, senior vice president for sports and live entertainment at Unity. "This is the first time you're seeing this level of resolution in three-dimensional visuals."

## How it works

As you can see in the video, Unity and its partners used cameras to capture real UFC athletes - Kevin Lee and Georges St-Pierre - as they squared off against each other in the ring. You may notice the tech looks a little rough around the edges; that's one of the sacrifices Unity has to make to produce the images in real time. But over time, Moore expects that the quality will just keep on improving.

It's a bit like the Intel tech, dubbed True View, from 2019. But Moore pointed out that the Unity Metacast tech can capture athletes in real time, as they're performing in live events, in contrast to the delayed time frames at which Intel could deliver its imagery. (Moore and Nifty Games CEO Jon Middleton will be talking a lot about sports games at our GamesBeat Summit Next event on November 9-10.)

"When you go into first-person view, you can look at the level of detail," Moore said as he was showing the online demo. "What we're looking at here is volumetric capture - about five million voxels per second, a voxel being a three-dimensional pixel - pushing down the pipe. That's a first-person view as if I'm Georges looking at Kevin. I can change the camera around and be Kevin looking at Georges."
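The "five million voxels per second" figure gives a feel for the raw bandwidth involved. A back-of-the-envelope sketch (the bytes-per-voxel figure is an assumption for illustration, not from the article, and real pipelines compress heavily):

```python
def voxel_stream_rate_mbps(voxels_per_second, bytes_per_voxel):
    """Rough uncompressed data rate for a voxel stream, in megabits/s."""
    return voxels_per_second * bytes_per_voxel * 8 / 1e6

# ~5 million voxels/s as quoted; 4 bytes per voxel (e.g. RGB + occupancy)
# is an illustrative assumption -- actual formats will differ.
rate = voxel_stream_rate_mbps(5_000_000, 4)  # 160.0 Mbit/s uncompressed
```

Even under these modest assumptions the uncompressed stream lands in the hundreds of megabits per second, which is why real-time compression and cloud-side processing matter so much to the pipeline described above.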