Designed for content creators to capture and live-stream broadcast-quality volumetric video into Unity, Unreal, ARKit and ARCore.

Demo apps

View our volumetric video on your mobile devices

Requires a high-powered phone (iPhone 7+, Snapdragon 835+) and a fast internet connection (35 Mbps+)


Built from the ground up to process volumetric video at speed, our proprietary compression algorithms use deep learning and a GPU-powered architecture to stream content in real time.

Condense Reality software also takes care of the complexities of distribution. Our cloud platform ensures that content reliably reaches a global audience, coping with spikes in demand and variable connection speeds.
Camera Rigs

Modular, portable and versatile. Easy to set up on location.

Our rigs use both depth-sensing cameras and Ultra HD, ultra-high-FPS machine vision cameras. A deep learning pipeline means you need fewer cameras to achieve high-quality output. Our software isolates subjects, so there is no need for green screens.

Content is designed to be streamed into game engines, primarily Unity and Unreal Engine.

We also support AR applications, including ARKit and ARCore.
We provide SDKs to integrate into your application and can also create custom playback and white-label solutions.
Brief History

We're just getting started.

Company founded
March 2019
The co-founders coalesced around a shared vision: use cutting-edge computer vision research to create a completely new user experience.
Beta product launch
March 2020
After 12 months of development, we were able to demonstrate our capability to capture and live-stream volumetric video using off-the-shelf hardware.
Studio product launch
March 2021
We are on a mission to make volumetric video photorealistic and live-streamed. Our broadcast-quality product is being trialed with a select number of studios.


See product options and sign up for a live demo.