The University recently held its open campus day, where every lab opens its doors to the public and presents the work it is doing.
My contribution was a mixed reality demo of our lab: a project that brings together spatial computing, real-time networking, and collaborative VR.
The demo begins with a photogrammetry scan of the lab space, captured the day before from high-resolution imagery and reconstructed into a 3D model that serves as the foundation for the experience. On top of this static model, real-time pose tracking data from multiple Azure Kinect cameras is overlaid, so people and objects moving through the space appear live in the virtual environment as avatars or skeletal meshes.
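In the demo this tracking is integrated inside Unity, but the per-camera capture loop is easiest to show against the native Azure Kinect Sensor SDK (k4a) and Body Tracking SDK (k4abt). The sketch below is a minimal, single-camera version; error handling, shutdown, and the multi-camera calibration and fusion that the real setup needs are left out.

```cpp
// Minimal single-camera pose-tracking loop using the Azure Kinect
// Sensor SDK (k4a) and Body Tracking SDK (k4abt). Error handling,
// multi-camera fusion, and shutdown are omitted for brevity.
#include <k4a/k4a.h>
#include <k4abt.h>
#include <cstdio>

int main() {
    // Open the first connected Kinect and start its depth camera.
    k4a_device_t device = nullptr;
    k4a_device_open(K4A_DEVICE_DEFAULT, &device);

    k4a_device_configuration_t config = K4A_DEVICE_CONFIG_INIT_DISABLE_ALL;
    config.depth_mode = K4A_DEPTH_MODE_NFOV_UNBINNED;
    k4a_device_start_cameras(device, &config);

    // The body tracker needs the sensor calibration to work in metric space.
    k4a_calibration_t calibration;
    k4a_device_get_calibration(device, config.depth_mode,
                               config.color_resolution, &calibration);

    k4abt_tracker_t tracker = nullptr;
    k4abt_tracker_create(&calibration, K4ABT_TRACKER_CONFIG_DEFAULT, &tracker);

    while (true) {
        // Grab a depth capture and hand it to the body tracker.
        k4a_capture_t capture = nullptr;
        if (k4a_device_get_capture(device, &capture, K4A_WAIT_INFINITE)
                != K4A_WAIT_RESULT_SUCCEEDED) continue;
        k4abt_tracker_enqueue_capture(tracker, capture, K4A_WAIT_INFINITE);
        k4a_capture_release(capture);

        k4abt_frame_t frame = nullptr;
        if (k4abt_tracker_pop_result(tracker, &frame, K4A_WAIT_INFINITE)
                != K4A_WAIT_RESULT_SUCCEEDED) continue;

        // Each detected body yields a 32-joint skeleton in depth-camera space
        // (positions in millimetres, orientations as quaternions). These are
        // what get mapped onto avatars / skeletal meshes in the Unity scene.
        size_t num_bodies = k4abt_frame_get_num_bodies(frame);
        for (size_t i = 0; i < num_bodies; ++i) {
            k4abt_skeleton_t skeleton;
            k4abt_frame_get_body_skeleton(frame, i, &skeleton);
            k4a_float3_t pelvis = skeleton.joints[K4ABT_JOINT_PELVIS].position;
            printf("body %zu pelvis at (%.0f, %.0f, %.0f) mm\n",
                   i, pelvis.xyz.x, pelvis.xyz.y, pelvis.xyz.z);
        }
        k4abt_frame_release(frame);
    }
}
```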
What makes this system particularly exciting is its networked nature. The entire scene—live-tracked movements layered onto the photogrammetric model—is transmitted via a cloud server to multiple headsets. This allows users wearing Quest 3 HMDs to join the scene simultaneously, no matter where they are, and interact with each other in the shared virtual representation of the lab.
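The actual streaming goes through the Monobit Revolution Server layer, whose API I won't reproduce here. Purely as an illustration of what travels over the wire each frame, here is a rough sketch of a per-body pose packet being pushed toward the cloud relay; the packet layout, the UDP transport, and the host and port are all assumptions for the sake of the example, not the demo's real protocol.

```cpp
// Illustrative pose-packet sender. The real demo streams through the
// Monobit Revolution Server (MRS) networking layer; this UDP sketch only
// shows the rough shape of the per-frame data. Layout, host, and port
// are placeholders, not the production protocol.
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>
#include <cstdint>

constexpr int kJointCount = 32;      // Azure Kinect body-tracking joint count

struct JointPose {                   // one joint: position (m) + rotation
    float px, py, pz;
    float qw, qx, qy, qz;
};

struct BodyPacket {                  // one tracked body in one frame
    uint32_t camera_id;              // which Kinect produced this body
    uint32_t body_id;                // tracker-assigned body id
    uint64_t timestamp_usec;         // device timestamp, for interpolation
    JointPose joints[kJointCount];
};

int main() {
    // Connect a UDP socket to the (hypothetical) cloud relay endpoint.
    int sock = socket(AF_INET, SOCK_DGRAM, 0);
    sockaddr_in relay{};
    relay.sin_family = AF_INET;
    relay.sin_port = htons(9000);                        // placeholder port
    inet_pton(AF_INET, "203.0.113.10", &relay.sin_addr); // placeholder address

    BodyPacket pkt{};
    pkt.camera_id = 0;
    pkt.body_id = 1;
    // ...fill pkt.timestamp_usec and pkt.joints from the tracking loop above...

    // One datagram per body per frame keeps latency low; a dropped packet is
    // simply superseded by the next frame's pose.
    sendto(sock, &pkt, sizeof(pkt), 0,
           reinterpret_cast<sockaddr*>(&relay), sizeof(relay));
    close(sock);
}
```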
The core technologies behind the project include:
Azure Kinect for multi-camera pose tracking
Unity as the engine for integrating and rendering the environment
Monobit Revolution Server (MRS) for networking
Meta Quest 3 for immersive VR viewing
AWS EC2 instance for cloud-based networking
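On the server side, the EC2 instance's job is essentially fan-out: whatever pose data arrives from the tracking PC gets pushed back out to every connected headset. MRS handles this in the real demo, so the following is only a toy relay that makes the idea concrete; the registration-by-first-packet scheme and the port are invented for illustration.

```cpp
// Toy UDP relay illustrating the fan-out performed by the cloud server:
// every packet received is rebroadcast to all other known endpoints.
// The real demo uses Monobit Revolution Server on EC2; this is not that.
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <vector>

int main() {
    int sock = socket(AF_INET, SOCK_DGRAM, 0);
    sockaddr_in bind_addr{};
    bind_addr.sin_family = AF_INET;
    bind_addr.sin_addr.s_addr = INADDR_ANY;
    bind_addr.sin_port = htons(9000);                  // placeholder port
    bind(sock, reinterpret_cast<sockaddr*>(&bind_addr), sizeof(bind_addr));

    std::vector<sockaddr_in> clients;                  // registered endpoints
    char buf[2048];

    while (true) {
        sockaddr_in from{};
        socklen_t from_len = sizeof(from);
        ssize_t n = recvfrom(sock, buf, sizeof(buf), 0,
                             reinterpret_cast<sockaddr*>(&from), &from_len);
        if (n <= 0) continue;

        // The first packet from a new address registers it as a participant.
        bool known = false;
        for (const auto& c : clients)
            if (c.sin_addr.s_addr == from.sin_addr.s_addr &&
                c.sin_port == from.sin_port) { known = true; break; }
        if (!known) clients.push_back(from);

        // Rebroadcast to everyone except the sender.
        for (const auto& c : clients) {
            if (c.sin_addr.s_addr == from.sin_addr.s_addr &&
                c.sin_port == from.sin_port) continue;
            sendto(sock, buf, static_cast<size_t>(n), 0,
                   reinterpret_cast<const sockaddr*>(&c), sizeof(c));
        }
    }
}
```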
This setup is a step toward persistent, shared XR spaces—where digital overlays and real-world activity are seamlessly synchronized and shared. The possibilities for remote collaboration, design review, and hybrid working environments are immense.
3D scan of the lab for the open campus day, captured using LIDAR (left), and a photo of the actual scene (right)