c++ · csharp · threejs · websocket · kinect · opengl · hackathon

Lattice

Distributed multi-sensor volumetric capture system reconstructing navigable 3D point clouds for desktop, web, and HoloLens — shortlisted for Grand Prize at TreeHacks 2026.

Overview

Lattice is a real-time volumetric capture and playback platform built at TreeHacks 2026, where it was shortlisted for the Grand Prize. It turns physical moments into saveable, navigable 6D media: you can move around and interact with an object in front of you, not a video on a screen.

Lattice connects capture, reconstruction, data transport, and interaction into one continuous pipeline:

  1. Multi-sensor ingest
  2. Calibration (orthogonal Procrustes alignment) and fusion [1],[5], with optional geometric (ICP) refinement [2]
  3. Temporal buffering + indexing (live + recorded timelines)
  4. Compression/chunking for streaming/recording [7]
  5. Rendering outputs including Gaussian-splats [8]
  6. Multi-endpoint playback (AR, VR, desktop/TV, mobile/web)
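Step 2's extrinsic calibration can be illustrated with the closed-form orthogonal Procrustes (Kabsch) solution [1]: given corresponding points seen by two sensors, it recovers the rigid transform between them. Below is a minimal Python/NumPy sketch; the function name and toy data are illustrative, not taken from the Lattice codebase.

```python
import numpy as np

def procrustes_align(src, dst):
    """Rigid transform (R, t) minimizing ||R @ src_i + t - dst_i|| over
    corresponding 3D points (Kabsch / orthogonal Procrustes solution)."""
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    H = src_c.T @ dst_c                      # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t

# Example: recover a known rotation + translation from marker points.
rng = np.random.default_rng(0)
src = rng.standard_normal((8, 3))
theta = np.pi / 5
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
t_true = np.array([0.5, -1.0, 2.0])
dst = src @ R_true.T + t_true
R, t = procrustes_align(src, dst)
```

With exact correspondences the solution is recovered to numerical precision; in practice the estimate from noisy markers is then refined geometrically with ICP [2].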

On immersive devices like the HoloLens 2, interaction is spatial and gesture-native through the Lattice HoloLens 2 app. On flat displays, an Ultraleap Leap Motion controller adds direct hand-based manipulation, so volumetric content remains fully interactive even outside AR/VR.

Inspiration

For nearly two centuries, cameras have helped us record history, but they’ve always flattened it. Even with monumental improvements in recording quality (higher resolution, 360° video, spatial audio), videos still force you to stand where the camera stood. You can watch the past, but you can’t explore it, question it, inspect it, interact with it, or truly step back into it.

Lattice treats recording time and history as a spatial endeavour. It captures moments as full 3D environments that can be revisited, navigated, and projected back into the world through holograms and 3D tracking. Instead of replaying footage, you can relive events and explore them as a navigable space.

Our goal isn’t just better media. It’s building a long-term system for preserving reality itself, so future generations can experience moments in history, science, and everyday life as places, not recordings.

Key Capabilities

  1. Live streaming of holographic capture to any screen
  2. Capture once, replay anywhere
  3. Markups/annotations
  4. Instant replay of the last 30 seconds
  5. Temporal difference comparison (Git diffs for reality)
  6. Separate interactions per streaming endpoint
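The instant-replay capability above amounts to a sliding time window over the incoming frame stream. Here is a minimal Python sketch of such a buffer; the `ReplayBuffer` class and its API are hypothetical illustrations, not Lattice's actual implementation.

```python
from collections import deque
import bisect

class ReplayBuffer:
    """Keeps the most recent `window_s` seconds of timestamped frames,
    supporting instant replay from any offset inside the live window."""

    def __init__(self, window_s=30.0):
        self.window_s = window_s
        self.frames = deque()          # (timestamp, frame) pairs, oldest first

    def push(self, timestamp, frame):
        self.frames.append((timestamp, frame))
        # Evict frames that have fallen out of the replay window.
        while self.frames and timestamp - self.frames[0][0] > self.window_s:
            self.frames.popleft()

    def replay(self, offset_s):
        """Frames from `offset_s` seconds before the newest frame onward."""
        if not self.frames:
            return []
        cutoff = self.frames[-1][0] - offset_s
        times = [t for t, _ in self.frames]
        i = bisect.bisect_left(times, cutoff)
        return [f for _, f in list(self.frames)[i:]]

buf = ReplayBuffer(window_s=30.0)
for t in range(100):                   # one frame per second for 100 s
    buf.push(float(t), {"t": t})
clip = buf.replay(10.0)                # the last 10 seconds of frames
```

The same timestamped buffer also supports temporal difference comparison: two offsets select two frame sets whose point clouds can be diffed voxel by voxel.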

How we built it

Core Hardware:

System Architecture (Distributed Client-Server Architecture):

Lattice uses Kinect sensors for their LiDAR + RGB capture capabilities [6]. Three sensors each stream their data through an instance of the client app to a central server app, which calibrates and merges the streams into a single 3D point-cloud scene [3],[4].
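To make the client-to-server hop concrete, here is a sketch of how a point-cloud chunk might be serialized for transport. The wire format shown (a fixed header followed by packed XYZ/RGB points) is an assumption for illustration only, not Lattice's actual protocol.

```python
import struct

# Hypothetical wire format (not Lattice's real protocol): a fixed header
# followed by one packed record per point.
HEADER = struct.Struct("<BdI")         # sensor_id (u8), timestamp (f64), count (u32)
POINT = struct.Struct("<fffBBB")       # x, y, z (f32) + r, g, b (u8)

def pack_chunk(sensor_id, timestamp, points):
    buf = bytearray(HEADER.pack(sensor_id, timestamp, len(points)))
    for x, y, z, r, g, b in points:
        buf += POINT.pack(x, y, z, r, g, b)
    return bytes(buf)

def unpack_chunk(data):
    sensor_id, timestamp, n = HEADER.unpack_from(data, 0)
    pts = [POINT.unpack_from(data, HEADER.size + i * POINT.size)
           for i in range(n)]
    return sensor_id, timestamp, pts

# Round-trip one red point captured by sensor 2 at t = 12.5 s.
chunk = pack_chunk(2, 12.5, [(0.1, 0.2, 0.3, 255, 0, 0)])
sid, ts, pts = unpack_chunk(chunk)
```

A compact binary framing like this is what lets per-sensor streams be chunked for WebSocket transport and recorded for later playback; production systems would add compression on top [7].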

Calibration + Reconstruction:

Tech Stack:

Challenges we ran into

Use cases

A few examples:

Accomplishments that we are proud of

What we learned

Future Developments

Short term (coming soon):

References

[1] P. H. Schönemann, “A Generalized Solution of the Orthogonal Procrustes Problem,” Psychometrika, vol. 31, no. 1, pp. 1–10, 1966, doi: 10.1007/BF02289451.

[2] P. J. Besl and N. D. McKay, “A Method for Registration of 3-D Shapes,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 14, no. 2, pp. 239–256, 1992, doi: 10.1109/34.121791.

[3] B. Curless and M. Levoy, “A Volumetric Method for Building Complex Models from Range Images,” in Proc. SIGGRAPH, 1996, pp. 303–312, doi: 10.1145/237170.237269.

[4] R. B. Rusu and S. Cousins, “3D is here: Point Cloud Library (PCL),” in 2011 IEEE International Conference on Robotics and Automation (ICRA), pp. 1–4, doi: 10.1109/ICRA.2011.5980567.

[5] N. Garcia-D’Urso, B. Sanchez-Sos, J. Azorin-Lopez, A. Fuster-Guillo, A. Macia-Lillo, and H. Mora-Mora, “Marker-Based Extrinsic Calibration Method for Accurate Multi-Camera 3D Reconstruction,” arXiv:2505.02539, 2025.

[6] L. Yang, L. Zhang, H. Dong, A. Alelaiwi, and A. El Saddik, “Evaluating and Improving the Depth Accuracy of Kinect for Windows v2,” arXiv:2212.13844, 2022.

[7] A. Chen, S. Mao, Z. Li, M. Xu, H. Zhang, D. Niyato, and Z. Han, “An Introduction to Point Cloud Compression Standards,” GetMobile, vol. 27, no. 1, Mar. 2023.

[8] B. Kerbl, G. Kopanas, T. Leimkühler, and G. Drettakis, “3D Gaussian Splatting for Real-Time Radiance Field Rendering,” arXiv:2308.04079, 2023.