Embody is a social VR experience that uses visual metaphor and encouragement from teachers and friends to bring about coordinated body movement. Reclaiming the body’s potential inside the digital landscape, this experience is piloted entirely by body movement and position.

The project, a collaboration between MAP Design Lab and lululemon’s Whitespace team, was featured in the Sundance Film Festival’s 2019 New Frontier program. Glowbox provided technical direction and software development.

Without the need for controllers, trackers or markers, participants can move unencumbered from pose to pose and fluidly from scene to scene.

The system combines machine learning, computer vision, inside-out tracking, pressure-sensitive mats and directional audio to create a seamless experience that connects the audience with their body and the environment through movement.

It’s a critical moment culturally, with the advent of spatial computing and machine learning, for us to put the body back at the center of the potential relationship between technology and humans.
— Melissa Painter

One of the goals and technical challenges of this project was to put the participant’s feeling of embodiment at the center of the experience. Since VR is often an “out of body” experience, we wanted to create an experience that embraces being in your body and makes you want to move and breathe naturally.

With this in mind, we needed a low threshold for entry into the virtual reality experience and wanted to create as seamless a transition as possible from the physical world to the virtual space. We accomplished this by doing all the sensing “off the body” and activating the physical space with dynamic lighting. As such, the experience does not require controllers, suits, backpacks, markers or anything on the participant’s body besides the VR headset.

The VR experience is built using Unity, and we made extensive use of its compute shaders, Entity Component System, post-processing stack and Timeline features.

The pose detection system is built in Python and TensorFlow, integrated with a ZED camera. The camera is stereoscopic, allowing us to project the 2D pose into space and get approximated 3D pose data. Both the 2D and 3D poses are broadcast to the Unity app.
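Lifting a 2D keypoint into 3D from a stereo camera amounts to sampling the depth map at the keypoint and back-projecting through the pinhole model. A minimal sketch in Python, using placeholder intrinsics (`FX`, `FY`, `CX`, `CY` are assumptions for illustration; real values come from the ZED camera's calibration):

```python
import numpy as np

# Hypothetical intrinsics; in practice these are read from the camera calibration.
FX, FY = 700.0, 700.0   # focal lengths, in pixels (assumed)
CX, CY = 640.0, 360.0   # principal point, in pixels (assumed)

def back_project(u, v, depth_m):
    """Lift a 2D keypoint (u, v) in pixels to a 3D camera-space point in
    meters, given the depth sampled from the stereo depth map at (u, v)."""
    x = (u - CX) * depth_m / FX
    y = (v - CY) * depth_m / FY
    return np.array([x, y, depth_m])

# A keypoint at the principal point, 2 m away, back-projects to (0, 0, 2).
print(back_project(640.0, 360.0, 2.0))
```

The resulting per-joint points would then be serialized and sent to the Unity app (the source does not specify the transport; JSON over UDP is a common choice for this kind of pipeline).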

The pressure mat is used as a ground truth for the pose; the pressure data is processed with OpenCV in Python and broadcast to Unity.

Each sensing system (pose, pressure, position) has its own coordinate system. We built a calibration workflow in Unity that creates a unified coordinate system which is used to drive the experience.
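Unifying coordinate systems like this typically means estimating a rigid transform between corresponding points observed by two sensors (e.g. the same calibration poses seen by the camera and the mat). The project's calibration workflow lives in Unity; as an illustration only, here is the standard least-squares approach (the Kabsch algorithm) in Python:

```python
import numpy as np

def rigid_transform(src, dst):
    """Least-squares rigid transform (R, t) such that dst ~= R @ src + t,
    via the Kabsch algorithm. src and dst are (N, 3) arrays of
    corresponding points in the two coordinate systems."""
    src_mean, dst_mean = src.mean(axis=0), dst.mean(axis=0)
    # Cross-covariance of the centered point sets.
    H = (src - src_mean).T @ (dst - dst_mean)
    U, _, Vt = np.linalg.svd(H)
    # Correct for a possible reflection so R is a proper rotation.
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst_mean - R @ src_mean
    return R, t
```

Once R and t are known for each sensor, every reading can be mapped into the shared space that drives the experience.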

MAP Design Lab

Experience Strategy
Technical Direction
Software Development
Systems Design

Tools & Technologies
Windows Mixed Reality

Media Coverage