What is spatial computing?

Spatial computing broadly characterizes the processes and tools used to capture, process and interact with three-dimensional (3D) data. At its core, it is technology that lets computers blend digital information with data from the physical world around them in a natural way.

Spatial computing in action could be as simple as controlling the lights when a person walks into a room or as complex as using a network of 3D cameras to model a factory process. Spatial computing concepts also play a role in orchestrating autonomous computing applications in warehouse automation, self-driving cars and supply chain automation.

Users commonly interact with spatial computing applications through virtual reality (VR) headsets that mirror the physical world, or mixed reality devices that overlay data onto a view of the physical world.

Components of spatial computing can include camera sensors, internet of things (IoT) devices, digital twins, ambient computing, augmented reality (AR), VR, artificial intelligence (AI) and physical controls. Substantial advancements in these technologies are making spatial computing more practical.

The term spatial computing was coined by Simon Greenwold, who described the concept's importance in his 2003 master's thesis at MIT as "an essential component for making our machines fuller partners in our work and play."

Companies including Apple, Google, Magic Leap, Meta and Microsoft offer spatial computing devices for consumer audiences.

How does spatial computing work?

Spatial computing mirrors the way humans perceive and interact with objects in the real world. Humans translate the two-dimensional images that they see into a 3D model of the world, make sense of the objects in it and then direct their hands to act. For example, when we pour a cup of tea, we watch the cup as we pour, determine when it is full and then stop pouring. Spatial computing does the same, but with computers, sensors and actuators.
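
The tea-pouring analogy maps onto a simple sense-analyze-act control loop. The sketch below is purely illustrative: the level_sensor and valve objects are hypothetical stand-ins for real hardware interfaces, and the cup capacity is an assumed value.

    # Minimal sketch of the sense -> analyze -> act loop behind spatial computing.
    # level_sensor and valve are hypothetical stand-ins for real hardware drivers.
    import time

    CUP_FULL_LITERS = 0.25  # assumed capacity of the cup

    def pour_tea(level_sensor, valve):
        """Pour until the sensed fill level reaches the cup's capacity."""
        valve.open()                          # act: start pouring
        try:
            while True:
                level = level_sensor.read()   # sense: observe the cup
                if level >= CUP_FULL_LITERS:  # analyze: decide whether it is full
                    break
                time.sleep(0.05)              # re-check about 20 times per second
        finally:
            valve.close()                     # act: stop pouring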

Spatial computing involves the following steps:

  1. Gather data. Spatial mapping techniques are used to gather data about the user and the device's surroundings. Techniques such as photogrammetry, lidar and radar capture a 3D model of the world. Lidar and radar measure the reflection of a laser or radio signal off objects around a scanner to automatically capture a point cloud, a set of data points in space that represents 3D shapes and distances. Photogrammetry, often described as the art and science of creating 3D models from photographs, combines overlapping images from multiple viewpoints or cameras. Newer AI techniques can also build a richer representation from just a handful of images. (A minimal point-cloud sketch follows this list.)
  2. Analyze the data. Techniques such as machine vision analyze this data to make sense of the imagery. AI techniques help identify individual objects in a scene, their movement and their interactions with other objects. For example, they can look for product defects, recognize walking gait patterns or analyze how different workers perform a process.
  3. Take action. Handheld controllers, motion sensors and speech recognition help users interact with the device and their surrounding environment. For example, analysis of 3D data captured from the physical environment enables a self-driving car to detect a pedestrian in front of it and stop in real time. A building control system can adjust the heat or the lights when someone walks into a room, based on preferences stored in a database. (Steps 2 and 3 are combined in the presence-detection sketch after this list.)
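
As a rough illustration of step 1, the sketch below converts lidar-style range readings into 3D points. A real scanner reports a range per beam angle; the angles and distances used here are invented sample values, not output from an actual device.

    # Sketch: turning lidar-style range readings into a 3D point cloud.
    # The scan data below is fabricated sample input for illustration only.
    import math

    def ranges_to_points(scan):
        """Convert (azimuth, elevation, range) readings in radians and meters
        into (x, y, z) points relative to the scanner."""
        points = []
        for azimuth, elevation, distance in scan:
            x = distance * math.cos(elevation) * math.cos(azimuth)
            y = distance * math.cos(elevation) * math.sin(azimuth)
            z = distance * math.sin(elevation)
            points.append((x, y, z))
        return points

    # Three made-up beams: straight ahead, 30 degrees left and slightly upward.
    sample_scan = [(0.0, 0.0, 2.5),
                   (math.radians(30), 0.0, 3.1),
                   (0.0, math.radians(10), 2.8)]
    print(ranges_to_points(sample_scan))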
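
Steps 2 and 3 can be sketched just as briefly: analyze a point cloud for occupancy in one region of a room, then act on the result, echoing the smart-lighting example above. The lights object, the preferences lookup and the occupancy threshold are hypothetical placeholders, not a real building-control API.

    # Sketch of steps 2 and 3: analyze a point cloud for presence in a zone,
    # then act by adjusting the lights. lights and preferences are hypothetical
    # placeholders for a real building-control interface and database.

    def points_in_zone(points, zone_min, zone_max):
        """Count points that fall inside an axis-aligned box, such as a doorway."""
        return sum(
            all(lo <= p[i] <= hi for i, (lo, hi) in enumerate(zip(zone_min, zone_max)))
            for p in points
        )

    def on_new_scan(points, lights, preferences, occupant_id="default"):
        # Analyze: enough points inside the doorway zone means someone walked in.
        if points_in_zone(points, zone_min=(0.0, -1.0, 0.0), zone_max=(1.0, 1.0, 2.0)) > 50:
            # Act: apply the stored lighting preference for that occupant.
            lights.set_brightness(preferences.get(occupant_id, 0.8))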

Spatial computing experienced through a headset uses technology such as AR and VR, internal and external camera sensors, controllers and motion tracking. These headsets gather data about the user, their movements and their surroundings, then analyze and interpret that incoming data to respond accordingly.

Key features and benefits of spatial computing

Spatial computing can improve enterprise processes in the following ways:

Industry use cases for spatial computing

Spatial computing is being used in the following ways:

Examples of spatial computing

The following are examples of spatial computing:

Examples of spatial computing headsets include the following:

Spatial computing vs. AR and VR

Spatial computing can be used as a general term that extends to technologies such as AR, VR and mixed reality.

VR simulates a 3D environment that lets users explore and interact with a virtual surrounding. The environment is created using computer hardware and software delivered through a wearable headset. AR works similarly, but instead of simulating a different environment, it overlays simulations on top of real-world environments.

AR, VR and spatial computing differ in that, in spatial computing, digital simulations can interact with or appear to modify physical environments.

For example, a digital object displayed through a headset can be anchored so that it appears to rest on a real-world table. The user could walk around the table in real space to see the back of the object, pick it up and place it on a real pedestal. The headset must be able to render the digital object, understand the real-world environment and the objects in it, and let the object interact with the user and those nearby physical objects.
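
One way to picture this behavior is the geometry a headset might use to keep a virtual object attached to a real surface: cast a ray from the user's gaze, intersect it with the detected tabletop plane and pin the object's position in world coordinates so it stays put as the user moves. The sketch below uses plain vector math with made-up plane and gaze values; it is not any vendor's actual tracking pipeline.

    # Sketch: anchoring a virtual object to a detected real-world plane.
    # Plane and gaze values are invented; a real headset's tracking stack
    # would supply them from its environment mapping.
    import numpy as np

    def ray_plane_intersection(ray_origin, ray_dir, plane_point, plane_normal):
        """Return the point where a ray hits a plane, or None if it misses."""
        denom = np.dot(plane_normal, ray_dir)
        if abs(denom) < 1e-6:
            return None  # ray is parallel to the plane
        t = np.dot(plane_normal, plane_point - ray_origin) / denom
        return ray_origin + t * ray_dir if t >= 0 else None

    # Tabletop plane 0.75 m above the floor, with its normal pointing up.
    table_point = np.array([0.0, 0.75, 0.0])
    table_normal = np.array([0.0, 1.0, 0.0])

    # The user looks down and forward from head height.
    gaze_origin = np.array([0.0, 1.6, 0.0])
    gaze_dir = np.array([0.0, -0.6, 0.8]) / np.linalg.norm([0.0, -0.6, 0.8])

    anchor = ray_plane_intersection(gaze_origin, gaze_dir, table_point, table_normal)
    # The object's pose is stored in world coordinates, so it appears to stay
    # on the table no matter where the user and headset move next.
    print(anchor)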

Spatial computing vs. edge computing

While spatial computing and edge computing sound similar, they refer to different ideas. Spatial computing blends digital and real-world data in a natural way, whereas edge computing moves data processing closer to where the data is generated.

Edge computing is a distributed IT architecture where client data is processed at the periphery of the network, as close to the originating source as possible.

In simplest terms, edge computing moves some portion of storage and compute resources out of the central data center and closer to the source of the data itself. Instead of transmitting raw data to a central data center for processing and analysis, that work is performed where the data is actively generated.

A spatial computing headset could be considered an edge computing device if, for example, the data its sensors capture is processed on the headset itself instead of being sent to a separate device.
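
The on-headset approach can be sketched as follows. The detect_hands and send_to_cloud callables are hypothetical placeholders standing in for whatever analysis and networking a real device would use; the point is only where the heavy work happens.

    # Sketch of the edge-computing idea applied to a headset: process sensor
    # data locally and transmit only compact results, not raw frames.
    # detect_hands and send_to_cloud are hypothetical placeholders.

    def handle_frame(frame, detect_hands, send_to_cloud):
        # Edge approach: heavy per-frame analysis runs on the headset itself...
        hand_poses = detect_hands(frame)       # e.g., a few dozen joint positions
        # ...so only a small summary crosses the network, cutting latency and
        # bandwidth compared with streaming raw camera data for remote processing.
        send_to_cloud({"hands": hand_poses})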

Future of spatial computing

Despite the potential benefits of spatial computing, the technology has had limited success. Spatial computing headsets typically have the following three issues:

However, these technologies have advanced recently, making the experience more practical. For example, the Apple Vision Pro, which some claim has the potential to revolutionize spatial computing, packs a large number of sensors into the headset. This makes moving around while wearing it and using hand gestures smoother and more responsive. The Apple Vision Pro uses two main cameras, four downward-facing cameras, two infrared cameras, two TrueDepth cameras, two side cameras and five other sensors.

