Imagine Martha, an octogenarian who lives independently and uses a wheelchair. All objects in her home are digitally catalogued; all sensors and the devices that control objects have been Internet-enabled; and a digital map of her home has been merged with the object map. As Martha moves from her bedroom to the kitchen, the lights switch on, and the ambient temperature adjusts. Her wheelchair slows if her cat crosses her path. When she reaches the kitchen, the table moves to improve her access to the refrigerator and stove, then moves back when she is ready to eat. Later, if she begins to fall when getting into bed, her furniture shifts to protect her, and an alert goes to her son and the local monitoring station.
The “spatial computing” at the heart of this scene is the next step in the ongoing convergence of the physical and digital worlds. It does everything virtual-reality and augmented-reality apps do: digitize objects that connect via the cloud; allow sensors and motors to react to one another; and digitally represent the real world. Then it combines these capabilities with high-fidelity spatial mapping to enable a computer “coordinator” to track and control the movements and interactions of objects as a person navigates through the digital or physical world. Spatial computing will soon bring human-machine and machine-machine interactions to new levels of efficiency in many walks of life, among them industry, health care, transportation and the home. Major companies, including Microsoft and Amazon, are heavily invested in the technology...
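To make the "coordinator" idea concrete, here is a minimal sketch of how such a system might work. Everything in it is an illustrative assumption: the `Coordinator` and `Device` names, the shared (x, y) map coordinates, and the simple proximity rule stand in for the far richer spatial mapping and control logic the article describes.

```python
from dataclasses import dataclass

@dataclass
class Device:
    """A hypothetical Internet-enabled object placed on the home's digital map."""
    name: str
    pos: tuple  # (x, y) position on the shared spatial map
    on: bool = False

class Coordinator:
    """Illustrative sketch: tracks a person's position on the digital map
    and switches devices when the person comes within range."""
    def __init__(self, devices, radius=2.0):
        self.devices = devices
        self.radius = radius  # assumed activation distance, in meters

    def update_person(self, pos):
        """Called whenever sensors report the person's new position."""
        x, y = pos
        for d in self.devices:
            dx, dy = d.pos[0] - x, d.pos[1] - y
            near = (dx * dx + dy * dy) ** 0.5 <= self.radius
            if near != d.on:  # state change: trigger the device
                d.on = near
                print(f"{d.name} {'on' if near else 'off'}")

# Usage: the kitchen lights switch on as the person approaches.
lights = Device("kitchen_lights", (10.0, 0.0))
coord = Coordinator([lights])
coord.update_person((0.0, 0.0))  # far away: nothing happens
coord.update_person((9.0, 0.0))  # within radius: kitchen_lights on
```

A real deployment would replace the distance rule with the high-fidelity spatial mapping the article mentions, but the core loop is the same: sensors report positions, the coordinator consults the map, and devices react.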
Scientific American: Spatial Computing Could Be the Next Big Thing