9 May

Indoor Reality

At A Glance

“To map the world” is the mission of Indoor Reality, a company that offers innovative solutions to the challenges of mapping the indoors.  Founded by Dr. Avideh Zakhor, who was behind the technology that makes Google Earth possible, Indoor Reality presents a similarly efficient and mobile method for capturing interactive 3-D models of interior spaces through their wearable data acquisition systems and software pipeline.


In-Depth: Mapping the Interior

It is the Golden Age of Armchair Travelling.  Nearly a decade after the launch of Google Street View in the United States, the service has expanded to allow anyone to venture anywhere, from the verge of a volcano in Vanuatu to remote mining towns on the edge of the Arctic Circle, all from the comfort of home.  The mobile mapping methodology of Google’s Street View Car, which makes these armchair travel experiences possible, actually originated from an idea by researchers Avideh Zakhor and Christian Früh at the University of California, Berkeley’s Video and Image Processing Laboratory in the early 2000s.  Together they developed a means to rapidly collect data on the outdoor world by mounting laser scanners and cameras on vehicles.

Google’s Street View Car (Left) and the Ground-Based Data Acquisition System (Right) first developed by Berkeley’s Video and Image Processing Laboratory.

In 2002, they used this technology to create a 3-D model of downtown Berkeley resembling the models you can observe in Google Earth today.  With the assistance of DARPA, they founded the startup Urban Scan to commercialize their technologies; the company was subsequently acquired by Google in 2007.  With the mapping of the outdoors in the hands of Google, Dr. Zakhor turned her focus to the next uncharted realm: the indoor world.

The 3-D render of downtown Berkeley created by fusing data acquired from both ground-based and aerial acquisition methods.  Image retrieved from Berkeley’s EECS Research Projects Page.

Indoor mapping is a field that presents its own unique challenges.  When creating similar 3-D maps of the outdoors, location verification is facilitated by satellites and GPS.  However, these technologies become severely limited once you step indoors, complicating the determination of one’s precise location.  Existing means of creating 3-D models of indoor spaces involve wheeled systems such as pushcarts equipped with sensors and laser scanners.  These methods are not only time-consuming but also impractical for locations with uneven surfaces, stairways, and tight, narrow areas.

Wearable devices such as Indoor Reality’s backpack offer a portable solution for mapping interior spaces.  Image retrieved from a video by Berkeley Engineering.

In spaces meant to be traversed by humans, why not create a wearable device to map and index them?  With this idea in mind, Dr. Zakhor developed both hardware and software for the rapid modeling of indoor spaces and founded Indoor Reality in 2015.  Rather than a pushcart on wheels, Indoor Reality’s mapping technologies take the form of a backpack loaded with sensors and a handheld device, both of which can negotiate even the most challenging indoor terrain.  These mobile mapping units allow an individual to quickly collect a wealth of data on their surroundings simply by walking through them.  During this walkthrough, the collected data is simultaneously uploaded to the cloud for processing, analysis, and visualization.  The result, typically a point cloud, is combined with the collected imagery and any available floorplans to produce an interactive, navigable model of the indoor space, viewable in Indoor Reality’s webviewer software.

 The hardware and software behind Indoor Reality’s mapping services.  Image from their website.  

Despite the increased accessibility to interior spaces offered by Indoor Reality’s wearable mapping devices, the mathematics of pinpointing location is complicated by the intricacies inherent in human locomotion.  “If you have a pushcart system, in mathematical terms you are solving a three degrees of freedom problem… but when you have a backpack on you it’s what we call a six degrees of freedom problem… so it becomes much harder,” states Dr. Zakhor.

A diagram that describes the different degrees of freedom.  Image from LeadingOnes.
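To make that distinction concrete, here is a minimal Python sketch (illustrative only, not Indoor Reality’s code) of the pose each kind of system must estimate: a pushcart rolling on a flat floor needs three numbers, while a sensor backpack swaying on a walking person needs six.

```python
from dataclasses import dataclass

@dataclass
class PushcartPose:
    """Wheeled system on a flat floor: 3 degrees of freedom."""
    x: float    # position along the floor (meters)
    y: float    # position across the floor (meters)
    yaw: float  # heading (radians)

@dataclass
class BackpackPose:
    """Wearable system on a walking person: 6 degrees of freedom."""
    x: float      # position (meters)
    y: float
    z: float      # height changes with every step and on stairs
    roll: float   # side-to-side sway of the torso (radians)
    pitch: float  # forward and backward lean (radians)
    yaw: float    # heading (radians)
```

Every extra unknown must be estimated at every instant of the walkthrough, which is why the wearable problem is so much harder than the wheeled one.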

So how does Indoor Reality’s technology resolve the issue of indoor localization and the complexities of the human gait?  The answer lies in the variety of sensors loaded onto the backpack, which include accelerometers, barometers, infra-red and fisheye cameras, gyroscopes, magnetometers, and more.  Indoor Reality fuses these sensors and applies an advanced technique from robotics known as Simultaneous Localization and Mapping (SLAM) to compensate for the natural variations in the motion of walking and accurately locate each position at which data was collected.  The resulting point cloud maps of the interior space are accurate to within ±10 centimeters, and capturing them is up to 100 times faster than traditional static scanning.
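As a rough illustration of the kind of correction SLAM performs (a toy sketch, not Indoor Reality’s algorithm; the step lengths and noise levels here are invented), the Python snippet below dead-reckons a walker’s path from noisy heading estimates and then uses a loop closure, the knowledge that the walker returned to the starting point, to spread the accumulated drift back along the trajectory.

```python
import numpy as np

def dead_reckon(step_lengths, headings):
    """Integrate step lengths and headings into a 2-D trajectory.
    Small per-step errors accumulate into drift, the core problem SLAM corrects."""
    poses = [np.zeros(2)]
    for d, theta in zip(step_lengths, headings):
        poses.append(poses[-1] + d * np.array([np.cos(theta), np.sin(theta)]))
    return np.array(poses)

def close_loop(poses):
    """Toy loop closure: the walker ends where they started, so the last pose
    should equal the first.  Spread the residual error linearly along the path,
    a crude stand-in for full pose-graph optimization."""
    error = poses[-1] - poses[0]
    weights = np.arange(len(poses)) / (len(poses) - 1)
    return poses - np.outer(weights, error)

# Walk a square loop: noisy headings make the raw trajectory fail to close.
rng = np.random.default_rng(0)
true_headings = np.repeat([0.0, np.pi / 2, np.pi, 3 * np.pi / 2], 25)
noisy_headings = true_headings + rng.normal(0.0, 0.02, true_headings.size)
raw = dead_reckon(np.full(true_headings.size, 0.8), noisy_headings)
corrected = close_loop(raw)
print("closure error before:", np.linalg.norm(raw[-1] - raw[0]))
print("closure error after: ", np.linalg.norm(corrected[-1] - corrected[0]))
```

A real SLAM system jointly optimizes over many such constraints from all of the backpack’s sensors at once; the toy above corrects only a single loop, but it shows why revisiting a known location lets the system pin down every pose along the way.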

Indoor Reality’s technology can map specific assets such as lighting in interior spaces. Image retrieved from Indoor Reality.

Since March 2016, Indoor Reality has mapped over 2 million square feet of indoor space.  Their services have been used by hospitals, universities, public transport operators, airports, and many other clients to map and model interior spaces for multiple purposes.  The infra-red cameras on the backpack can generate thermal imagery for evaluating energy efficiency within buildings, providing an expedient, cost-effective, and accurate way to conduct energy audits.  Using panoramic imagery and depth information collected from the sensors, Indoor Reality’s webviewer allows for virtual walkthroughs with annotation capabilities, creating a collaboration platform for architecture, engineering, and construction (AEC) professionals.

In an interview with Big Think, Dr. Zakhor discusses the importance and applications of Indoor Mapping.

But beyond the AEC fields, where else can Indoor Reality’s technologies be applied?  With drone-based package delivery potentially arriving in the near future, high-resolution maps of interior environments will be essential for a seamless handoff between outdoor and indoor spaces.  Indoor Reality’s 3-D models of interior spaces can support drone delivery systems that route goods from a building’s entrance to the proper location within dense, multi-use dwellings.  In the event of a disaster, comprehensive maps of buildings may aid first responders in saving lives.  “Having a 3D map of a building in the event of a fire or an earthquake is extremely important as it allows emergency responders to plan their attack,” states Dr. Zakhor.  The same 3-D models created through Indoor Reality’s technology can also be used by developers in the gaming industry, whether to build hyper-realistic environments or to create VR games that adapt to the player’s surroundings.  To further the potential of their technologies, the company is currently developing algorithms to automatically detect assets within indoor spaces.  Through their work on modeling and accurately indexing the interior space, Indoor Reality continues to work towards their mission to “Map the World”.

A video by KQED that describes the diversity of applications for Indoor Reality’s technology.

To learn more about Indoor Reality, visit their website here or follow them on Twitter!

Article written by Glenn Liu
