Microsoft’s `MirageTable` brings augmented reality to life

The MirageTable was demonstrated at a conference in Austin, Texas and is outlined on the organisation’s research site.

London: Microsoft has come up with an augmented reality system that allows users at different locations to work together on tabletop activities, sharing objects that both can handle.

Researchers said it could “fool” the eye into suggesting that both parties were working in a “seamless 3D shared task space”, a channel reported.

The team maintained that more work is required before the system can be marketed.

The MirageTable uses a 3D video projector to beam images onto a sheet of curved white plastic placed in front of the user.

At each end, one of Microsoft’s Kinect depth-camera sensors tracks the direction of each person’s gaze and also captures the shape and appearance of objects placed on the surface, as well as the participant sitting behind them.

Users are also asked to wear shutter glasses in order to see the projected image in three dimensions. Two computers linked by a network connection are needed to power the experience.

The researchers said they were “motivated by a simple idea: can we enable the user to interact with 3D digital objects alongside real objects in the same physically realistic way and without wearing any additional trackers, gloves or gear.”

They asserted that the experience was a significant improvement on current video conferencing technologies.

“In our system, the user can hold a virtual object, move it, or knock it down, since all virtual and real objects participate in a real-world physics simulation... The unique benefit of this setup is that two users share not only the 3D image of each other, but also the tabletop task space in front of them.”

In a video posted online, two people can be seen working at different locations to build an object out of blocks, with one researcher measuring the distance between the pieces placed by the other participant.

A research paper also mentioned that the technology could be used to create a single-person gaming experience.

However, the researchers acknowledged that the project was far from perfect.

Currently, the Kinect device captures only the front faces of objects, leaving gaps and imperfect texturing. The researchers suggested that this could be fixed by using additional cameras.

The set-up also only lets users scoop or catch objects from below in order to hold them in their hands.

“Simulating realistic grasping behaviours given depth camera input remains an open research problem,” the researchers admitted.

“While we are still very far from an implementation of a working version of Star Trek’s Holodeck, MirageTable shows the potential of the projector/depth camera system to simulate such scenarios,” they added.