When observers interact with an object, they typically engage in a complex set of behaviors that include active manipulation and dynamic viewing. These behaviors let observers see how the object looks from different perspectives and under different lighting conditions, providing rich visual information about its shape and material properties. Modern computer graphics methods can render realistic images of complex objects at interactive rates, enabling great advances in fields such as digital cinema, computer gaming, medical imaging, and cultural heritage archiving. However, in standard graphics systems, the observer is always one step removed from the object: images of the object are typically viewed on a screen, and interaction is usually mediated through interface devices such as a keyboard or mouse.
We are developing tangible imaging systems1–4 that enable natural interaction with virtual objects. Tangible imaging systems are based on modern mobile devices—such as laptop computers and tablets—that typically include touch screens, digital cameras, accelerometers, gyroscopes, and graphics hardware, in addition to traditional displays and input devices. We have developed custom software that takes advantage of these components to track the orientation of the device and the position of the observer in real time. Using this information, realistic images of 3D objects with complex shapes and material properties are rendered to the screen, and tilting the device or moving in front of it produces realistic changes in surface appearance. Tangible imaging systems thus enable virtual objects to be observed and manipulated as naturally as real ones, with the added benefit that object properties can be modified under the control of the user.
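The core idea of the rendering loop described above can be sketched in a few lines: read the device tilt (e.g., from the accelerometer's gravity vector), derive the screen's surface normal, and re-shade each point so that tilting changes the apparent highlights. The following Python sketch is purely illustrative; the function names and the simple Blinn-Phong shading model are assumptions for demonstration, not the actual renderer used in these systems.

```python
import math

def normalize(v):
    """Return v scaled to unit length."""
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def device_normal(pitch, roll):
    """Surface normal of the screen after tilting the device.

    pitch and roll are in radians, as might be derived from an
    accelerometer's gravity vector (hypothetical sensor interface).
    The untilted screen normal is (0, 0, 1), facing the observer.
    """
    nx = math.sin(roll) * math.cos(pitch)
    ny = -math.sin(pitch)
    nz = math.cos(roll) * math.cos(pitch)
    return (nx, ny, nz)

def blinn_phong(normal, light_dir, view_dir, shininess=32.0):
    """Diffuse + specular intensity at one surface point (Blinn-Phong)."""
    n = normalize(normal)
    l = normalize(light_dir)
    v = normalize(view_dir)
    diffuse = max(0.0, sum(a * b for a, b in zip(n, l)))
    half = normalize(tuple(a + b for a, b in zip(l, v)))
    specular = max(0.0, sum(a * b for a, b in zip(n, half))) ** shininess
    return diffuse + specular

# Device held flat, light and observer head-on: brightest case.
flat = blinn_phong(device_normal(0.0, 0.0), (0, 0, 1), (0, 0, 1))

# Tilting the device away from the light dims both diffuse and specular
# terms, which is the real-time appearance change the text describes.
tilted = blinn_phong(device_normal(0.3, 0.0), (0, 0, 1), (0, 0, 1))
```

In a real system the observer's position (tracked with the front-facing camera) would replace the fixed `view_dir`, so that moving one's head also shifts the highlights, but the shading principle is the same.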