Robohub.org
 

See and feel virtual water with this immersive crossmodal perception system from Solidray


07 December 2012

Solidray, a virtual reality production company, has released an immersive crossmodal system that combines visual and tactile feedback, letting the user see and feel flowing water in a virtual space.

“When you put on the 3D glasses, the scene appears to be coming towards you. You’re looking at a virtual world created in the computer. The most important thing is, things appear life-sized, so the female character appears life-sized before the user’s eyes. So, it looks as if she is really in front of you. Also, water is flowing out of the 3D scene. When the user takes a cup, and places it against the water, vibration is transmitted to the cup, making it feel as if water is pouring into the cup.”

The glasses carry a magnetic sensor that precisely tracks the user's viewing position and direction in 3D. The system dynamically updates the rendered viewpoint to match the viewing position, so the user can look into the scene from any direction.
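The head-coupled perspective described above can be sketched in a few lines: the tracked eye position drives the virtual camera each frame, so the scene shifts naturally as the viewer moves. The screen geometry and all names below are illustrative assumptions, not Solidray's actual code.

```python
import math

# Assumed geometry: screen plane at z = 0, centre 1 m above the floor.
SCREEN_CENTER = (0.0, 1.0, 0.0)  # metres

def view_angles(eye_x, eye_y, eye_z):
    """Yaw and pitch (radians) from the eye to the screen centre.

    eye_* would come from the magnetic sensor on the 3D glasses; the
    renderer re-aims the virtual camera with these angles every frame,
    which is what lets the user peer into the scene from the side.
    """
    dx = SCREEN_CENTER[0] - eye_x
    dy = SCREEN_CENTER[1] - eye_y
    dz = SCREEN_CENTER[2] - eye_z
    yaw = math.atan2(dx, -dz)                     # left/right of centre
    pitch = math.atan2(dy, math.hypot(dx, -dz))   # above/below centre
    return yaw, pitch
```

Standing 1 m directly in front of the screen centre gives zero yaw and pitch; stepping to the right produces a negative yaw, swinging the virtual camera accordingly.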

The tactile element uses the TECHTILE toolkit, a haptic recording and playback tool developed by a research group at Keio University. The sensation of water being poured into a cup is recorded in advance with a microphone; when the cup, tracked by an infrared camera, overlaps the parabolic arc of the water stream, the recorded sensation is played back.

“Here, we’ve added tactile as well as visual sensations. Taking things that far makes other sensations arise in the brain. You can really feel that you’ve gone into a virtual space. All we’re doing is making the cup vibrate, but some users even say it feels cold or heavy.”

“We’re researching how to make users feel sensations that aren’t being delivered. We’d like to use that in promotions. For example, this system uses a cute character. Cute characters are said to be two-dimensional, but they can become three-dimensional. We think it’s more fun to look at a life-sized character than a little figure. So, we think business utilizing that may emerge.”




DigInfo TV is a Tokyo-based online video news platform dedicated to producing original coverage of cutting edge technology, research and products from Japan.




