See and feel virtual water with this immersive crossmodal perception system from Solidray


by DigInfo TV
07 December 2012




Solidray, a company specializing in virtual reality production, has released an immersive crossmodal system that combines visual and tactile feedback, enabling users to see and feel flowing water in a virtual space.

“When you put on the 3D glasses, the scene appears to be coming towards you. You’re looking at a virtual world created in the computer. The most important thing is, things appear life-sized, so the female character appears life-sized before the user’s eyes. So, it looks as if she is really in front of you. Also, water is flowing out of the 3D scene. When the user takes a cup, and places it against the water, vibration is transmitted to the cup, making it feel as if water is pouring into the cup.”

The glasses are fitted with a magnetic sensor that precisely tracks the user's head position and line of sight in 3D. The system dynamically updates the rendered viewpoint to match the viewing position, so the user can look into the scene from any direction.
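
As a rough illustration of this kind of head-coupled rendering (not Solidray's actual implementation), the view matrix can be rebuilt every frame from the sensed eye position. A minimal Python sketch, assuming the tracker reports the head position in metres and the life-sized character stands near the origin:

```python
import numpy as np

def look_at(eye, target, up=np.array([0.0, 1.0, 0.0])):
    """Build a right-handed view matrix from the tracked eye position."""
    f = target - eye
    f = f / np.linalg.norm(f)              # forward
    s = np.cross(f, up)
    s = s / np.linalg.norm(s)              # right
    u = np.cross(s, f)                     # true up
    view = np.eye(4)
    view[0, :3], view[1, :3], view[2, :3] = s, u, -f
    view[:3, 3] = view[:3, :3] @ -eye      # translate world so the eye sits at the origin
    return view

# Hypothetical tracker reading (metres): head position from the magnetic sensor.
head_position = np.array([0.12, 1.55, 0.80])
scene_centre  = np.array([0.00, 1.50, 0.00])        # where the virtual character stands
view_matrix = look_at(head_position, scene_centre)  # recomputed every frame
```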

The tactile element uses the TECHTILE toolkit, a haptic recording and playback tool developed by a research group at Keio University. The sensation of water being poured is recorded in advance with a microphone, and when the position of the cup overlaps the parabolic arc of the water, the recorded sensation is played back. The position of the cup is measured using an infrared camera.
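
A minimal sketch of that trigger logic in Python, assuming the stream is modelled as a simple projectile arc from the spout; the positions, velocities, and the playback placeholder are illustrative assumptions, not the TECHTILE API:

```python
import numpy as np

def water_arc(spout, v0, g=9.81, steps=200, duration=0.6):
    """Sample the parabolic path of water leaving the spout with initial velocity v0."""
    t = np.linspace(0.0, duration, steps)[:, None]
    return spout + v0 * t + 0.5 * np.array([0.0, -g, 0.0]) * t ** 2

def cup_in_stream(cup_pos, arc, radius=0.04):
    """True if the tracked cup is within `radius` metres of any point on the arc."""
    return np.min(np.linalg.norm(arc - cup_pos, axis=1)) < radius

# Hypothetical geometry (metres): spout 1.2 m up, water leaving mostly horizontally.
arc = water_arc(spout=np.array([0.0, 1.2, 0.0]), v0=np.array([0.4, 0.0, 0.0]))
cup_pos = np.array([0.20, 1.05, 0.0])   # reported by the infrared camera (assumed)

if cup_in_stream(cup_pos, arc):
    # Here the pre-recorded vibration waveform would be streamed to the cup's
    # actuator, reproducing the sensation of water pouring into the cup.
    play_recorded_vibration = True
```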

“Here, we’ve added tactile as well as visual sensations. Taking things that far makes other sensations arise in the brain. You can really feel that you’ve gone into a virtual space. All we’re doing is making the cup vibrate, but some users even say it feels cold or heavy.”

“We’re researching how to make users feel sensations that aren’t being delivered. We’d like to use that in promotions. For example, this system uses a cute character. Cute characters are said to be two-dimensional, but they can become three-dimensional. We think it’s more fun to look at a life-sized character than a little figure. So, we think business utilizing that may emerge.”





DigInfo TV is a Tokyo-based online video news platform dedicated to producing original coverage of cutting edge technology, research and products from Japan.




