Robohub.org
 

Paper computing technology: the first step to paper-based Google Docs


by DigInfo TV
30 October 2012




As well as a camera and computer, this system uses a laser and UV light, making it possible to manipulate hand-drawn sketches directly from the computer.

So, for example, the user can leave only the edges of hand-written characters, creating 3D-like text, or draw a figure by hand and have it colored in automatically.
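The video does not detail how the edges-only effect is computed. As a rough illustration (not the researchers' actual algorithm), keeping only the outline of a drawn shape can be sketched as stripping every pixel whose four neighbours are all drawn:

```python
import numpy as np

def outline(mask):
    """Keep only the edge pixels of a binary sketch: a pixel survives
    if it is drawn but has at least one undrawn 4-neighbour."""
    padded = np.pad(mask, 1)  # False border so edges of the array count as undrawn
    interior = (padded[:-2, 1:-1] & padded[2:, 1:-1] &
                padded[1:-1, :-2] & padded[1:-1, 2:])
    return mask & ~interior

# A solid 4x4 block: outlining leaves a hollow square.
block = np.ones((4, 4), dtype=bool)
edges = outline(block)
print(edges.astype(int))
```

In the actual system, the "interior" pixels identified this way would correspond to ink regions targeted for laser erasure, leaving the outline on the paper.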

“This is one technology for truly turning ordinary paper into a display. Until now, it’s been possible to project things onto paper and use it as a screen, or import things drawn on paper to PC by using a digital pen. But the first method uses light, so the results can only be seen in the dark, and with the second method, even if you can import things, you can’t access them on paper from the computer.”

The pen used for sketching contains Frixion thermo-sensitive ink, which turns transparent when heated; sketches drawn with Frixion pens are lit from behind by a laser to erase them. The ink can be erased with high accuracy, at intervals of 0.024 mm.
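For a sense of scale, a back-of-envelope conversion (my arithmetic, not a figure from the report) turns the 0.024 mm erase pitch into an equivalent addressable resolution:

```python
# The reported erase pitch of 0.024 mm corresponds to roughly 1058 dots
# per inch (25.4 mm / 0.024 mm) -- finer than a typical office printer.
MM_PER_INCH = 25.4
erase_pitch_mm = 0.024
dpi = MM_PER_INCH / erase_pitch_mm
print(round(dpi))  # -> 1058
```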

The paper is coated with a photochromic material, which changes color when it absorbs light, and a DMD-driven UV projector with a resolution of 1024 x 768 pixels is used to print the image onto the paper.
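A DMD projector is essentially a grid of binary on/off mirrors, so driving the 1024 x 768 UV printer amounts to sending it a boolean frame. As a hypothetical sketch (the function, threshold, and interface are my assumptions, not the team's implementation), mapping a grayscale target image to per-mirror UV exposure might look like:

```python
import numpy as np

DMD_W, DMD_H = 1024, 768  # projector resolution given in the article

def uv_exposure_mask(image, threshold=0.5):
    """Hypothetical sketch: darker target pixels -> mirror ON (UV exposed),
    since the photochromic coating colours where it absorbs UV light."""
    assert image.shape == (DMD_H, DMD_W)
    return image < threshold  # boolean on/off state per DMD mirror

# Example: expose a random grayscale target.
frame = uv_exposure_mask(np.random.rand(DMD_H, DMD_W))
print(frame.shape, frame.dtype)
```

Grayscale or color rendering would need more than a single binary frame (e.g. varying exposure time), which this sketch does not attempt.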

“The idea is to do computing on paper. But in the future, we’d like to enable several people to create one document, like with Google Docs, actually using real-world paper while far apart. We’d also like to enhance the rendering that’s possible through collaboration between people and computers. For example, by giving more detailed access than you get by hand, and enabling you to draw large areas at once.”




DigInfo TV is a Tokyo-based online video news platform dedicated to producing original coverage of cutting edge technology, research and products from Japan.






 
