Robohub.org
 

Paper computing technology: the first step to paper-based Google Docs


30 October 2012




As well as a camera and computer, this system uses a laser and UV light, making it possible for the computer to work directly on hand-drawn sketches.

So, for example, the user can keep only the edges of hand-written characters to create 3D-like text, or draw a figure by hand and have it colored in automatically.
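The "keep only the edges" effect can be pictured as computing an erase mask over the ink raster: pixels of ink that border blank paper are kept, and interior pixels are flagged for laser erasure. The sketch below is a hypothetical illustration of that idea (the function names and the 4-neighbour rule are assumptions, not the system's actual algorithm):

```python
# Hypothetical sketch of the edge-keeping effect: ink is a binary raster
# (1 = Frixion ink present, 0 = blank paper).

def edge_mask(ink):
    """Return a same-sized grid with 1 only where ink borders blank paper."""
    h, w = len(ink), len(ink[0])

    def borders_blank(y, x):
        for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            ny, nx = y + dy, x + dx
            # Off-paper or blank neighbours both count as a border.
            if not (0 <= ny < h and 0 <= nx < w) or ink[ny][nx] == 0:
                return True
        return False

    return [[1 if ink[y][x] and borders_blank(y, x) else 0
             for x in range(w)] for y in range(h)]

def erase_mask(ink):
    """Interior ink pixels: what the laser would heat to make transparent."""
    edges = edge_mask(ink)
    return [[1 if ink[y][x] and not edges[y][x] else 0
             for x in range(len(ink[0]))] for y in range(len(ink))]
```

For a solid 3 x 3 blob of ink, only the single centre pixel is interior, so the erase mask marks just that pixel and the outline survives.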

“This is one technology for truly turning ordinary paper into a display. Until now, it’s been possible to project things onto paper and use it as a screen, or to import things drawn on paper to a PC using a digital pen. But the first method uses light, so the results can only be seen in the dark, and with the second method, even if you can import things, you can’t access them on paper from the computer.”

The pen for sketching uses Frixion thermo-sensitive ink, which becomes transparent when heated, and sketches drawn with Frixion pens are heated from behind by a laser to erase them. The ink can be erased to a high level of accuracy, at intervals of 0.024 mm.
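As a back-of-the-envelope check (our arithmetic, not a figure from the article), a 0.024 mm erase interval works out to roughly 25.4 / 0.024 ≈ 1058 dots per inch, i.e. finer than a typical 600 dpi office laser printer:

```python
# Convert the quoted erase interval to an equivalent dots-per-inch figure.
MM_PER_INCH = 25.4
erase_interval_mm = 0.024            # figure quoted in the article
dpi = MM_PER_INCH / erase_interval_mm
print(round(dpi))                    # ≈ 1058
```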

The paper is coated with a photochromic material, which changes color when it absorbs light, and a DMD-driven UV projector with a resolution of 1024 x 768 pixels is used to print the image onto the paper.
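Because each DMD micromirror is simply on or off, a greyscale image has to be reduced to a 1024 x 768 binary frame before UV exposure. The sketch below illustrates that step under stated assumptions; the threshold value, cropping behaviour, and function name are ours, not the actual projector pipeline:

```python
# Illustrative sketch of preparing a frame for a DMD-driven UV projector.
DMD_W, DMD_H = 1024, 768   # resolution quoted in the article

def to_dmd_frame(grey, threshold=128):
    """Binarise a greyscale raster (rows of 0-255 values) into mirror states.

    1 = mirror on (UV exposes the photochromic coating, darkening the paper),
    0 = mirror off. Pixels beyond the DMD's 1024 x 768 area are cropped.
    """
    return [[1 if v >= threshold else 0 for v in row[:DMD_W]]
            for row in grey[:DMD_H]]
```

A tiny 2 x 2 input like `[[0, 200], [130, 50]]` would expose only the two pixels at or above the threshold.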

“The idea is to do computing on paper. But in the future, we’d like to enable several people to create one document, like with Google Docs, actually using real-world paper while far apart. We’d also like to enhance the rendering that’s possible through collaboration between people and computers. For example, by giving more detailed access than you get by hand, and enabling you to draw large areas at once.”




DigInfo TV is a Tokyo-based online video news platform dedicated to producing original coverage of cutting edge technology, research and products from Japan.




