Reproducing paintings that make an impression

02 December 2018




The RePaint system reproduces paintings by combining two techniques, color-contoning and half-toning, with a deep learning model that determines how to stack 10 different inks to recreate specific shades of color.
Image courtesy of the researchers


By Rachel Gordon

The empty frames hanging inside the Isabella Stewart Gardner Museum serve as a tangible reminder of the world’s biggest unsolved art heist. While the original masterpieces may never be recovered, a team from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) might be able to help, with a new system aimed at designing reproductions of paintings.

RePaint uses a combination of 3-D printing and deep learning to authentically recreate favorite paintings, regardless of lighting conditions or placement. RePaint could be used to remake artwork for a home, protect originals from wear and tear in museums, or even help companies create prints and postcards of historical pieces.

“If you just reproduce the color of a painting as it looks in the gallery, it might look different in your home,” says Changil Kim, one of the authors on a new paper about the system, which will be presented at ACM SIGGRAPH Asia in December. “Our system works under any lighting condition, which shows a far greater color reproduction capability than almost any other previous work.”
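
That claim comes down to avoiding metamerism: two surfaces can look identical under one light source and visibly different under another, because a conventional color match pins down only three sensor responses, while the underlying reflectance spectrum has many more degrees of freedom. The toy sketch below (all spectra, illuminants, and sensor curves are made-up illustrative numbers, not data from the paper) constructs a "metameric" reproduction that matches a target exactly under flat gallery light but drifts under redder, tungsten-like home light:

```python
import numpy as np

# Toy reflectance spectra sampled at five wavelengths (illustrative only).
paint = np.array([0.2000, 0.4000, 0.7000, 0.5000, 0.3000])
# The perturbation below was chosen to lie in the null space of the sensor
# matrix under flat light, so the metamer matches the paint exactly there.
metamer = paint + np.array([-0.0324, 0.0750, -0.0510, 0.0231, 0.0])

gallery = np.array([1.0, 1.0, 1.0, 1.0, 1.0])    # flat "gallery" illuminant
tungsten = np.array([0.3, 0.5, 0.8, 1.1, 1.4])   # redder home lighting

# Toy stand-ins for the eye's three cone sensitivity curves.
sensors = np.array([
    [0.0, 0.1, 0.6, 1.0, 0.7],
    [0.1, 0.6, 1.0, 0.4, 0.1],
    [1.0, 0.5, 0.1, 0.0, 0.0],
])

def observed(reflectance, illuminant):
    """Sensor response: integrate reflectance times light against each curve."""
    return sensors @ (reflectance * illuminant)

for light, name in [(gallery, "gallery"), (tungsten, "tungsten")]:
    diff = np.abs(observed(metamer, light) - observed(paint, light)).sum()
    print(f"{name}: perceived difference = {diff:.4f}")
# gallery: 0.0000 (a perfect match); tungsten: clearly nonzero
```

Reproducing the physical makeup of a color, rather than its appearance under a single light source, is what keeps the match stable across illuminants.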

To test RePaint, the team reproduced a number of oil paintings created by an artist collaborator. The team found that RePaint was more than four times more accurate than state-of-the-art physical models at creating the exact color shades for different artworks.
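
The article doesn't spell out how that accuracy was measured, but the standard way to quantify how close a reproduced shade is to the original is a color-difference metric such as CIE76 ΔE*ab, the Euclidean distance between the two colors in CIELAB space. A minimal sketch with hypothetical measurements:

```python
import numpy as np

def delta_e_76(lab1, lab2):
    """CIE76 color difference: Euclidean distance in CIELAB space."""
    return float(np.linalg.norm(np.asarray(lab1) - np.asarray(lab2)))

# Hypothetical Lab measurements of one patch on the original painting
# and on two reproductions (illustrative numbers only).
original = (52.0, 18.5, -30.2)
repro_a = (54.1, 16.0, -27.9)   # a noticeably off reproduction
repro_b = (52.4, 18.1, -29.8)   # a much closer one

print(delta_e_76(original, repro_a))  # ~4.0
print(delta_e_76(original, repro_b))  # ~0.7
```

A ΔE of roughly 1 to 2 is commonly treated as the threshold of a just-noticeable difference, which gives a sense of scale for comparisons like the one above.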

At this time the reproductions are only about the size of a business card, due to the time-intensive nature of the printing process. In the future, the team expects that more advanced commercial 3-D printers could help make larger paintings more efficiently.

While 2-D printers are most commonly used for reproducing paintings, they have a fixed set of just four inks (cyan, magenta, yellow, and black). The researchers, however, found a better way to capture a fuller spectrum of Degas and Dali. They used a special technique they call “color-contoning,” which involves using a 3-D printer and 10 different transparent inks stacked in very thin layers, much like the wafers and chocolate in a Kit-Kat bar. They combined their method with a decades-old technique called half-toning, where an image is created by lots of little colored dots rather than continuous tones. Combining these, the team says, better captured the nuances of the colors.
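
The article doesn't say which halftoning algorithm RePaint uses; a classic instance of the decades-old technique is Floyd-Steinberg error diffusion, sketched below for a grayscale image quantized to black-and-white dots. (In RePaint, the quantization levels would be the printable contone ink stacks rather than black and white.)

```python
import numpy as np

def floyd_steinberg(image):
    """Error-diffusion halftoning: quantize each pixel to 0 or 1 and push
    the quantization error onto the not-yet-processed neighbors."""
    img = image.astype(float).copy()
    h, w = img.shape
    for y in range(h):
        for x in range(w):
            old = img[y, x]
            new = 1.0 if old >= 0.5 else 0.0
            img[y, x] = new
            err = old - new
            if x + 1 < w:
                img[y, x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    img[y + 1, x - 1] += err * 3 / 16
                img[y + 1, x] += err * 5 / 16
                if x + 1 < w:
                    img[y + 1, x + 1] += err * 1 / 16
    return img.astype(int)

# A smooth gradient becomes a dot pattern whose local density tracks the tone.
gradient = np.tile(np.linspace(0.0, 1.0, 16), (8, 1))
print(floyd_steinberg(gradient))
```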

With a larger color scope to work with, the question of what inks to use for which paintings still remained. Instead of using more laborious physical approaches, the team trained a deep-learning model to predict the optimal stack of different inks. Once the system had a handle on that, they fed in images of paintings and used the model to determine what colors should be used in what particular areas for specific paintings.
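
The article gives no details of the model itself, so the sketch below is purely hypothetical: a small multilayer perceptron that maps a target color to a per-layer choice among the 10 inks. The layer count, architecture, and input encoding are all assumptions; in practice such a model would be trained against measured colors of actually printed ink stacks.

```python
import torch
import torch.nn as nn

NUM_INKS = 10    # the ink library described in the article
NUM_LAYERS = 8   # layers per stack -- hypothetical, not from the paper

class InkStackPredictor(nn.Module):
    """Hypothetical sketch: map a target RGB color to a per-layer
    distribution over the inks; argmax gives a printable stack."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, NUM_LAYERS * NUM_INKS),
        )

    def forward(self, rgb):
        logits = self.net(rgb).view(-1, NUM_LAYERS, NUM_INKS)
        return logits.softmax(dim=-1)   # per-layer ink probabilities

model = InkStackPredictor()
target = torch.tensor([[0.18, 0.31, 0.64]])   # one target shade
stack = model(target).argmax(dim=-1)          # chosen ink index per layer
print(stack)                                  # random until trained
```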

Despite the progress so far, the team says they have a few improvements to make before they can whip up a dazzling duplicate of “Starry Night.” For example, mechanical engineer Mike Foshey said they couldn’t completely reproduce certain colors like cobalt blue due to a limited ink library. In the future they plan to expand this library, as well as create a painting-specific algorithm for selecting inks, he says. They also hope to achieve better detail, accounting for aspects like surface texture and reflection, so that they can produce specific effects such as glossy and matte finishes.

“The value of fine art has rapidly increased in recent years, so there’s an increased tendency for it to be locked up in warehouses away from the public eye,” says Foshey. “We’re building the technology to reverse this trend, and to create inexpensive and accurate reproductions that can be enjoyed by all.”

Kim and Foshey worked on the system alongside lead author Liang Shi; MIT professor Wojciech Matusik; former MIT postdoc Vahid Babaei, now Group Leader at Max Planck Institute of Informatics; Princeton University computer science professor Szymon Rusinkiewicz; and former MIT postdoc Pitchaya Sitthi-Amorn, who is now a lecturer at Chulalongkorn University in Bangkok, Thailand.

This work is supported in part by the National Science Foundation.




MIT News




