Most 3D-printed objects currently come in a variety of exciting monochromatic colors. You've got red, white, blue and, if you're really lucky, a mix of two colors. And while the objects may have a great deal of detail, the surface will be bland.
Now, researchers have found a way to precisely affix complex coloring to objects, making them look somewhat photo-real (and perhaps a bit creepy).
The method, developed by a pair of teams at Zhejiang University and Columbia University, is called computational hydrographic printing. Hydrographic printing itself is not new; mass manufacturers use it to add repeating patterns to cheaply made objects. It involves a vat of water and a thin, pre-printed film that floats on the surface and is sprayed with a softening agent. As the object is lowered into the water, it presses through the film, which stretches to wrap itself around every contour of the object. When the object is removed, the pattern looks as if it were part of the 3D-printing process.
According to the researchers, it's nearly impossible to precisely align the object with the film by hand, which is why manufacturers usually use repeating patterns: no one will notice if anything is out of alignment.
The researchers realized that if they took a 3D scan of the object and simulated dipping it into a virtual bath of water topped with printed film, they could figure out how to position and align far more complex prints on real objects. Putting that simulation to work, however, requires a modified hydrographic printing rig, which consists of a vertical, motorized aluminum arm and gripper that moves up and down at 5 mm per second, along with a Microsoft Kinect 3D image-mapping device.
The Kinect's role is crucial. Once the researchers have placed the object in the gripper, the Kinect measures its exact position and orientation relative to where the film will sit on the water. Only after that information has been collected can the researchers print the film on a standard inkjet printer. The printed image includes the deformations needed to compensate for the stretching the film will undergo during the dip, so that the final melded object looks just right.
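The stretch compensation can be illustrated with a toy model. The sketch below is not the researchers' actual simulation (which models how the softened film deforms in water); it assumes a hemisphere dipped apex-first and an idealized no-slip wrap, where a point on the flat film travels along the surface a distance equal to its distance from the dip axis. Even this crude model shows why the printed image must be pre-distorted rather than naively projected:

```python
import numpy as np

R = 30.0  # hemisphere radius in mm (illustrative value)

def film_radius_for_surface_point(theta):
    """Return the radial film coordinate (distance from the dip axis
    on the flat sheet) that wraps onto a surface point at polar angle
    theta from the apex, under the idealized no-slip wrap assumption:
    the film covers the same arc length along the surface as its
    radial distance on the flat sheet."""
    return R * theta  # arc length from the apex to the point

# A feature meant for the hemisphere's equator (theta = pi/2) must be
# printed at film radius R * pi/2 (about 47.1 mm here), not at the
# radius a straight-down orthographic projection would suggest (30 mm).
ortho = R * np.sin(np.pi / 2)                        # naive projection: 30.0 mm
wrapped = film_radius_for_surface_point(np.pi / 2)   # wrap model: ~47.1 mm
print(ortho, wrapped)
```

The gap between the two numbers grows toward the equator, which is why repeating patterns hide misalignment but a photo-real texture demands the full simulation.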
In a video announcing the breakthrough, the Zhejiang University and Columbia University teams show off a printed tiger mask, a zebra, a globe applied in a single dip and a 3D-printed cat. For the cat, the team performed three dips to wrap the initially monochrome kitten in a photo-realistic skin. The result has that uncanny-valley problem of looking almost real, but not quite.
There are some limitations. The researchers note that they cannot easily cover objects with highly concave surfaces, or any surface hidden from view or from the scanner. Color blending and precision are also a challenge: where the film stretches significantly, colors can come out lighter than intended.
Even with those limitations, this relatively simple method for adding complex patterns and photo-real imagery to monochromatic 3D-printed objects shows promise. One can imagine scanning and printing a lifelike 3D model of a person's head, then using high-resolution photographs to print the face and the sides, top and back of the head. A few computationally guided dips would then create a truly scary bust.