To demonstrate how machine learning works, one Dutch radio station has trained a neural network on hundreds of drawings and images of one of its reporters. When you draw a face, the program translates your sketch into what’s supposed to be a photorealistic image, based on its database of drawings of reporter Lara Rense. But the results are generally horrifying, mixing fleshy shapes with dark hair to create monstrous images that resemble a distorted human face.
This approach could serve many purposes. One is creating sophisticated artworks with AI from nothing more than a simple human-drawn sketch.
Using a pen or paintbrush, this small WiFi-connected robot arm can recreate on paper whatever you draw on a touchscreen device.
Controlled by an accompanying app, the robot mimics the motion of your hand, drawing each line in exactly the same order and faithfully reproducing the drawing's style and character.
The robot can hold a pen or paintbrush of the user's choice, while a metal plate lets it sit securely on a piece of paper, a sketchbook, a diary, or a notebook. Thanks to its magnetic base, it can also be hung on a wall or mounted on a fridge.
But what if you connected an AI to it? Would that augment art, or disrupt it?
Cannes Lions is better known among creative types than technology people, although Google, Facebook, and Twitter have been the most prominent brands on the beach for some years. This year, big consultancies like IBM and Accenture claimed the spotlight, as technology platforms, and more recently AI and VR/AR, keep raising their profile in the creative sphere.
Among VR wins, Google’s Tilt Brush won two golds, and is sure to be a creativity powerhouse for the virtual world. Audi’s Enter the Sandbox took playful creation to car buyers, with a physical sandbox and a VR cockpit enabling people to drive inside their own creations.
Our primary motivation in studying GANs this semester was to apply a GAN-derived model to the generation of novel art. Much of the deep-learning work on art generation has focused on style, and specifically on the style of particular art pieces. In papers such as A Neural Algorithm of Artistic Style, deep networks learn to 1) separate the style of a piece of art from its content and 2) apply that style to other content representations. Building on the GAN model, we hoped to train a deep net capable not only of learning a distribution over the style and content components of many different pieces of art, but also of combining those components in novel ways to create new pieces. This task of novel content generation is much harder than applying the style of one particular piece of art to the content of another.
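The style/content split described in A Neural Algorithm of Artistic Style rests on a simple idea: content is matched directly in a CNN's feature space, while style is matched through Gram matrices of those features, which capture channel correlations and discard spatial layout. The sketch below illustrates those two losses in NumPy; the random arrays standing in for CNN feature maps, the layer shapes, and the weighting are illustrative assumptions, not the project's actual model (a real implementation would pull features from a pretrained network such as VGG).

```python
import numpy as np

# Illustrative sketch of the content and style losses from
# "A Neural Algorithm of Artistic Style". Random arrays stand in
# for feature maps that a pretrained CNN would normally produce.

rng = np.random.default_rng(0)

def gram_matrix(features):
    """Style representation: correlations between feature channels.
    `features` has shape (channels, height, width)."""
    c, h, w = features.shape
    f = features.reshape(c, h * w)
    return f @ f.T / (h * w)

def content_loss(gen_feat, content_feat):
    # Content is compared directly in feature space,
    # so spatial arrangement matters.
    return 0.5 * np.sum((gen_feat - content_feat) ** 2)

def style_loss(gen_feat, style_feat):
    # Style is compared via Gram matrices, so only channel
    # correlations matter, not where things are in the image.
    g_gen = gram_matrix(gen_feat)
    g_sty = gram_matrix(style_feat)
    return np.sum((g_gen - g_sty) ** 2) / (4 * gen_feat.shape[0] ** 2)

# Stand-in feature maps: 8 channels on a 16x16 grid (hypothetical sizes).
gen_feat = rng.standard_normal((8, 16, 16))
content_feat = rng.standard_normal((8, 16, 16))
style_feat = rng.standard_normal((8, 16, 16))

# Style transfer minimizes a weighted sum of the two losses
# over the generated image; the 1e3 weight here is arbitrary.
total = content_loss(gen_feat, content_feat) + 1e3 * style_loss(gen_feat, style_feat)
```

Because the Gram matrix throws away spatial information, the style loss can be satisfied by many different arrangements of content, which is exactly why style from one image can be draped over the content of another.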