Author: Alessandro

Jeff Donaldson creates real textiles to represent digital glitch art

From: Data Weave by Jeff Donaldson — Kickstarter

Data Weave is a marriage of art forms, in the sense that the Jacquard loom’s use of punch cards to weave intricate motifs inspired the use of punch cards for storing and executing programs in early computing.

Data Weave extends traditions of embedding symbols in textiles to communicate information by applying my practice of color coding binaries to weaving. This process of encoding data with color produces intricately detailed, cascading motifs that are meant to be woven pixel to stitch.

Each pixel represents bits of data, showing how weaving can also be understood as pixel art. Furthermore, Data Weave simultaneously illustrates an alternative means of data preservation and a materialization of digital ephemera by tangibly elucidating data structures with color.
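
The Kickstarter doesn’t publish the exact encoding, but the idea of color coding a binary can be sketched: read a file byte by byte and map each value to a color in a fixed palette, producing a pixel grid that a weaver could follow stitch by stitch. The 16-color palette, nibble-per-pixel mapping, and row width below are illustrative assumptions, not Donaldson’s actual scheme.

```python
# Rough sketch of color-coding a binary file as a pixel grid, one pixel per stitch.
# The 16-color palette, nibble-per-pixel mapping, and row width are illustrative
# assumptions, not Donaldson's actual encoding.
from PIL import Image

# 16 arbitrary yarn-like colors, one per 4-bit value (0x0 .. 0xF).
PALETTE = [
    (20, 20, 20), (240, 240, 240), (200, 40, 40), (40, 90, 180),
    (40, 160, 90), (230, 190, 50), (140, 70, 180), (240, 130, 40),
    (90, 60, 40), (160, 200, 220), (250, 180, 190), (100, 100, 100),
    (0, 120, 120), (180, 30, 90), (220, 220, 150), (60, 60, 120),
]
ROW_WIDTH = 64  # pixels (stitches) per woven row


def bytes_to_weave(data: bytes, out_path: str = "weave.png") -> Image.Image:
    # Each byte becomes two pixels: one for the high nibble, one for the low nibble.
    nibbles = []
    for byte in data:
        nibbles.append(byte >> 4)
        nibbles.append(byte & 0x0F)
    # Pad the last row so the image is rectangular.
    while len(nibbles) % ROW_WIDTH:
        nibbles.append(0)
    height = len(nibbles) // ROW_WIDTH
    img = Image.new("RGB", (ROW_WIDTH, height))
    img.putdata([PALETTE[n] for n in nibbles])
    img.save(out_path)
    return img


if __name__ == "__main__":
    # Any file works; a small program binary produces the cascading motifs
    # described above.
    bytes_to_weave(open(__file__, "rb").read())
```

Because the mapping is a fixed lookup table, it is also reversible: reading the colors back off the textile recovers the original bytes, which is what makes the woven piece a form of data preservation as well as pixel art.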

Alexander Reben trains AI to learn the unique characteristics of human voices and turn them into artworks

From: deeply artificial trees | Alexander Reben

This artwork represents what it would be like for an AI to watch Bob Ross on LSD (once someone invents digital drugs). It shows some of the unreasonable effectiveness and strange inner workings of deep learning systems. The unique characteristics of the human voice are learned and generated, as are the hallucinations of a system trying to find images that are not there.

Original video removed. Watch it here: https://vimeo.com/212669648

Aparna Rao augments ordinary objects with electronics, turning them into art

From: Aparna Rao: High-tech art (with a sense of humor) | TED Talk | TED.com

Artist and TED Fellow Aparna Rao re-imagines the familiar in surprising, often humorous ways. With her collaborator Soren Pors, Rao creates high-tech art installations — a typewriter that sends emails, a camera that tracks you through the room only to make you invisible on screen — that put a playful spin on ordinary objects and interactions.

Quite old, but I couldn’t leave it out for the launch of Arts High Tech.

Neural Network Learns How To Apply Photos To Any Drawing

From: Don’t Blame Us If You Waste Your Day With This Neural Network Drawing Tool

To demonstrate how machine learning works, one Dutch radio station has trained a neural network on hundreds of drawings and images of one of its reporters. When you draw a face, the program translates your sketch into what’s supposed to be a photorealistic image, based on its database of drawings of reporter Lara Rense. But the results are generally horrifying, mixing fleshy shapes with dark hair to create monstrous images that resemble a distorted human face.

This approach could be used for many different things. One of them could be creating highly sophisticated artworks with AI from nothing more than a simple sketch provided by a human.
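
The article doesn’t say which model the radio station used, but sketch-to-photo tools of this kind are usually paired image-to-image translation networks (pix2pix is the best-known example). Below is a minimal sketch of the idea, assuming a set of aligned (sketch, photo) pairs and leaving out the adversarial loss and U-Net generator that real systems add on top.

```python
# Minimal sketch of paired image-to-image translation (the pix2pix idea):
# a conv encoder-decoder learns to map a line drawing to a photo, trained with an
# L1 reconstruction loss on (sketch, photo) pairs. Dataset and sizes are illustrative.
import torch
import torch.nn as nn


class Sketch2Photo(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, 4, stride=2, padding=1), nn.ReLU(),   # 1-channel sketch in
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Tanh(),  # RGB photo out
        )

    def forward(self, sketch):
        return self.decoder(self.encoder(sketch))


def train_step(model, optimizer, sketches, photos):
    # sketches: (B, 1, H, W) in [0, 1]; photos: (B, 3, H, W) in [-1, 1]
    optimizer.zero_grad()
    loss = nn.functional.l1_loss(model(sketches), photos)
    loss.backward()
    optimizer.step()
    return loss.item()


if __name__ == "__main__":
    model = Sketch2Photo()
    optimizer = torch.optim.Adam(model.parameters(), lr=2e-4)
    # Dummy batch standing in for real (sketch, photo) pairs of the reporter.
    sketches = torch.rand(4, 1, 128, 128)
    photos = torch.rand(4, 3, 128, 128) * 2 - 1
    print(train_step(model, optimizer, sketches, photos))
```

Trained only with an L1 loss, the outputs come out blurry and smeared, which is consistent with the “monstrous”, melted faces the article describes; the adversarial term is what pushes results toward sharper, more photorealistic images.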

Ian Cheng exhibits a video game that plays itself at MoMA and on Twitch

From: The Museum of Modern Art is currently streaming a game that plays itself on Twitch – The Verge

The Museum of Modern Art in New York City is hosting the first US-based solo exhibit from artist Ian Cheng: a live simulation known as the “Emissary trilogy.” In addition to the physical installation, the museum is also streaming “unique versions that exist online only” via Twitch.

Twitch is best known as a platform widely used by the gaming community, and the partnership with MoMA reflects that. The Emissary trilogy creates its simulation using a video game engine; Cheng describes it as “a video game that plays itself.” The exhibit is an exploration of human consciousness and evolution.

Google offers contextual information and high-res images for artworks appearing in Search and Street View

From: Searching for art just got better. Where will you start?

Now when you search for an artist like Gustav Klimt, you’ll see an interactive Knowledge Panel that will highlight ways you can explore on a deeper level, like seeing a collection of the artist’s works or even scrolling through the museums where you can view the paintings on the wall. And for some pieces, you can click through to see picture-perfect high-resolution imagery right from Google Arts & Culture.

and

Now as you walk through the rooms of the museums on Google Maps you’ll see clear and useful annotations on the wall next to each piece. Clicking on these annotations will bring you to a new page with more information provided by hundreds of the world’s renowned museums. You’ll also be able to zoom into high-resolution imagery—getting you closer to these iconic works than you ever thought possible.

To create this feature, we put our visual recognition software to work. Similar to how machine learning technology in Google Photos allows you to search for things in your gallery, this software scanned the walls of participating museums all over the world, identifying and categorizing more than 15,000 works.
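
Google hasn’t detailed the pipeline behind that scanning step, but the core operation (matching a photo of a canvas on a museum wall against a catalogue of known works) can be approximated with off-the-shelf image embeddings and nearest-neighbour search. A rough sketch of that general technique, with hypothetical file names standing in for a real catalogue:

```python
# Rough approximation of "scan a wall photo, identify the artwork": embed images
# with a pretrained CNN and pick the nearest catalogue entry by cosine similarity.
# This illustrates the general technique, not Google's actual pipeline.
import torch
from PIL import Image
from torchvision import models, transforms

weights = models.ResNet50_Weights.DEFAULT
backbone = models.resnet50(weights=weights)
backbone.fc = torch.nn.Identity()          # keep the 2048-d feature vector
backbone.eval()
preprocess = weights.transforms()


@torch.no_grad()
def embed(path: str) -> torch.Tensor:
    x = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    return torch.nn.functional.normalize(backbone(x), dim=1).squeeze(0)


def identify(query_path: str, catalogue: dict) -> str:
    # catalogue: title -> precomputed embedding of a reference photo of the work
    q = embed(query_path)
    return max(catalogue, key=lambda title: float(q @ catalogue[title]))


if __name__ == "__main__":
    # Hypothetical file names; in practice the catalogue would hold thousands of works.
    catalogue = {
        "The Kiss (Klimt)": embed("klimt_the_kiss.jpg"),
        "Starry Night (van Gogh)": embed("van_gogh_starry_night.jpg"),
    }
    print(identify("museum_wall_photo.jpg", catalogue))
```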

What Google is doing for the art world is unprecedented, astonishing.

Google partners with Rhizome to preserve digital art

From: Preserving digital art: How will it survive?

…while the cave paintings in Lascaux are an incredible 20,000 years old, it isn’t clear whether digitized images of that art—or any digital art created today—will last 20 years, let alone 20,000.

That’s because digital art requires readers and, often, software in order to be viewed, heard or experienced. And as software, browsers, and files either update versions or become obsolete, both digital art—art produced by means of computers and software—and digitized art—reproduced or copied art, rendered in digital form from original physical media—are at risk of disappearing.

and

It’s with this in mind that Google Arts & Culture has partnered with Rhizome to help in the preservation of digital art. Rhizome grew out of the blossoming web-artist community of the mid-1990s, and is now a thriving nonprofit in New York City. They’ve developed unique tools which preserve digital artworks and allow them to be viewed long after their complex software foundations have become obsolete.

Mat Collishaw recreates an 1839 photography exhibition with VR

From: Mat Collishaw: Thresholds | Somerset House

Using the latest in VR technology, Thresholds restaged one of the earliest exhibitions of photography in 1839, when British scientist William Henry Fox Talbot first presented his photographic prints to the public at King Edward’s School, Birmingham.

The experience was a fully immersive portal to the past; people were able to walk freely throughout a digitally reconstructed room and touch the bespoke vitrines, fixtures and mouldings; even the heat from a coal fire was recreated. The soundscape for Thresholds included the sound of demonstrations by the Chartist protesters who rioted on the streets of Birmingham in 1839 and who could be glimpsed through the digital windows.

Google AI is learning that most humans represent ideas in the same way

From: Google’s AI Proves That Your Drawings Look Like Everyone Else’s

In November 2016, Google released a cute little game called Quick, Draw! on its AI Experiments website, where it showcases fun or unusual AI experiments for consumers. Quick, Draw! challenged you to draw–in 20 seconds or less–items ranging from tennis rackets and wine glasses to yoga and the Mona Lisa, all for the purpose of advancing machine learning research.

Since then, 15 million people have generated 50 million drawings–what Google is calling “the world’s largest doodling data set”–that are now available for researchers, artists, and designers to use in training algorithms to do things like distinguish a scribble of a boomerang from a doodle of an elbow.

…certain objects seem to share certain unalienable details.

An image of an old-fashioned, antennaed TV is far more visually dynamic and easier to understand than a drawing of the flat, nondescript boxes that serve as televisions in many households today.

Maybe machine learning algorithms will only learn to recognize drawings of 1950s televisions as a result.
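
The raw doodles really are public: Google releases the data set in several formats, including per-category 28×28 bitmaps, so the boomerang-versus-elbow task mentioned in the article is easy to try yourself. A minimal sketch, assuming the boomerang and elbow bitmap files have already been downloaded locally:

```python
# Minimal sketch of the "boomerang vs. elbow" task on the Quick, Draw! data set.
# Assumes the 28x28 numpy-bitmap files (e.g. boomerang.npy, elbow.npy) have been
# downloaded from the public Quick, Draw! data release into the working directory.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split


def load_category(path, label, limit=5000):
    drawings = np.load(path)[:limit] / 255.0        # (N, 784) flattened grayscale doodles
    return drawings, np.full(len(drawings), label)


X_boom, y_boom = load_category("boomerang.npy", 0)
X_elbow, y_elbow = load_category("elbow.npy", 1)
X = np.concatenate([X_boom, X_elbow])
y = np.concatenate([y_boom, y_elbow])

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))
```

Even a simple linear classifier does reasonably well here, precisely because, as the article notes, most people draw the same object with the same few unmistakable details.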

If AIs learn from us, and the vast majority of us represent ideas in the same way, what will happen to creativity when AI starts drawing?