Decode Exhibition Points Way to Data-Driven Art


The cryptic works on display at London’s Decode: Digital Design Sensations exhibition manipulate raw data as a kind of virtual pigment, finding form and fun amid the sensory overload that threatens to overwhelm the 21st-century hive mind.

Several exhibition pieces showcased at the Victoria and Albert Museum depend on human presence to produce their full effect. A motion-detecting eyeball, for example, blinks each time a visitor blinks. In another piece, a video screen enables visitors to “paint” smears of color through the power of their gyrations.

Other installations, on display through April 11, strip-mine data streams from Twitter, translate a day’s worth of flight routes into animated abstract art and hurl text-message fragments onto dozens of tiny display screens. Building on video art experiments that took root in the ’80s at Ars Electronica, Siggraph and other events, Decode contributors experiment with programming languages to toy with questions about man, machine and the data that binds.

“Decode is about demystifying the black art or magic of digital while showing that this work can be poetic, emotional and poignant,” show co-curator Shane R.J. Walter told Wired.com in an e-mail interview. Walter, creative director for the OneDotZero digital arts site, said the exhibition pieces “highlight issues in our everyday lives such as the overabundance of information and how we deal with this through data visualization.” The Decode artists, he wrote, “use code as a material to work with just as sculptors work with clay.”

In addition to the curated works, the exhibition hosts the Recode project, which invites programmers to repurpose custom software featured in U.K. designer Karsten Schmidt’s animated video (embedded above) as a foundation for their own variations. Dave Price, for example, reworked the original code to make Eye Like Recode (embedded below). Some of the user-generated videos will be presented as public art in the London Underground subway system.

Wired.com conducted e-mail interviews with 10 artists to gain insight into the uses of software as a creative visual medium. Browse this gallery for a sampling of their thoughts and images from Decode: Digital Design Sensations.


Social Collider

What: Social Collider tracks messages on Twitter by user names or topics, then visualizes connections much in the same way as a particle collider draws pictures of subatomic matter. Unpopular posts link to the next item in the stream; hot posts spin off and horizontally link to related users or topics.

How: Originally a pure JavaScript web application, Social Collider was rewritten in Java for the Decode version so it can interact with Nintendo Wii remotes. The designers used custom code built with the open source tool Processing.
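
To make the linking logic concrete, here is a minimal, illustrative Java sketch, not the artists’ Processing/Java code; the class names, popularity threshold and sample posts are invented. Quiet posts simply chain downward through the stream, while popular ones also branch horizontally toward the topic they share:

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative only: lay out posts the way the piece is described above --
// each post links to the next item in the stream (a vertical chain), and
// "hot" posts additionally spin off horizontally toward their topic.
public class ColliderLayoutSketch {
    static class Post {
        final String user, topic;
        final int retweets;
        Post(String user, String topic, int retweets) {
            this.user = user; this.topic = topic; this.retweets = retweets;
        }
    }

    public static void main(String[] args) {
        List<Post> stream = new ArrayList<>();
        stream.add(new Post("alice", "decode", 0));
        stream.add(new Post("bob", "decode", 42));   // a popular post
        stream.add(new Post("carol", "v&a", 1));

        int hotThreshold = 10;                       // arbitrary cutoff for "hot"
        for (int y = 0; y < stream.size(); y++) {    // y = position in the stream
            Post p = stream.get(y);
            System.out.printf("(%d, 0) @%s -> next item%n", y, p.user);
            if (p.retweets > hotThreshold) {
                System.out.printf("(%d, 1) @%s -> #%s%n", y, p.user, p.topic);
            }
        }
    }
}
```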

Why: “The Collider acts as a metaphorical instrument which can be used to make visible how memes get created and how they propagate,” says co-creator Karsten Schmidt, aka Toxi. “Ideally, it might catch the zeitgeist at work.”

Who: Schmidt directs London design studio Post Spectacular; Webby Award-winning artist/designer Sascha Pohflepp contributes to the blog We-Make-Money-Not-Art.com and creates content for Google, Nokia Design and T-Mobile’s Creation.


Listening Post

What: Listening Post collects text in real time from thousands of public online chats, then parses, sorts and analyzes this stream to compose fragments of text into organized “scenes” that play out across 231 vacuum-fluorescent displays.

How: The networking, language-processing and analysis software is written in Perl. Audio, generated in Max/MSP, is played through an old Kurzweil K2500 sampler and eight speakers, with a Yamaha DME-24 mixing and processing the sound. Each vacuum-fluorescent display has a custom control circuit board on the back running off a PIC chip, as well as automotive relays that provide the mechanical “click” and fluttering sounds that punctuate the piece.
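
The production system is Perl and Max/MSP; purely as an illustration of the parse-and-place step, the hedged Java sketch below culls chat fragments matching one phrase pattern and deals them across a grid of displays. The pattern, the sample messages and the 11 × 21 grid shape are assumptions; only the 231-screen count comes from the description above.

```java
import java.util.List;
import java.util.regex.Pattern;

// Sketch of one "scene": keep only fragments that match a phrase pattern
// and assign each to the next free display in an 11 x 21 grid (231 screens).
public class ListeningPostSceneSketch {
    static final Pattern PHRASE = Pattern.compile("(?i)^i am .*");  // assumed pattern

    public static void main(String[] args) {
        List<String> chat = List.of(
                "I am so tired of this weather",
                "anyone seen the match?",
                "i am learning to code");

        int rows = 11, cols = 21, slot = 0;
        for (String line : chat) {
            if (!PHRASE.matcher(line).matches()) continue;   // discard non-matching text
            int r = slot / cols, c = slot % cols;            // next free display position
            System.out.printf("display[%d][%d] <- \"%s\"%n", r, c, line);
            slot = (slot + 1) % (rows * cols);
        }
    }
}
```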

Why: “We want to know what the ‘sound’ of all the online chat might be like if it were possible to hear it all,” says co-creator Ben Rubin. “How could we synthesize that into an artwork that was both poetic (at least occasionally) and true to its data source?” Rubin says he likes to use VFDs “because they are such an old and reliable display technology — they are actually vacuum tubes, closely related to the Apollo-era ‘nixie tube’ displays that predated LEDs for electronic info readouts. They are still widely used in industrial applications (scales, gas pumps, vending machines and subway turnstiles).”

Who: New York-based media artist Rubin founded EAR Studio after graduating from the Massachusetts Institute of Technology Media Lab; Mark Hansen teaches statistics at the University of California at Los Angeles and studies microsensing technologies for the Center for Embedded Networked Sensing.


Bit Code

What: Bit Code uses chains to array a finite number of bits into a variety of combinations. The bits appear as black-and-white elements on the individual segments of each chain, and every chain carries the same bit pattern, reminiscent of Morse code. When the chains move in parallel, words appear for a brief period, seemingly out of nowhere, then disappear again. Letters and words emerge from the apparent chaos because the finite system of chains can express a nearly unlimited number of words and meanings. The machine embodies a system that is simultaneously static and dynamic.

How: Cable-transport chains, coded with black-and-white patterns, are driven over cog wheels by stepper motors so that their positions can be shifted.
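
A small, purely illustrative Java sketch of the underlying principle: every chain carries the same repeating bit pattern, and rotating each chain to a different offset determines which bits line up in the visible window. The pattern, window width and offsets below are made up.

```java
// Each chain repeats the same bit pattern; a stepper motor sets its offset,
// and the offset decides which bits show in the fixed viewing window.
public class BitCodeSketch {
    static final String PATTERN = "1011001110001011";   // shared, repeating pattern

    // Bits visible in a window of width w when the chain is rotated by `offset`.
    static String window(int offset, int w) {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < w; i++)
            sb.append(PATTERN.charAt((offset + i) % PATTERN.length()));
        return sb.toString();
    }

    public static void main(String[] args) {
        int[] offsets = {0, 5, 9, 13};                   // one motor position per chain
        for (int row = 0; row < offsets.length; row++)
            System.out.println("chain " + row + ": " + window(offsets[row], 6));
    }
}
```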

Why: The perceived information offers a brief opportunity for pause, a moment of serenity and clarity, before the incessant flow of constellations, motions and changes begins anew.

Who: German artist Julius Popp won the 2003 Robot Choice Award.


Flight Patterns

What: This animation video is based on 24 hours of airplane tracking data provided by the Federal Aviation Administration.

How: Every three minutes, the locations of planes along their flight routes were transmitted to creator Aaron Koblin, who interpolated those values to create an animation of where every plane was flying.
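
Since positions arrive only every few minutes, intermediate animation frames have to be estimated between samples. The Java fragment below is a hedged sketch of plain linear interpolation between two reported positions, with invented coordinates; it is not Koblin’s actual code.

```java
// Estimate in-between frames by linearly interpolating latitude/longitude
// between two consecutive position reports.
public class FlightInterpolationSketch {
    public static void main(String[] args) {
        double[] a = {40.64, -73.78};    // reported position at t = 0 min (lat, lon)
        double[] b = {40.90, -74.50};    // next report at t = 3 min

        int frames = 6;                  // animation frames between the two reports
        for (int f = 0; f <= frames; f++) {
            double t = (double) f / frames;              // 0..1 along the segment
            double lat = a[0] + t * (b[0] - a[0]);
            double lon = a[1] + t * (b[1] - a[1]);
            System.out.printf("frame %d: %.3f, %.3f%n", f, lat, lon);
        }
    }
}
```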

Why: “I was interested in seeing the way Flight Patterns kind of slices up the country waking up, going to sleep and moving around,” Koblin says. “You can see the ebb and flow as planes’ routes erupt on the East Coast in the morning, flow over to the West Coast and eventually to Hawaii. When you zoom in, the animation becomes extremely abstract, with these weird hubs spewing out all these particles.”

Who: California artist Koblin earned an MFA from UCLA’s Design: Media Arts program and has shown work at Ars Electronica, Siggraph, OFFF, the Japan Media Arts Festival and New York’s Museum of Modern Art.


Sensity

What: Data is visualized as a dynamic, real-time artwork in the form of a globe.

How: Twenty custom-made sensors embedded throughout London measure light, noise, sound, humidity and temperature data, which is then translated into a real-time visualization of the space.

Why: “Literally painting with data, these works open up a discourse about networks and surveillance technologies so that public-domain space is opened out where anyone can view all the data in these networks,” says creator Stanza.

Who: Winner of a NESTA Dreamtime Award, British artist Stanza creates participatory digital works that have been shown at Tate Britain and the Venice Biennale.


Dandelion

What: “You use a hair dryer to blow the seeds off the dandelion, which is projected on a large canvas,” says co-creator Lars Jenssen, describing the piece. “After the seeds are blown off, they will reform and you can blow them off again. We also have a speaker in the hair dryer that plays an adaptive hair dryer sound and speakers in front of the screen which play a soundscape that changes depending on where you blow.”

How: An infrared camera tracks a beam of light coming from a source in the tip of the hair dryer. The toolchain includes Processing for the camera tracking, Unity for the visual expression and Max/MSP for the sound, with Open Sound Control carrying the tracked values from Processing to Unity and Max/MSP. 3D Studio Max is used for the 3-D modeling.
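
As a rough illustration of the Open Sound Control hand-off, the sketch below hand-packs a tracked blow position into a tiny OSC message and sends it over UDP. The address pattern "/dandelion/blow", the port 9000 and the coordinate values are invented, and the installation itself uses Processing’s OSC tooling rather than this hand-rolled Java.

```java
import java.io.ByteArrayOutputStream;
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;
import java.nio.ByteBuffer;

// Minimal OSC sender: address pattern + type tags + two float32 arguments,
// each string field padded to a multiple of 4 bytes as the OSC spec requires.
public class OscSendSketch {
    static byte[] oscString(String s) {
        byte[] raw = s.getBytes();
        byte[] out = new byte[(raw.length / 4 + 1) * 4];   // pad with NULs
        System.arraycopy(raw, 0, out, 0, raw.length);
        return out;
    }

    public static void main(String[] args) throws Exception {
        float x = 0.42f, y = 0.77f;                        // tracked position (made up)

        ByteArrayOutputStream msg = new ByteArrayOutputStream();
        msg.write(oscString("/dandelion/blow"));           // hypothetical address
        msg.write(oscString(",ff"));                       // type tags: two floats
        msg.write(ByteBuffer.allocate(8).putFloat(x).putFloat(y).array());

        byte[] data = msg.toByteArray();
        try (DatagramSocket socket = new DatagramSocket()) {
            socket.send(new DatagramPacket(data, data.length,
                    InetAddress.getByName("127.0.0.1"), 9000));  // assumed listener port
        }
    }
}
```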

Why: “Our aim was to get away from the often fast-paced nature of computer graphics to create a calm and almost meditative experience,” says Jenssen. “The intent of the piece was to make a playful and magical experience with a simple and satisfying interaction.”

Who: Danish studio Yoke combines interactive design, programming and audio engineering to create responsive installations. Webby Award-winning Norwegian-British partnership Sennep designs websites, database-driven applications and interactive installations.


Light Rain

What: The Light Rain installation looks like a normal video from a distance, but as visitors approach the screen, their shadows are cast into the projected work.

How: Software built with the open source toolkit openFrameworks senses the movements of the audience through two USB webcams and a Mac Mini, with the music and sound effects produced in Max/MSP.

Why: “We often hate raining, but rain and a drop contain something beautiful and delicate, so we intend for the audience to feel it,” says co-creator Tomohiro Nagasaki.

Who: Based in Tokyo, Sendai and Florence, WOW lab creates films, installations and experimental motion graphics shown at the Miyagi Museum of Art in Sendai, Japan, and Korea’s Seoul Design Festival.


Opto-Isolator II

What: The sculpture presents a solitary mechatronic blinking eye, at human scale, that responds to the gaze of visitors with a variety of psychosocial eye-contact behaviors. It looks the viewer directly in the eye, then looks away if it is stared at for too long. Also, the eyeball blinks precisely one second after its visitor blinks.

How: The piece is fabricated from motors, cameras and a computer; the Opto-Isolator’s mechatronic design and fabrication are by Greg Baltus of Standard Robot Company in Pittsburgh.
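
The control software is not published; purely as a sketch of the two behaviors described above, the hypothetical Java fragment below blinks one second after a detected visitor blink and averts its gaze after a sustained stare. The detection inputs and the five-second stare threshold are assumptions.

```java
// Toy behavior loop: mirror a visitor's blink one second later, and look
// away after prolonged eye contact. Thresholds and inputs are stand-ins.
public class EyeBehaviorSketch {
    long visitorBlinkAt = -1;   // ms timestamp of the visitor's last blink
    long stareSince = -1;       // ms timestamp when continuous eye contact began

    void update(long now, boolean visitorBlinked, boolean visitorLookingAtMe) {
        if (visitorBlinked) visitorBlinkAt = now;
        if (visitorBlinkAt >= 0 && now - visitorBlinkAt >= 1000) {
            System.out.println("blink");                  // exactly one second later
            visitorBlinkAt = -1;
        }
        if (visitorLookingAtMe) {
            if (stareSince < 0) stareSince = now;
            if (now - stareSince > 5000) {                // stared at for too long
                System.out.println("look away");
                stareSince = -1;
            }
        } else {
            stareSince = -1;
        }
    }

    public static void main(String[] args) {
        EyeBehaviorSketch eye = new EyeBehaviorSketch();
        eye.update(0, true, true);      // visitor blinks while making eye contact
        eye.update(1000, false, true);  // one second later, the sculpture blinks
    }
}
```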

Why: Inventor Golan Levin says he wanted to address the “spectatorship” question: “What if artworks could know how we were looking at them? And, given this knowledge, how might they respond to us?”

Who: Massachusetts Institute of Technology Media Lab graduate Levin directs the Studio for Creative Inquiry at Pittsburgh’s Carnegie Mellon University and has exhibited at the Whitney Biennial, the New Museum of Contemporary Art in New York and Ars Electronica Center.


Study for a Mirror

What: The mirror is an “immediate and ephemeral contemporary light painting,” says co-creator Hannes Koch of rAndom International. “Looking into the mirror, the subject does not age but vanishes within seconds, leaving us desiring just one more glance, one more glimpse of something that we inherently recognize.”

How: The mirror relies on tracking software devised by London designer Chris O’Shea.

Why: “We want to reinterpret the ‘cold’ nature of digital-based work by adding behavioral qualities to usually inanimate objects,” Koch says. “This provides the viewer with the opportunity to have a more hands-on experience with technology. The ‘painting’ details the true likeness of the inner circus of dreams and torments with this static moment that is captured before slowly fading back to nothingness.”

Who: Stuart Wood and rAndom International co-founders Koch and Florian Ortkrass graduated from the Royal College of Art in London and have shown installations that blend science, art and design at Tate Modern and Ars Electronica.


Body Paint

What: This interactive installation allows users to paint on a virtual canvas with their bodies through the use of sensors that interpret gestures and dance into ever-changing compositions.

How: Custom software analyzes a live feed from infrared cameras in real time, converting shapes and motions into colors, drips, spirals and brushstrokes. The software was written in C++ using the open source toolkit openFrameworks. Different aspects of the motion — size, speed, acceleration, curvature, distance — all affect the outcome.
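
The installation is written in C++ with openFrameworks; the short Java sketch below only illustrates the kind of per-frame mapping described above, turning measured motion properties into brush parameters. The specific mappings, ranges and values are invented.

```java
// Map per-frame motion measurements to brush parameters (illustrative only).
public class BodyPaintMappingSketch {
    static double clamp(double v, double lo, double hi) {
        return Math.max(lo, Math.min(hi, v));
    }

    public static void main(String[] args) {
        // made-up measurements for one tracked silhouette in one frame
        double area = 0.12;       // fraction of the frame covered by the body
        double speed = 340.0;     // centroid speed, pixels per second
        double curvature = 0.8;   // how sharply the motion path is turning

        double brushSize = clamp(area * 400, 4, 120);   // larger body -> larger brush
        double hue = clamp(speed / 1000.0, 0, 1) * 360; // faster motion shifts the hue
        double drip = clamp(curvature, 0, 1);           // sharper turns -> more drips

        System.out.printf("brush %.0f px, hue %.0f deg, drip %.2f%n",
                brushSize, hue, drip);
    }
}
```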

Why: “Using the body to perform images and sound is what really drives me to create an interactive experience that is similar to the joy of stomping in a puddle, or deciding to pour a bowl of spaghetti hoops over their head, or spinning round in circles for 10 minutes until you fall over,” says creator Mehmet Akten. “These are simple pleasures that quickly lose their charm as we get older. I’m fascinated by creating fictional environments that re-create these feelings.”

Who: Former videogame designer Akten founded London-based MSA Visuals in 2003 to develop custom software and virtual environments.
