Art & Science
The basis of the "European Digital Art and Science Network" is a broad, manifold network consisting of scientific mentoring institutions (ESA, CERN, ESO and Fraunhofer MEVIS), the Ars Electronica Futurelab and seven European cultural partners (Center for the Promotion of Science, RS; DIG Gallery, SK; Zaragoza City of Knowledge Foundation, ES; Kapelica Gallery / Kersnikova, SI; GV Art, UK; Laboral, ES; Science Gallery, IE). The EU-funded project ran from 2014 to 2017. The Online Archive of Ars Electronica provides an overview of the network's individual activities and also offers information about the network itself, the artists in residence, the involved project partners and the jury.
SEEING Exhibition at Science Gallery Dublin
SEEING—What are you looking at? A free exhibition questioning how eyes, brains, and robots see. Science Gallery at Trinity College Dublin, 24.06.—25.09.2016 • Info: Exhibition in the context of the European Art & Science Network
As the model in a life drawing class, the human is personality-less, an object of study. The human sitter is passive, with the robots taking on what is perceived as the artistic role. Visitors to Science Gallery Dublin could sit for the robots and then receive a digital version of their portraits. Does a computer see you the way you see yourself?
The system's analysis moves from coarse and geometry-driven in the beginning to more specific and detail-oriented in the end. At this point, distinctive patterns, areas, and objects that "excite" the computer vision system can be identified. The title of the work refers to the measurement of perfect human vision—20/20—contrasted with an as-yet unquantifiable measure of seeing for a computer vision system, represented by the variable "X". Visitors are invited to experience the process of seeing through a complex neural-network-based computer vision system to determine the value of "X" for themselves.
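As a rough illustration of how the regions that "excite" such a system can be surfaced, the sketch below computes a gradient-based saliency map with a pretrained image classifier. The model choice, preprocessing values and file name are illustrative assumptions, not details of the artwork's own software.

```python
# Sketch: a gradient-based saliency map showing which pixels most "excite" a
# pretrained classifier (assumed setup; not the installation's actual code).
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

model = models.resnet18(weights="IMAGENET1K_V1")   # any pretrained classifier will do
model.eval()

preprocess = T.Compose([
    T.Resize((224, 224)),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

img = Image.open("portrait.jpg").convert("RGB")    # hypothetical input image
x = preprocess(img).unsqueeze(0).requires_grad_(True)

logits = model(x)
logits[0, logits.argmax()].backward()              # gradient of the strongest response

# Per-pixel "excitement": how strongly each pixel influences the top response.
saliency = x.grad.abs().max(dim=1)[0].squeeze()    # (224, 224) heat map
print(saliency.shape, float(saliency.max()))
```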
Thus, the subject and the observer are embodied as one in the piece. The piece recounts a dialogue between the person as the observer and the person as the subject. When the viewer stands directly in front of the composition, all the levels merge and the piece can be viewed as one image. Only by changing the angle of observation can the viewer distinguish individual layers and the dialogues between the different viewing processes.
Lucida III is supported by the Wellcome Trust Small Arts Awards and Arts Council England.
Experience your colour perception in a poetic way
You step into a room. The walls are made up of coloured stripes. Above you, red, green and blue lights cycle through the spectrum of different colours. As the lighting changes, the walls around you seem to throb and move. Magical Colour Space looks at the basics of colour perception. When light hits an object, the object absorbs some of the light wavelengths and reflects the rest. The human eye and brain work together to translate this reflected light into colour. As the light in Magical Colour Space slowly changes its proportions of red, green and blue, the coloured stripes on the wall reflect only the wavelengths in their own colour. Because the brain uses changes in light levels to help detect motion, this creates the illusion that the walls are moving.
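A toy model of what happens on the walls, under the simplifying assumption that each stripe reflects only its own primary colour: the brightness a viewer sees is the dot product of the light's RGB mix and the stripe's reflectance, so differently coloured stripes brighten and dim out of phase as the lights cycle.

```python
# Toy model of the stripes' brightness as the RGB lighting cycles (illustrative only).
import numpy as np

# Idealised stripe reflectances: each stripe reflects only its own primary colour.
stripes = {"red": np.array([1.0, 0.0, 0.0]),
           "green": np.array([0.0, 1.0, 0.0]),
           "blue": np.array([0.0, 0.0, 1.0])}

for t in np.linspace(0.0, 1.0, 5):                      # one slow lighting cycle
    phase = 2 * np.pi * t
    light = 0.5 + 0.5 * np.cos([phase,                  # red channel
                                phase - 2 * np.pi / 3,  # green channel
                                phase - 4 * np.pi / 3]) # blue channel
    brightness = {name: float(light @ refl) for name, refl in stripes.items()}
    print(f"t={t:.2f}", {k: round(v, 2) for k, v in brightness.items()})
```

Because neighbouring stripes change brightness out of step with each other, the motion-sensitive parts of the visual system read the shifting pattern as movement, even though nothing on the wall is physically moving.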
An expanded cinema dialogue between strangers
Mirror II – Distance examines the distances between individuals who occupy, protect, and work in worlds that they don’t really belong to. The Diplomatic Enclave in Islamabad is a heavily gated expat community in the capital city of Pakistan. This enclave is cut off from the rest of the country by high walls and heavy security. Inside the enclave is a network of country and organizational compounds further barricaded from each other. Entry into the enclave and entry into the various demarcated territories inside is monitored by local Pakistani guards. These men are privy to the culture, conversations, and experiences of the international communities that they are responsible for protecting. In this piece, two Pakistani guards stand watch over the expat compounds. They observe each other from a distance as they listen to the visitors, experts, and specialists discuss Pakistan, its people, and its future. Using a cable-mounted camera system, both forward and rear views are filmed simultaneously. This piece uses an experimental filming format called “collimation,” which manipulates perception to provide an illusion of depth. This installation is part of the Mirror project, a series of two-screen works devised to provide insight into global communities that experience distancing and objectification.
Documentaries of collaborative performances
Mobility Device is a collaborative performance in which Carmen Papalia is accompanied by a marching band that replaces his white cane as his primary means of gathering information about his surroundings. As a piece of music, Mobility Device is an extension of the musicality of the white cane—fixtures such as curbs, lampposts, and sandwich boards become notes in the soundscape of a place. Mobility Device proposes the possibility of user-generated, creative process-based systems of access. It represents a non-institutional (and non-institutionalizing) temporary solution to the problem of the white cane. On June 1, 2013, Carmen performed a site-specific rendition of Mobility Device, with accompaniment by the Great Centurion Marching Band from Century High School, at Grand Central Art Center in Santa Ana, California.
White Cane Amplified documents the experiential research that Carmen conducted in preparation for a visit to the Franklin W. Olin College of Engineering in Massachusetts, where he is currently producing an acoustic mobility device in collaboration with students in Sara Hendren’s “Investigating Normal” class. The narrative depicts Carmen speaking into a bullhorn as he attempts to perform the social function of the white cane while maintaining his agency, finding support and communicating his nuanced and emergent needs.
A simple hole betrays your eyes
In the near future, it’s possible that we will use our eyes not only to take in information but also to deliver information. What if our gaze were monitored by someone else? How would we feel, and how would this affect our communication? Peeping Hole is an interactive installation that tracks a viewer’s gaze and reveals what they are staring at to an audience, without the viewer noticing. Through a small hole in the exhibit, visitors gaze at an image. The audience around the viewer can see what they are staring at, thanks to eye-tracking technology. The viewer may not notice their audience and what they can see until the next visitor steps up to view the exhibit. Peeping Hole is a playful look at vision monitoring and privacy, but will this technology become ubiquitous in the years to come, and how will it be used?
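A minimal sketch of the reveal mechanism, assuming an eye tracker already supplies a normalized gaze point; the image file, window name and gaze samples are placeholders, since the exhibit's actual tracking hardware and software are not specified here.

```python
# Sketch: highlight the viewer's gaze point on a second, audience-facing display.
# Assumes an eye tracker provides gaze as normalized (x, y) in [0, 1]; here it is faked.
import cv2

image = cv2.imread("exhibit_image.jpg")          # hypothetical image the viewer peers at
assert image is not None, "supply an image for the sketch"
h, w = image.shape[:2]

def audience_view(gaze_x: float, gaze_y: float):
    """Return a copy of the image with the current gaze point circled."""
    view = image.copy()
    center = (int(gaze_x * w), int(gaze_y * h))
    cv2.circle(view, center, 40, (0, 0, 255), 3)  # red ring where the viewer is looking
    return view

# Stand-in for a stream of gaze samples from the tracker.
for gx, gy in [(0.2, 0.3), (0.5, 0.5), (0.8, 0.6)]:
    cv2.imshow("audience screen", audience_view(gx, gy))
    cv2.waitKey(500)
cv2.destroyAllWindows()
```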
Deforming reality to fit the screen
We are increasingly living our lives through filters. Through social networks, through smartphones, through the coil of fiber and unseen airborne signals. In the communication age, we very often speak to the ones closest to us through digital means. The screen is no longer a window to somewhere else; it is, instead, the here and now, while our physical surroundings are slowly becoming the “other world.” We’re closer, yet further apart, than ever. Screen Mutations explores the growing role of video communication applications—such as Skype and FaceTime—in blurring the line between the physical and digital world. It imagines a speculative future where our physical reality is deformed to be viewed through a camera. This is achieved by designing a set of props—cups, teapots, utensils—that look deformed off-screen, while on-screen they look “normal” due to optical illusions achieved by the geometric distortion of a 3D object. Thus, the point of view of the webcam becomes the main design tool. The result is like a reversal of a Salvador Dalí painting: the objects have surrealistic and impractical shapes in the tangible world, while the image as it appears digitally seems to suggest otherwise.
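The underlying geometric idea can be sketched with a pinhole-camera model: any point may slide along its line of sight to the camera without changing where it lands in the image, which is why a physically deformed prop can still look "normal" through the webcam. The camera model and numbers below are illustrative assumptions, not measurements from the piece.

```python
# Sketch of the anamorphic principle: points moved along their view rays keep the
# same on-screen position. Pinhole camera at the origin, looking down +z.
import numpy as np

def project(points, focal=1.0):
    """Perspective projection of Nx3 points to Nx2 image coordinates."""
    return focal * points[:, :2] / points[:, 2:3]

# A few points on an (imaginary) cup rim, roughly 2 m from the camera.
cup = np.array([[0.10, 0.00, 2.0],
                [0.00, 0.10, 2.0],
                [-0.10, 0.00, 2.0]])

# "Deform" the object by pushing each point a different amount along its view ray.
scales = np.array([[1.3], [0.8], [1.6]])
deformed = cup * scales        # rays pass through the origin, so scaling stays on-ray

print(project(cup))
print(project(deformed))       # identical image coordinates despite the deformation
```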
Video about human connection
Seen/Unseen is a video projection that draws from human connection, amplifying the effects of the unconscious reactions we experience while engaging with others. It stems from a desire to visualize the gaze and to make visible the invisible sight lines between individuals. In the piece, an oblong frame acts as a peephole or pupil that reveals a view of suspended threads that span across the frame. At first, the piece seems very abstract; the viewers are left to witness the curious movements of these hanging strings. Although the mechanisms that create the movements are not obvious at first, the delicate lines appear to be alive. Within the last few seconds of the video, a slow zoom reveals the source of the movement to the viewer. The hair acts as a physical extension of the body and amplifies the effects of the unconscious reactions we experience while engaging with those to whom we feel closest.
Exploring human echolocation
Daniel Kish’s eyes were surgically removed before he was thirteen months old, to save him from an aggressive form of cancer. As Daniel grew up, he taught himself to see the world around him using echolocation. Daniel makes clicking noises with his tongue to understand his environment, navigating his surroundings by listening to the echoes as his clicks bounce off surfaces. Seeing is not a metaphor for Daniel. He uses the same part of his brain—the visual cortex—to picture his surroundings as people with eyes do. It’s just that the information comes in a different medium. It’s sight without light. This exhibit aims to give visitors a little glimpse into the world of sight without light by demonstrating one form of echolocation—seeing an object move closer to them by listening to the reflected sound of their own voice.
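The demonstrated principle, reduced to numbers: sound travels out, bounces back, and the round-trip delay encodes distance (d = c·t/2). The sketch below, using made-up signals and a nominal speed of sound, estimates that delay by cross-correlating an outgoing click with its echo; none of the values come from the exhibit itself.

```python
# Toy estimate of distance from an echo delay (illustrative signals, not exhibit code).
import numpy as np

fs = 44100                      # sample rate in Hz
c = 343.0                       # speed of sound in m/s (room temperature)
true_distance = 1.5             # metres to the reflecting object

# A short synthetic "click" and its delayed, quieter echo.
click = np.zeros(fs // 10)
click[:50] = np.hanning(50)
delay_samples = int(round(2 * true_distance / c * fs))
echo = np.zeros_like(click)
echo[delay_samples:delay_samples + 50] = 0.4 * np.hanning(50)

recorded = click + echo + 0.01 * np.random.randn(click.size)

# Cross-correlate; the strongest lag after the direct sound is the round-trip delay.
corr = np.correlate(recorded, click, mode="full")
lags = np.arange(-click.size + 1, click.size)
positive = lags > 100                       # skip the direct (zero-lag) peak
best_lag = lags[positive][np.argmax(corr[positive])]

print(f"estimated distance: {best_lag / fs * c / 2:.2f} m (true {true_distance} m)")
```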
Interactive installation inspired by synesthetic experiences
Has a sound ever reminded you of a shape, colour or taste? The term “synesthesia” is formed from the fusion of the Ancient Greek words for “together” and “sensation”. Synesthesia is a rare neurological condition in which different sensations perceived by different senses are mixed up. In one of the most common forms of synesthesia, letters or numbers are perceived as coloured. A synesthetic person may have the capacity to “hear” colour, “see” music, or even perceive different taste sensations by touching objects with certain textures.
When they describe their experience, synesthetes often talk about visual shapes on a “screen” located in front of their faces. Through the use of new technologies, this project aims to bring you closer to an audiovisual synesthetic experience. Using coloured shapes, a camera, a screen, and a programming tool, participants can assemble a sequence of colours and a computer will transform it into an audio-visual experience. This merging of senses evokes the experience of synesthesia.
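One way such a colour-to-sound mapping could work is sketched below with assumed values; the colour detection step, the note assignments and the output file are placeholders, not the installation's actual pipeline.

```python
# Sketch: turn a detected sequence of colours into a short melody (assumed mapping).
import wave
import numpy as np

# Hypothetical mapping from detected colour names to pitches (Hz).
COLOUR_TO_PITCH = {"red": 261.63, "yellow": 329.63, "green": 392.00, "blue": 523.25}

def tone(freq, seconds=0.5, rate=44100):
    """A soft sine tone at the given frequency."""
    t = np.linspace(0.0, seconds, int(rate * seconds), endpoint=False)
    return 0.3 * np.sin(2 * np.pi * freq * t)

# Pretend the camera stage has already recognised this sequence of coloured shapes.
sequence = ["red", "green", "blue", "yellow"]
audio = np.concatenate([tone(COLOUR_TO_PITCH[c]) for c in sequence])

with wave.open("synesthesia.wav", "wb") as f:
    f.setnchannels(1)
    f.setsampwidth(2)                       # 16-bit samples
    f.setframerate(44100)
    f.writeframes((audio * 32767).astype(np.int16).tobytes())
```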
Get an insight into how a computer “thinks” and “sees”
Machines can’t do what the human imagination can... yet. This installation researches the failure of machines and computers to simulate the human mind. A touchscreen allows the visitor to navigate through and explore a deep neural network. In machines, an artificial neural network is a computer algorithm inspired by the central nervous systems of animals. The webcam analyses in real time what it sees and what it has been “taught” to detect. What is detected is visualized as highlighted artificial neurons. The audience can then browse through all the neural layers and get an insight into how a computer “thinks” and “sees”. A voice tells visitors which layer they are looking at and what’s happening. In many cases, the visitor may not recognize these images, but the artificial intelligence appears to, demonstrating the limits of machine comprehension. This work demonstrates how AI and deep neural networks are easily fooled, a dystopian thought when you take into account the fact that they are already used by the military, in drones, and in Tesla’s self-driving cars. How much confidence do we have in ourselves and the technologies we develop? Or in societies and industries that are accelerating the development of AI and automation?
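A rough sketch of the kind of pipeline described above: a pretrained network classifies a webcam frame while forward hooks expose each stage's activations for browsing. The specific model, camera index and layer names are assumptions made for illustration; the installation's own software is not documented here.

```python
# Sketch: classify a webcam frame while forward hooks expose each stage's activations.
import cv2
import torch
import torchvision.models as models
import torchvision.transforms as T

model = models.resnet18(weights="IMAGENET1K_V1")        # assumed pretrained classifier
model.eval()

activations = {}
def make_hook(name):
    def hook(module, inputs, output):
        activations[name] = output.detach()
    return hook

for name in ["layer1", "layer2", "layer3", "layer4"]:   # the four residual stages
    getattr(model, name).register_forward_hook(make_hook(name))

preprocess = T.Compose([
    T.ToPILImage(),
    T.Resize((224, 224)),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

cap = cv2.VideoCapture(0)                               # default webcam, assumed present
ok, frame = cap.read()
cap.release()
if ok:
    x = preprocess(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)).unsqueeze(0)
    with torch.no_grad():
        logits = model(x)
    print("top class index:", int(logits.argmax()))     # what the net was "taught" to see
    # Coarse, geometry-like responses early on; more specific detectors deeper in.
    for name, act in activations.items():
        print(name, tuple(act.shape), "mean activation", round(float(act.mean()), 4))
```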
Artistic investigation of face-tracking algorithms
Computer vision relies on algorithms to make sense of the world. Unseen Portraits investigates what face recognition algorithms consider to be a human face. How much do you have to deform someone’s features to make them invisible to a machine? Portrait photos of visitors are distorted on a screen. A surveillance camera films the distortion and uses facial recognition software to scan the camera footage for faces while the image becomes more and more obscured. The moment the photo becomes too warped and the face can’t be recognized by the algorithm anymore, the software takes a screenshot. The visitor is now invisible to computer vision. Despite its subject matter, Unseen Portraits isn’t a conceptual investigation of the algorithms used. Rather, the project uses computer vision software as an artistic tool, creating images reminiscent of Francis Bacon’s self-portraits from the 1970s. It isn’t so much a mechanism to hide from the software as it is a way to capture the software’s flaws in a work of art.
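The mechanism can be approximated in a few lines, assuming a stock face detector and a simple wave warp in place of whatever distortion the installation actually applies; the file names and parameters are illustrative only.

```python
# Sketch: distort a portrait until a stock face detector no longer finds a face.
import cv2
import numpy as np

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

portrait = cv2.imread("portrait.jpg")              # hypothetical visitor portrait
assert portrait is not None, "supply a portrait image for the sketch"
h, w = portrait.shape[:2]
ys, xs = np.mgrid[0:h, 0:w].astype(np.float32)

for amplitude in range(0, 60, 5):                  # progressively stronger wave warp
    map_x = xs + amplitude * np.sin(ys / 20.0)
    map_y = ys + amplitude * np.sin(xs / 20.0)
    warped = cv2.remap(portrait, map_x, map_y, cv2.INTER_LINEAR)

    gray = cv2.cvtColor(warped, cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:                            # the face just became "unseen"
        cv2.imwrite("unseen_portrait.png", warped)
        break
```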