For this project, which was conceived in a conversation with artist Aaron Gemmill, a select group of artists has been invited to submit digital documentation or original works and subject them to Google’s reverse image search. The exhibition is then assembled from a number of the images these searches return. An accompanying catalogue will include short texts by theorists, philosophers, critics, artists, and curators on machinic vision and the inhuman future of the image.
Curatorial Assistant: Manuel Correa
Artists: Abbas Akhavan, Kristen Alvanson, Julieta Aranda, Diann Bauer, Amanda Beech, Kevin Bright, Eli Bornowsky, Christine Corlett, Ignacio Corral, Manuel Correa, Jake Dibeler, Samuel Forsythe, Aaron Gemmill, Johann Groebner, Antonia Hirsch, Juliet Jacobson, Gareth James, Khaled Jarrar, Philip King, Attila Richard Lukacs, Tom McGlynn, Christina McPhee, Mani Nilchiani, Rebecca Norton, Rebecca Quaytman, Raha Raissnia, Patricia Reed, Brian Rogers, Nooshin Rostami, Nicolas Sassoon, Charles Stankievech, Joni Spigler, Keith Tilford, Paul Wong
Co-sponsored by The New Centre for Research & Practice and Emily Carr University
Search engines were originally built to generate aggregate indexes of the Internet’s content. But as technology companies extract ever more value from the accumulating records of search results on their servers, several developments are beginning to shed light on the future significance of this technology. Today, near-infinite storage coupled with faster processors, stronger networks, and more powerful algorithms is helping search engines expand the horizon of machine understanding beyond initial human expectations. By constantly responding to the cognitive demands of a worldwide collectivity of humans, search engines are steadily enhancing their own mechanisms. They are learning not only to respond better to users but also to understand and speak to other machines. As a non-human tool for the non-human interpretation of information, the search engine stands a good chance of being one of the most viable technologies from which complex forms of artificial intelligence might eventually emerge.
With the introduction of reverse image search, which enables users to submit images instead of words as their search criteria, Google has positioned itself as the first large technology company to introduce machinic vision into users’ online search options, facilitating the emergence of a non-optical auxiliary to humans’ optical cognition of the world. It is safe to predict that “query by image content” (QBIC) methods will gradually merge with and eventually replace index-based image recognition, and that machines will soon assume a greater share of the responsibility for recognizing and contextualizing images. It is thus not hard to see why reverse image search is becoming an area of technological development that anticipates how future machines will approach the task of visualizing the human-dominated world, and thereby reconstitute it as a habitat for non-humans.
“For Machine Use Only” is a curatorial experiment based on the idea of art made by humans “for,” and “in collaboration with,” machines. As the unstoppable proliferation of images pushes us closer to outsourcing the interpretation of images to intelligent machines, audience considerations will sooner or later have to extend beyond humans to include non-human cognitive demands. Much like scripts, emoticons, and hashtags, images are in the process of becoming a common language for machine-human and machine-machine communication.
In an ever-growing number of contemporary art exhibitions, curatorial and display strategies are already being geared more towards the show’s photographic documentation than its firsthand biophenomenological experience by humans. Similarly, if not more pointedly, “For Machine Use Only” tasks the participating artists with thinking rigorously about how their artistic production is seen and understood as flat surfaces by machines.