
Notes and works in progress towards my new body of work for 2017

In late 2017 I will be exhibiting a new body of work. Between now and then I will be preparing for and creating this work. I will be using this repository to collect, process and refine my ideas and approaches. You are welcome to peruse, share and collaborate with me if you feel the need or desire.

No pages here are complete (unless otherwise stated). There will be typos.

My email is and I'm on Twitter at @peteashton. My art website is

Instructions for Humans

Final draft proposal, Feb 2017


A lot of my work has been about interrogating the black box - closed systems or devices whose workings we cannot access - attempting to reverse engineer them by putting the "wrong" things in and seeing what comes out. (See, for example, Sitting In Stagram, exploring Instagram's compression algorithms by re-posting the same image over and over.) By looking at these unintended results I can start to understand how the intended results are created and reclaim some power over the system.

In this way I build a relationship with the system, similar to how one might build a relationship with another person by talking to them and asking them questions outside the expected context in order to understand the functions of their mind. The difference between people and closed systems is that with people there's a feedback loop. They are learning about you as you are learning about them. Your series of questions, and your response to the answers, will affect how they subsequently answer, for example.

Machine Learning systems are today's black box, producing valuable information for society in a way that society does not understand. Even the people who programme them don't fully understand what's going on inside their massively complex processes, and the outputs are often unpredictable. They also "learn", exploring new options and hardening successful ones, based on new information. It is therefore possible to imagine something analogous to a person. An artificial intelligence.

But Machine Learning systems are not people. They are merely complex statistical generators. It is a mistake to see them as building towards some kind of intelligence, let alone consciousness (if there's a distinction).

What is interesting, however, is how we project consciousness on to their results. The varying outputs of ML systems look like intelligence so we project intelligence on to them. But the outputs are mathematical abstractions of representations of the world. The machine is not drawing a face or writing a poem. There is no intention or desire. When we see intention we are making it up in our minds. That's interesting.

(This also applies, in some ways, to the humanisation and personification of animals. I do it all the time with my rabbits. Which is why I'll never understand my rabbits.)

If Art is the practice of creating abstractions of the world in order to better comprehend the world, then Machine Learning systems are ripe for artistic investigation.

Instructions for Humans

Instructions for Humans is the working title for proposed new work by Pete Ashton developed during 2017 for exhibition in November 2017.

The works aim to explore how machine gesture informs the human creative process and in turn how human gesture might inform mechanical representations. By employing recent developments in artificial intelligence and machine learning I will be asking what it means for a computer to "see", how society can be influenced by opinions derived from the perceptions of machines, and how interrogating mechanical systems can help us to question the biases of our own sense-based cognition.

I will be producing the work during the period after Article 50 has been invoked, the first months of Donald Trump's presidency and elections in Europe where populist demagogues hope to prey on anxieties about refugees, migrants and foreign "others". In this climate the work will address concepts of filters, truth, objectivity and the fungibility of facts, looking into how we might develop tactics to understand and deal with this media environment.

The work will balance serious themes with an explorative and educational approach, encouraging audiences to think about these processes and systems in modern society and question the place of cameras and other sensors in an era of massive data processing by governments and corporations. My end goal is to develop work which reveals the man behind the curtain but also dispels the confused fear and despair that often inform discussion of these issues.

The Work

The work will be developed through events and residencies during the summer and culminate as an evolving performative exhibition directed by Machine Learning systems, which are in turn directed by the artist.

Machine Learning systems, commonly called "algorithms", are complex statistical programmes which use vast quantities of data to predict a likely outcome. A simple example is Predictive Text or Autocorrect which notices the phrases you commonly type into your phone and suggests them to you. A more controversial case is predictive policing which uses historical crime data to suggest where police resources would best be deployed. In all cases, the algorithm is "trained" on a corpus of data and all its results are constrained by the quantity and quality of this information.
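The idea of a system "trained" on a corpus and constrained by it can be sketched in a few lines of Python. This is a toy bigram frequency model, not any particular product's algorithm; all names here are illustrative:

```python
from collections import Counter, defaultdict

def train(corpus):
    """Count which word follows which across a corpus of sentences."""
    follows = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.lower().split()
        for current, nxt in zip(words, words[1:]):
            follows[current][nxt] += 1
    return follows

def suggest(follows, word, n=3):
    """Suggest the n most likely next words, ranked by observed frequency."""
    return [w for w, _ in follows[word.lower()].most_common(n)]

corpus = [
    "see you at the gallery",
    "see you at the station",
    "see you at the gallery tonight",
]
model = train(corpus)
print(suggest(model, "the"))  # the model can only suggest what it has seen
```

Note how the suggestions are entirely determined by the training sentences: a word the corpus has never seen after "the" can never be suggested, which is the sense in which results are constrained by the quantity and quality of the data.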

A significant part of Machine Learning in recent years has involved image analysis and computer vision, for use in areas such as the development of self-driving cars and next-generation surveillance. This has caught the attention of both visual artists and privacy campaigners and will form the basis for my work.


Running on a custom-built Linux computer, my systems will be trained on information coming from a variety of sensors placed around the city along with data gathered from internet queries.

Sensors placed around the city (with permission of property owners) will capture video, still images, environmental information, sound, radio signals and other data. Official data sources for weather, traffic, crime, pollution etc. will also be captured. Online queries of social media and news sites will concentrate on issues deemed pertinent to the city such as homelessness, immigration and economic development, and will collect text and images.

People will be invited to upload their own photographs relating to the city and during the exhibition there will be an opportunity to do this within the gallery.

All this visual and textual information will be used to train the systems so they can be interrogated for instructions or to generate new works. Speculative ideas for development might be:

  • A portrait booth where data about a person is captured (face structure, cellphone radio waves, body temperature) and an image generated using the corpus of data in the system.
  • A constantly updating 3D environment generated from architectural photographs and data captured in the city.
  • Plans for the fabrication of objects to be displayed in the gallery, along with instructions on how to display them.
  • Directions for the capturing of more photos and information from the city by volunteers, such as routes to walk and objects to find.

All data and images created during the execution of these instructions will be submitted to the training corpus, creating a feedback loop of human activity and machine interpretation and instruction, generating artworks which constantly evolve, albeit within constraints.
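The feedback loop described above can be sketched in Python. This is a hypothetical illustration only: the function names and the "generator" (which here just recombines fragments of past data) stand in for the project's actual Machine Learning systems and human participants:

```python
import random

def generate_instruction(corpus):
    # Stand-in for a trained model: recombine two fragments of past data
    # into a new instruction for a human to carry out.
    a, b = random.sample(corpus, 2)
    return f"{a}, then {b}"

def human_response(instruction):
    # Stand-in for the photos, performance data, etc. a person produces
    # when executing the instruction.
    return f"documentation of '{instruction}'"

# Seed corpus of captured city data.
corpus = ["walk north", "photograph a doorway", "record street sound"]

for _ in range(3):
    instruction = generate_instruction(corpus)
    # The loop: each human output becomes new training data,
    # so later instructions are shaped by earlier responses.
    corpus.append(human_response(instruction))

print(len(corpus))  # the corpus grows by one item per round
```

The constraint mentioned in the text is visible here too: however many rounds run, every instruction is built from material already in the corpus, so the work evolves but always within the bounds of what has been fed in.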

A key output of the algorithm will be physical instructions for humans. I am keen to work with performance artists (Emily Warner and Aleks Wojtulewicz have expressed an interest) who will be asked to interpret the generated output as movement. Data from these performances (visual, biometric, aural, etc) will be fed back into the system. I also want to apply machine learning techniques to the Photo Walk model I have developed for technologically augmented psychogeographic exploration of place, using instructions from the system on where to go and what to capture (images, audio, sketches, transcriptions, etc) and feed back into the machine.

There will be a series of public events prior to and during the exhibition itself, culminating in an online resource. The exhibition will be informed by the concerns, ideas and practices emerging from those events.

I plan to run a symposium-style event for Machine Learning in the Arts, possibly in collaboration with Luba Elliott who runs the Artists and Machine Learning events in London. I would hope to gather local, national and international experts and practitioners, in person and via Skype, with the aim of inspiring future activity in the West Midlands and supporting my own practice.

The work will be developed by myself in collaboration with others as appropriate. I will use the Fizzpop hackspace in Digbeth for most of my fabrication and hope to collaborate with some of the members there. I will develop and preview visual work through the Black Hole Club at Vivid Projects. Substantial pieces of the work will be developed through residencies at Birmingham Open Media. I also run a low-key monthly artists' feedback group at P Cafe in Stirchley.

During the production period I will be in communication with other artists and scientists working in this field as well as attending the Resonate festival in Serbia (and other related events across Europe as funds permit) to network, develop and promote the work.

All work will be documented online with the intention of creating a valuable resource for creation of similar work by others.

Inspirational work by others

Much more to go in here.

In the field of Machine Learning

Sensor Data and Experimental Capture


Date    Activity
March   Submit G4A; learn Machine Learning
April   Resonate Festival; hear from ACE
May     Start work; build sensors
June    Build Robot AI; test sensors in city; residency / workshop at BOM on sensors? (Exp Capt 2)
July    Translate AI outputs
Aug     Work with performers (residency)
Sept    Possible Camera Obscura work; produce prototype sculptures for exhibition
Oct     End development process; tie it all together
Nov     Exhibition opens; walks, performances, talks, etc; AI WM Symposium
Jan     Documentation online; evaluation for ACE; raise profile; explore touring to other galleries / cities; submit talks to conferences