A foul-mouthed, self-injuring bot Jérémie Wenger, April 2018
IS71014B: Workshops in Creative Coding, term 2 Theo Papatheodorou, Goldsmiths College, University of London
Run $ node wordster.js, at which point it should be able to communicate with ofxOffenceMechanism through OSC.
Offence Mechanism uses various lists of words (a list of banned terms from a Christian website, Wikipedia, The Online Slang Dictionary) as well as my own texts to craft a rather crude, primitive Twitter bot. The main purpose of the bot is to interact with itself, treating its own posts on a par with any other, thus creating a picture of absurd self-harm and psychotic/schizophrenic switches between vulgar, aggressive, casual or apologetic tones.
The bot's main activity is to post random words from the main list of banned terms, complemented by its interactive functions and its ability to retrieve information from Urban Dictionary. The most likely reaction to any new follower is aggressive rejection, but any mention of the handle is treated in the same way as the bot's own tweets, with the same mixture of aggression and guilt. A function allows the bot to 'retract' its last tweet, as if in a fit of regret (usually soon followed by more aggression and outrage). The Urban Dictionary API function checks the last blurted word and posts a tweet containing a definition, tags, or even a link to one of the recordings available on the website. In the middle of all that, other reflections, notably on the nature of the project itself, or unrelated quotes and meditations, are thrown into the cauldron.
Two figures emerged as potential reference points for this work: Pierre Guyotat, born in France in 1940, author of radically experimental works dealing with extreme violence, prostitution, and war, written in a complex and idiosyncratic idiom blending classical, even archaic forms of French with slang and borrowings from Arabic and other languages; and Donatien Alphonse François, Marquis de Sade, whose exploration of violence and outrage remains unsurpassed in world literature, but who, paradoxically, was also an ardent commentator on and critic of the French Enlightenment and the Revolution. Although bots and Internet abuse are important topics, this project was not undertaken as an answer to or stance on these issues, but rather out of 1) a whim, mostly reacting to the imposed restrictions of the course framework, and 2) a long-standing, recurring interest in insults and abuse and their role in language, and in literary expression in particular.
Full information on the JSON format can be found here.
This project finds echoes there as well, although obviously not in tone. She went much further down the replacement rabbit hole (she uses JSON files and codes embedded within strings, which she parses afterwards to replace elements), something I had too little time for. Certainly an avenue for development.
For further study: this seems to be a scenario where a knowledge of threading would have helped: one could plan for several discussions / chains of reactions taking place in parallel, without interfering with each other... I could not reach this level yet, but hope to some day soon!
In fact, one of the most promising developments I can think of now would be to turn this into a properly schizophrenic bot, with events triggering the birth and death of instances (objects of a class) that would all live and interact with each other within the same bot (one could see two scenarios: one where the entities are named and identifiable by readers, and one where they are not, leading to a properly insane tweet feed...).
The same issue arose when dealing with API requests to Urban Dictionary: I was able to devise a fairly complex function that handles the result of such an API call, making use of the data received and producing tweets from it, but the overall structure is still imperfect, and works only if one search is fully dealt with before the next call is made. I now think I should have several search objects, each containing one search and able to return its results in its own time, before being erased from memory. I started making attempts at developing this, but ran into issues I could not solve before the deadline.
Another structural question is time. The speed of the program, far higher than human levels, could lead to the Twitter account being blocked, so I introduced all sorts of 'pauses' into it using ofSleepMillis(), which feels like a dirty trick. I would probably need something like threading and a better system for keeping track of events, especially parallel ones.
I did not have the time to venture into more Twitter functionality, such as retweets or direct messages (which is not too bad, as the main goal was a robot that interacts mostly with itself). Had I been able to make ofxTwitter work, it might have been possible to make faster progress.
A major step would be to make ofxTwitter work (which would in turn foster a deeper study of Chris Baker's suite of addons: ofxHTTP, ofxIO, ofxJSONRPC, etc.). One could envisage a development where, instead of functions doing various jobs, one had classes, gradually making the system evolve towards a true schizophrenic bot with several instances living within it (not only ones that insult and ones that apologize or retract, but also different styles of abuse, using various vocabularies). Another scenario could be to develop a community of bots abusing each other. A look into replacement grammars could be interesting and funny, especially for generating hashtags or more creative insults. Ultimately, integrating more powerful linguistic tools and AI seems inevitable if one were to push this project forward.