Nonsynth is an experiment in performance poetry. The system interprets MIDI signals sent from an electric guitar as language, allowing the performer to compose poetry by playing chords or melodies. The project is one of a number of my explorations into digital dadaism and “neo-surrealist” language art. Each of these experiments involves building a system that computationally scrambles or conjures language, then using the resulting language as material for my own writing: following it where it leads, expanding on it, or gluing it together. The systems are meant to act as something like transcendental oracles, inspired by a legacy of writing practice and writing technology that can be traced to the 13th-century mystic Abraham Abulafia and carried on in the dream practices of W.B. Yeats and Salvador Dalí. These artists used systematic methods to prime themselves into rich landscapes of imagery and ideas. The multi-sensory nature of the project invites the user to search for language with the idioms of music rather than the pen stroke, and the loosely steerable control scheme positions them as a guide to the flow of language rather than its singular source. I’ve found the instrument works best as a performance piece: playing a handful of notes and reading the output leads to a somewhat lyrical experience.
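For the curious, the input side of such a system is conceptually just a MIDI event loop. The following is a minimal sketch, not Nonsynth’s actual code: it assumes a guitar-to-MIDI interface visible as a standard MIDI port, uses Python’s `mido` library, and the `handle_note` callback is a hypothetical stand-in for the language layer.

```python
# Minimal sketch (not Nonsynth's actual code): listen for note-on events
# from a guitar-to-MIDI interface using the `mido` library.
import mido

def handle_note(note: int, velocity: int) -> None:
    """Hypothetical hook: hand each played pitch to the language layer."""
    print(f"note={note} velocity={velocity}")

# The port name depends on the interface; mido.get_input_names() lists options.
with mido.open_input() as port:  # no argument opens the default MIDI input
    for msg in port:
        # Some devices send note_on with velocity 0 instead of note_off,
        # so filter on velocity as well as message type.
        if msg.type == "note_on" and msg.velocity > 0:
            handle_note(msg.note, msg.velocity)
```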
An important element of the project is its use of large language models (LLMs). I remain enthusiastic about their potential for expressivity: their use here is meant to broaden the linguistic landscape of an artist who takes up the generated text purely at their own discretion. However, as my research into writing technology begins to touch on sociocode perspectives, I have become worried about the potential of this technology to ‘automate the subject’ and play a central role in future identity formation - identity formation being an area that writing is said to facilitate. By depriving the user of highly agentic control patterns, the system confronts both our limited control over LLM-generated text and our limited agency over our language in general, here reframed in terms of text’s materiality.
Individual notes played by the performer are translated into words sampled from a given text source, such as a poem or essay. The text is additionally infilled by multiple LLM agents, each with a different prompt encoding its own instructions for interpreting the text and its own generative role. Each word is colored according to the sampling strategy that produced it.
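A minimal sketch of how such a pipeline might be wired together, assuming a word list drawn from a source text, hypothetical `Agent` and `infill` stand-ins for the LLM agents, and a per-strategy color tag; none of these names come from the project’s actual implementation:

```python
# Illustrative sketch of the note-to-word pipeline described above.
# SOURCE_TEXT, Agent, and infill are hypothetical stand-ins.
import random
from dataclasses import dataclass

SOURCE_TEXT = "the sea turns its pages slowly under a patient moon"
WORDS = SOURCE_TEXT.split()

@dataclass
class Agent:
    name: str
    prompt: str   # instructions for how this agent interprets/extends text
    color: str    # each sampling strategy gets its own display color

AGENTS = [
    Agent("echo", "Repeat and vary the most recent image.", "#d08770"),
    Agent("drift", "Introduce an adjacent, dreamlike image.", "#88c0d0"),
]

def word_for_note(note: int) -> str:
    # Map pitch to a word from the source text; seeding by pitch class
    # makes the same note reliably pull from the same region of the text.
    rng = random.Random(note % 12)
    return rng.choice(WORDS)

def infill(agent: Agent, context: list[str]) -> str:
    # Placeholder for an LLM call conditioned on agent.prompt and the
    # context so far; a real system would call a model API here.
    return f"[{agent.name}]"

line = [word_for_note(n) for n in (40, 45, 50, 55)]  # a short open strum
line.append(infill(random.choice(AGENTS), line))
print(" ".join(line))
```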
Future and ongoing work includes a control scheme that takes into account the relationships between musical notes in terms of their intervals and maps this information onto text-selection strategies. I also plan to implement a series of ‘performers’ that dance text across the screen in different ways, playing with space and animation, potentially drawing from concrete poetry.
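As a purely illustrative thought experiment for the interval-based scheme, one could reduce consecutive notes to an interval class and look up a selection strategy; the strategy names and mapping below are invented for the example, not a finished design:

```python
# Hypothetical sketch of interval-driven text selection; the strategy
# names and the mapping are illustrative only.
INTERVAL_STRATEGIES = {
    0: "repeat",          # unison: restate the current word
    7: "source_sample",   # perfect fifth: draw from the source text
    1: "llm_infill",      # minor second: hand off to an LLM agent
}

def strategy_for(prev_note: int, note: int) -> str:
    interval = abs(note - prev_note) % 12  # reduce to an interval class
    return INTERVAL_STRATEGIES.get(interval, "source_sample")

print(strategy_for(60, 67))  # a fifth -> "source_sample"
```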