he/they
aka nnirror
I build algorithmic art software and make art with it
MA in Media Arts
University of Michigan Department of Performing Arts Technology, 2025
Facet - live coding for audio, MIDI, OSC, images
Max for Live devices
Hardware / instrument hacking
Wax - web-based audio synthesis environment
Free, open source web application for audio synthesis
Runs in the browser - no installation
Sharing! Previous systems I built were hard to share.
Goal: Build something with a low floor and a high ceiling: immediately accessible, CPU-efficient, and live-codable
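At bottom, "audio synthesis in the browser" reduces to filling buffers of samples. A minimal plain-JavaScript sketch of that idea (illustrative only; `renderSine` is a hypothetical helper, not part of Wax):

```javascript
// Fill a buffer with a sine tone: the basic unit of any synthesis engine,
// browser-based or otherwise. (Illustrative sketch; Wax's internals may differ.)
function renderSine(freq, sampleRate, numSamples) {
  const out = new Float32Array(numSamples);
  for (let i = 0; i < numSamples; i++) {
    out[i] = Math.sin(2 * Math.PI * freq * (i / sampleRate));
  }
  return out;
}

const buffer = renderSine(440, 44100, 44100); // one second of A440
```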
The instructions on this screen will coordinate our actions and help us produce different sounds as a group
Embrace the chaos!
Stillness: Hold your phone as steady as possible so we can start from silence
As Loud as Possible: Shake your phone and move it around vigorously.
Up/Down: Follow the screen brightness: bright = up, dark = down
Randomize: Move to a random new phone position when the screen color changes; otherwise keep still
The Wave: Only move your phone when the line is on your side of the room. Otherwise keep it still
Figure 8: Try repeating a figure 8 at a different speed than the person next to you
Figure 8: Change it up! Slow down or speed up
Make a choice and point your phone:
Keep your phone still and wait until it stops making sound.
Stop audio by locking your phone or turning volume all the way down
Text-based systems with user interfaces beyond text
Flow-based systems with codeboxes everywhere (Max 9.0)
Flow-based programming: Direct control of DSP in real time; less flexible, since big changes often require rewiring connections
Live coding: Control abstracted into text; more flexible
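The flexibility difference can be sketched loosely (hypothetical syntax, not Wax's or Max's actual notation): in a flow-based patch the signal graph lives in hand-wired connections, while in live coding the graph is re-derived from text, so restructuring is just an edit and a re-evaluation:

```javascript
// Hypothetical sketch of why text is more flexible: the signal chain is
// derived from a one-line description rather than hand-wired connections.
function buildGraph(code) {
  // "osc >> filter >> out" becomes an ordered list of node names
  return code.split('>>').map((name) => name.trim());
}

let graph = buildGraph('osc >> filter >> out');
// A structural change is just an edited string, re-evaluated live:
graph = buildGraph('osc >> gain >> filter >> out');
```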
Miller Puckette: "Culture is increasingly transmitted in functionality and not in passive documents"
—"A Case Study in Software for Artists: Max/MSP and Pd," Art++, Editions Hyx, pp. 1–9, 2009.
Web-based art systems can be downloaded, modified, shared, and reconstructed in a matter of seconds
Shorter feedback loop between culture creation and distribution
"Undo" stack: HTML input elements ship with their own built-in undo behavior, and getting that to play nicely with Wax devices has been tricky
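One way to reconcile custom widgets with undo (a generic sketch, not Wax's implementation) is a command stack where each action records its own inverse, sidestepping the browser's native input-element history:

```javascript
// Generic undo/redo stack sketch (not Wax's actual code): each action knows
// how to apply and how to revert itself, independent of native <input> undo.
class UndoStack {
  constructor() {
    this.done = [];
    this.undone = [];
  }
  perform(action) {
    action.apply();
    this.done.push(action);
    this.undone = []; // a new action clears the redo history
  }
  undo() {
    const action = this.done.pop();
    if (action) {
      action.revert();
      this.undone.push(action);
    }
  }
  redo() {
    const action = this.undone.pop();
    if (action) {
      action.apply();
      this.done.push(action);
    }
  }
}

// Usage: tracking a hypothetical device parameter instead of relying on
// the input element's own undo behavior.
const state = { cutoff: 440 };
const setCutoff = (next) => {
  const prev = state.cutoff;
  return {
    apply: () => { state.cutoff = next; },
    revert: () => { state.cutoff = prev; },
  };
};

const stack = new UndoStack();
stack.perform(setCutoff(880)); // cutoff is now 880
stack.undo();                  // cutoff back to 440
```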
Abstractions: The base assumption during development was a 1:1 correspondence between devices and the underlying audio graph, so Wax lends itself more to sketching
In both cases, these trace back to initial assumptions
Iteration is the secret!
Huge accumulations of subtle decisions
Without feedback from many other people, Wax would not have been nearly as successful
"Instant music, subtlety later"
—Cook, P.R. (2001). Principles for Designing Computer Music Controllers
You will forget and neglect things
Because you must go deep AND wide in order to build it
Get comfortable with a form of incompleteness
How will children remember today's technology?
Modern phones are insanely powerful. Want to transmit 8-channel audio to a modular synthesizer from an iPhone? NO SWEAT
Most of the time, however, phones don't feel fun
It is important to imagine other futures
nnirror.xyz