I am fortunate enough to have received a residency at Eyebeam for Winter / Fall 2012. All Eyebeam resident applicants apply with a project in mind. What is my project? v002.app.
What is v002.app?
An experimental live (video) performance application, with a large focus on improvisational user interface and interaction design.
Why? Because I want a tool that is meant to be played. A tool that doesn't feel like work. A tool that is designed to adapt effortlessly to changing circumstances.
I feel a deep dissatisfaction with the current crop of video performance tools. While most are capable of performing their task (and I use them to the best of my ability), none attempts to approach the core problem of software as instrument. An instrument you can pick up and instantly join an improvisational moment with. Think jazz.
Too often, time is spent creating presets, adjusting windows, importing media, setting effects chains and mapping MIDI triggers. Once your presets are set, you can move about creatively, but only within the box you’ve made for yourself (if you even have that option). Adjusting the assumptions of your presets takes time in all of these applications, shaving critical moments off the interval between when you want to react to a change in your environment and when you can finally get that decision out to the world.
Moments are fleeting, and anything that gets in your way is only going to compound the issue. And that is without discussing the subtler issues: visual cognition, user interaction design, competing visual cues and information overload, hunt-and-peck flight-deck interfaces, and UIs so incredibly modular they would make F-22 fighter pilots blush.
Try this experiment. Open your performance environment of choice and create a new, empty project. Load your music library and put it on random. Hit play.
Now perform visuals. Quick, get something on screen that fits the mood.
How long does it take you to create a composition? How much flexibility do you really have once you’ve settled on the look / feeling / mood of the first track, to react to whatever happens to come on next? Do you feel behind?
Performing live visuals isn’t easy when you know whom you are performing for, when you have media and presets pre-made, have rehearsed, prepared, and have a known track list. Add live musical improvisation, add uncertainty, add unknown variables, and on top of all of that, try to push yourself to really perform, adapt and play. Can you do more than trigger a movie, scratch a clip or throw seemingly random effects on top of a clip while audio-analysis makes decisions for you?
Is there a language you are trying to speak in? Is your tool fluent?
How do you bend your clip library to match a mood you did not plan for? Make a flashy electro-house motion-graphics cliché dance to a different beat and find a home in some country western? Quick, change that preset. Add a layer. Add a mixer. Drop in a mask. Now remove all of that; it’s a new tune, and it demands a new feeling.
Perhaps I am being unfair. Most of this comes down to you, the live visualist. Your taste, your visual aesthetic, particular synaesthetic approaches and how well you know your instrument. Your clip library, the footage you use and how you use it.
Just like in any discipline, “garbage in” generally nets you “garbage out”, but the tool, and the process the tool enforces on you, has huge ramifications even when you are at the top of your game.
How quickly can you react? How dynamic is the tool? How many decisions per frame can you really make? Does it get out of your way?
I only have a few answers, and many more questions – this is an inexorably complex problem with no correct answer. These issues are qualitative, driven by metaphor, ideas, approaches, limitations, even aesthetic assumptions. That said, I have an approach I think is worth sharing:
- An application to create an aesthetic experience must itself be an aesthetic experience. Function follows form – but the function is form – dynamic and changing. This is the most important guiding principle.
- Shorten the time to react to changes. The interface must be non-intrusive, non-distracting, and customizable, and interactions must be contextually relevant to the task being attempted. This is the main pragmatic focus.
- Reduce complexity, repeat the same metaphors for systematic approaches to similar problems.
- Treat the application as an instrument to be played.
And these are some common pitfalls I find myself revisiting in practically all live performance software:
- Performance interfaces must be “low latency” – this means:
- Consistent – you don’t have to switch mental models, second-guess, interpret, etc.
- Non-distracting – you don’t have to ignore overwhelming UI cues.
- Low complexity – you don’t have to deconstruct abstractions.
- Context aware – show only what you need in a contextually relevant fashion.
- Customizable but consistent – adapts to my needs, not the other way around.
- Provide an at-a-glance understanding of the state of your composition.
Additionally, I find that:
- Layers are constructs, not necessities – they add interface overhead and yet another abstraction. Anyone familiar with node-based tools knows this.
- Preset philosophies are the antithesis of improvisation.
- Choosing pre-selected media is (generally) the priority – not setting a mood.
- Limited context-aware actions and multiple interface paradigms all encourage “high-latency” UX design: I have to stop and think to use the tool – find the pulldown menu, the tab, the disclosure triangle, etc.
Where to go from here? I have some ideas I’ve been working on…