I’ve observed two modes of playing SoundSelf. First there’s a playful mode: The player asks the system “what can you do, how do I play with you?” They dance with their voice, push SoundSelf around, explore its limits. They’re having *fun*. Players always begin in this space, but SoundSelf’s magic happens when they transition into a second mode that I’d describe as “surrender”: Their breathing slows down, their voice falls into repeating rhythms, and they stop thinking.
This is the “trance” I’m always talking about. SoundSelf’s interaction is designed to distract your “inner voice” for long enough that you temporarily fall out of the habit of listening to and identifying with it, thus leaving your sense of identity open to being hacked and expanded. However, not everyone makes it over the hump into the surrender phase of the experience, or they’ll surrender for a few minutes before the inner voice returns.
In Zen meditation, when the mind inevitably wanders, you actively rein it in by deliberately drawing your focus back to your breath. Unfortunately, such a mechanism is not directly available to me as a designer without introducing inelegant symbols and instruction into the experience, which would change the nature of SoundSelf from play-partner to teacher, and a teacher is not what I’m interested in making.
So here’s the challenge: How can SoundSelf slowly seduce you into a deeper and deeper trance, but also catch you when you wander back into a playful frame of mind? I think it comes down to respecting the frame of mind the player is currently in – letting SoundSelf respond intuitively to your voice when you’re in the playful mode, but slowly and subtly leading you and moving with you once you’ve surrendered.
Ideally, SoundSelf would monitor the player’s brainwaves or heart-rate variability and use that data to change the program. For better or worse, this technology isn’t commonly available in commercial peripherals. But SoundSelf does have indirect access to a powerful biometric: your breath.
Imitone (the pitch-detection algorithm SoundSelf runs on, and the pride and joy of our programmer Evan Balster) is very sensitive to tonal sounds like your voice, but it’s not designed for atonal sounds like wind and breath. This is a feature: it effectively ignores background noise. So while SoundSelf can’t know when you’re breathing, or how deep and long your breaths are, it can make an educated guess based on the length of your tones and the space between them. Combining a two-minute rolling average of four elements…:
… and we get a pretty decent heuristic of how entranced you are, and thus how SoundSelf should behave.
It’s not perfect, and it’s quite sensitive to false positives (imagine if, in the middle of a period of long low tones from the player, SoundSelf suddenly interprets a distant bicycle bell as a short, high-pitched tone), but smearing the measurement out over about two minutes gives me a pretty decent high-latency measurement of where your head’s at, and what SoundSelf should do to gently nudge you deeper.
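The rolling-average idea above can be sketched in code. The four elements SoundSelf actually averages aren’t named here, so the four used below (tone length, gap length, pitch steadiness, rhythm regularity) are hypothetical stand-ins, as are all the constants and the `TranceEstimator` name; this is a minimal illustration of the technique, not SoundSelf’s implementation.

```python
from collections import deque

WINDOW_SECONDS = 120.0  # smear the measurement over roughly two minutes


class TranceEstimator:
    """Rolling-window guess at how entranced the player is.

    Each recorded event is one detected tone:
    (timestamp, tone_length, gap_before_tone, pitch_hz).
    """

    def __init__(self):
        self.events = deque()

    def add_tone(self, t, tone_len, gap_len, pitch_hz):
        """Record one detected tone; drop events older than the window."""
        self.events.append((t, tone_len, gap_len, pitch_hz))
        while self.events and t - self.events[0][0] > WINDOW_SECONDS:
            self.events.popleft()

    def score(self):
        """Return a 0..1 estimate of entrancement over the window."""
        if len(self.events) < 2:
            return 0.0
        tones = [e[1] for e in self.events]
        gaps = [e[2] for e in self.events]
        pitches = [e[3] for e in self.events]

        def mean(xs):
            return sum(xs) / len(xs)

        def variance(xs):
            m = mean(xs)
            return sum((x - m) ** 2 for x in xs) / len(xs)

        # 1. long tones suggest slow, full exhalations (cap at 8 s)
        long_tones = min(mean(tones) / 8.0, 1.0)
        # 2. long gaps between tones suggest slow inhalations (cap at 6 s)
        long_gaps = min(mean(gaps) / 6.0, 1.0)
        # 3. steady pitch: low variance maps toward 1
        steadiness = 1.0 / (1.0 + variance(pitches))
        # 4. regular rhythm: tone lengths that barely vary map toward 1
        regularity = 1.0 / (1.0 + variance(tones))

        return (long_tones + long_gaps + steadiness + regularity) / 4.0


# A "playful" player: short, varied chirps at changing pitches.
playful = TranceEstimator()
for t, pitch in zip([0, 2, 4, 6], [220, 440, 330, 550]):
    playful.add_tone(t, tone_len=0.7, gap_len=0.5, pitch_hz=pitch)

# A "surrendered" player: long steady tones with long breaths between.
surrendered = TranceEstimator()
for t in [0, 13, 26, 39]:
    surrendered.add_tone(t, tone_len=8.0, gap_len=5.0, pitch_hz=220.0)
```

Because a single stray event (that bicycle bell) is averaged against two minutes of history, it can only nudge the score, not swing it, which is exactly the false-positive tolerance the smearing buys.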
The original motivation behind Deep Sea was a dirt-simple question: how do I maximize immersion? It was driven by pure curiosity! I started out knowing from my own experience that fear can short-circuit the rational mind and touch players at a pre-cognitive level. But all the design decisions, like blinding the player, or playing back their breathing to obscure critical information, all of that was me blindly reaching into the darkness and holding onto what seemed to work. I’m very fortunate to have stumbled onto some ideas that worked incredibly well, but the great irony of Deep Sea’s development is that I didn’t know why they worked. It took about two years of watching people play Deep Sea for me to reverse-engineer my own game and figure out the why.
SoundSelf is built on those understandings. I’ve since come to see immersion as a function of trance. In other words, immersion is in the same family of experience as hypnosis, meditation, and Pentecostal possession. So while SoundSelf is a radically different game from Deep Sea, as a designer they are both knots in the same thread. Only now, instead of accidentally stumbling into a hypnotism design-space, I see that what I’m doing is literally hypnosis. This is tremendously freeing because I don’t have to depend on crutches like the fear response any more, and I can use these hypnosis techniques to induce ecstasy instead.
Games that reject visuals are rare because most games are about handling data… I think that’s what a lot of people think a game is! Handling data and making information-based decisions is as much a part of the paradigm of this medium as words are to literature. And humans are visual creatures, which means that we can process a lot of data in images. Games like Deep Sea and SoundSelf are about getting away from data.
But in terms of accessibility, it’s often thought that Deep Sea is a game for the blind, which it emphatically is not. Deep Sea is a game about weakness and dis-empowerment, and you get that by losing your primary sense. The blind, with their hyper-acute sense of hearing, can “see” straight through the brain-trickery that makes Deep Sea frightening. I definitely think there’s a huge untapped market for games for the blind. If I were doing this stuff for money, I’d be making games for the blind.
I think that until we get past this paradigm of games being about interpreting and managing data, controllers will still be built around dexterous hands and fingers. There are a ton of biofeedback technologies that are mature in their development. The only reason they haven’t been integrated into controllers yet is that market leaders think players want more of the same: games about navigating data and making decisions. That’s not what players want; that’s just what the edges of this particular skybox look like. What players actually want are experiences that take them on a journey. Systems for navigating information (call them “games” if you like) are just a familiar tool for getting there.
What’s exciting though is that VR is already shattering that paradigm. If I were in the console business right now, I’d be looking for a way to get ahead of the curve by integrating heartbeat sensors, breath-tracking, and EEGs into peripherals. The next generation of what we call games will not be about using the body as a means of control. It will be defined by experiences that blur the lines between self and software. This leap is right around the corner.