“While you are in SoundSelf, you dwell in a place which isn’t sleep and isn’t wakefulness. A level of consciousness wraps you which is both extraordinary, and utterly normal. SoundSelf might be the ground floor of an Oculus-biofeedback field to be, or simply an aid to a meditative practice already established.”
It’s a truism my programmer Evan reminded me of just a few weeks ago, as I was bemoaning how long it’s taking to finish the game: making a video game is hard. It’s really hard. It’s a lot harder than I thought going into the project! One of the most common questions I’m asked by backers is “when will it be out?” so I want to take a moment to describe where we’re at and where we’re going:
Almost all of the content is in place, and the game works phenomenally well. But the last little details that make the experience feel seamless, from launching the program to closing it, demand a lot of care, and they’re important to making the player feel held and safe as they step into an unusual experience.
Here’s a sample of our to-do list:
Our initial plan was to launch SoundSelf at the same time as the Oculus Rift and roll the project into the wave of stories about VR, helping our audience find the game that way. Unfortunately, we have missed that boat! Which is fine, but it means that once the game is ready, we’re going to take time to strategize the release in a way that helps people find it.
Fortunately, we’re already ahead of the curve on some of the more complicated Things To Do. Our trailer has been shot and edited, for example. It’s really cool.
Well… now if you’d like! The alpha is relatively up to date, and can be bought from www.soundselfgame.com
Right now, SoundSelf runs best on PCs – we’re working out some kinks on Mac OS that will be resolved by launch (we’d love your error reports if you’re on a Mac). Here are the minimum specs for PC:
Windows 7 SP1 or higher
Nvidia GTX 970 GPU or equivalent
Intel i5-4590 or better CPU
8 GB of RAM
Headphones you like (you must use headphones)
I recommend playing in VR (we will be supporting Oculus, Vive, PSVR, and potentially the other lower-spec VR headsets) or with a projector in a dark room.
We are no longer attempting to port SoundSelf to mobile for Gear VR. It’s just too much of a headache to get it to a reasonable framerate.
When you scuba-dive, a good snorkel mask makes all the difference in the world. Your eyes are your most finely tuned and vulnerable perceptual instruments, and the job of a mask is both to protect them and to provide an environment in which they can function. If water leaks in, if the mask fogs up, if it presses too firmly into your head, you are no longer immersed in a magical underwater world. Instead you are uncomfortably aware of your land-lubber body as it falters in an environment it didn’t evolve for.
VR has been around for a while, but what Oculus has done is the equivalent of creating a better scuba mask: one that doesn’t constantly spray salt-water into your eyes. And in my opinion their most important innovations have been in head-tracking. This means the in-game camera gracefully matches the movement of your head. It’s a complex challenge, and that Oculus has finally gotten it right is the reason so many of us in the tech world are confident that VR is here to stay.
This isomorphic treatment of head movement makes perfect sense for a symbolic virtual reality – one in which the imagined self on the other side of the VR veil is a binocular animal in a universe with three spatial dimensions. But the same assumptions cannot be made for a truly abstract virtual reality like SoundSelf. In SoundSelf, the participant isn’t submerged like a scuba diver into a magical vision of our own universe – they are instead invited to leave their body behind and forget that they’re human at all. The trouble with an isomorphic head-tracking treatment for SoundSelf is that it makes the participant aware of their body, aware of the visualization as a three-dimensional “space” that they “occupy.” And while it doesn’t totally break the experience, it’s like a bungee connecting the abstract perceptual flow to the memory of being human.
So the question remains: what do we do with head tracking?
By far the most common support inquiries about SoundSelf’s alpha concern our lack of support for head tracking. For better or worse, people expect head tracking in virtual reality experiences, abstract or not, and when head movements don’t provoke a virtual response they think something is broken. So ditching head tracking is not a serious option.
The most attractive technique we explored mapped geometry movement and color to head acceleration. Like rosary beads or whirling dervishes, repeated physical actions can deepen a trance, and I loved the idea of encouraging rhythmic head rocking. These experiments felt great while I tested them for about half an hour: there is a gentle intimacy to feeling the world spin and brighten as your head moves from side to side. But there was one disastrous flaw: upon taking off the VR headset and returning to reality, I felt dizzier than I’ve ever felt in my life. I couldn’t see straight or drive safely for about five hours. Disastrous.
The solution we finally settled on is to keep the visualization stationary in the center of vision, but wrap it in a sphere that *does* respond to head movements isomorphically. SoundSelf begins with the sphere entirely visible, but as the experience gets under way it slowly fades out, coming back into prominence again only when the participant moves their head. It communicates to the participant, “yes, I’m here, nothing is broken” without compromising the abstraction of the virtual space. Players have reported feeling the world respond to their head movement, but not being able to put their finger on what it is that gives that impression.
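To make the mechanic concrete, here’s a minimal sketch of how an acknowledgement sphere like this might behave. This is illustrative Python pseudocode – the names and constants are my stand-ins, not our actual implementation:

```python
# Minimal sketch of the fading "acknowledgement sphere" described above.
# Constants and names are illustrative assumptions, not SoundSelf's code.

FADE_OUT_PER_SECOND = 0.05  # how quickly the sphere vanishes while the head is still
FADE_IN_PER_RADIAN = 2.0    # how strongly head rotation brings it back

class AcknowledgementSphere:
    """A world-locked sphere wrapped around the view-locked central
    visualization, visible only when the participant moves their head."""

    def __init__(self):
        self.opacity = 1.0  # fully visible when the experience begins

    def update(self, head_rotation_radians, dt):
        # Head movement fades the sphere back into prominence...
        self.opacity += FADE_IN_PER_RADIAN * abs(head_rotation_radians)
        # ...while stillness lets it slowly fade toward invisibility.
        self.opacity -= FADE_OUT_PER_SECOND * dt
        self.opacity = max(0.0, min(1.0, self.opacity))
```

Because only the sphere is world-locked while the visualization stays centered in vision, head movement gets acknowledged without turning the abstraction into a navigable “space.”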
You’ll have to look closely at the video below to see it. It’s quite subtle outside of the head-mounted display.
There’s no getting around the fact that SoundSelf’s implementation of head tracking is in place exclusively to acknowledge user expectations. Head tracking does not directly enhance the experience as it does for most virtual worlds. I know the question on the tip of many of your tongues is “why then call it VR?” The simple answer is that while SoundSelf works without a VR peripheral like the Oculus Rift, it achieves its unique goals with tremendously greater force when it fills the visual field. To me, what makes the Oculus Rift an attractive device for SoundSelf is that it offers a much higher bandwidth passage from image to mind – a distinction that is mostly cosmetic for experiences structured around goals or story, but core to an experience that is structured around sense-perception directly.
I’ve observed two modes of playing SoundSelf. First there’s a playful mode: The player asks the system “what can you do, how do I play with you?” They dance with their voice, push SoundSelf around, explore its limits. They’re having *fun*. Players always begin in this space, but SoundSelf’s magic happens when they transition into a second mode that I’d describe as “surrender”: Their breathing slows down, their voice falls into repeating rhythms, and they stop thinking.
This is the “trance” I’m always talking about. SoundSelf’s interaction is designed to distract your “inner voice” for long enough that you temporarily fall out of the habit of listening to and identifying with it, thus leaving your sense of identity open to being hacked and expanded. However, not everyone makes it over the hump into the surrender phase of the experience, or they’ll surrender for a few minutes before the inner voice returns.
In Zen meditation, when the mind inevitably wanders, you actively rein it in again by deliberately drawing your focus back to your breath. Unfortunately, such a mechanism is not directly available to me as a designer without introducing inelegant symbols and instruction into the experience, which would change the nature of SoundSelf from play-partner to teacher, and a teacher is not what I’m interested in making.
So here’s the challenge: How can SoundSelf slowly seduce you into a deeper and deeper trance, but also catch you when you wander back into a playful frame of mind? I think it comes down to respecting the frame of mind the player is currently in – letting SoundSelf respond intuitively to your voice when you’re in the playful mode, but slowly and subtly leading you and moving with you once you’ve surrendered.
Ideally, SoundSelf would monitor the player’s brainwaves or heart-rate variability and use that data to shape the program. For better or worse, this technology isn’t commonly available in commercial peripherals. But SoundSelf does have indirect access to a powerful biometric: your breath.
Imitone (the pitch-detection algorithm SoundSelf runs on, which is the pride and joy of our programmer Evan Balster) is very sensitive to tonal sounds like your voice, but it’s not designed for atonal sounds like wind and breath. This is a feature, as it effectively ignores background noise. So while SoundSelf can’t know when you’re breathing or how deep and long your breaths are, it can make an educated guess based on the length of your tones and the space between your tones. Combine a two-minute rolling average of four elements…:
… and we get a pretty decent heuristic of how entranced you are, and thus how SoundSelf should behave.
It’s not perfect, and it’s quite sensitive to false positives (imagine if, in the middle of a period of long, low tones from the player, SoundSelf suddenly interpreted a distant bicycle bell as a short, high-pitched tone), but smearing the measurement out over about two minutes gives me a pretty decent high-latency measurement of where your head’s at, and what SoundSelf should do to gently nudge you deeper.
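For the technically curious, here’s a rough sketch of the shape of such a heuristic. The scoring function, signal names, and constants below are illustrative stand-ins, not our actual four elements or weights:

```python
# Illustrative sketch of a smoothed trance heuristic; the scoring and
# constants are stand-ins, not SoundSelf's actual four elements.
from collections import deque

WINDOW_SECONDS = 120.0  # the ~two-minute smear that dilutes false positives

class TranceEstimator:
    """Long tones separated by long gaps read as slow breathing,
    i.e. a deeper trance; the rolling average smooths out noise."""

    def __init__(self):
        self.samples = deque()  # (timestamp, score) pairs

    def record_tone(self, now, tone_length, gap_before_tone):
        # Score one sung tone; a stray short blip (that bicycle bell)
        # scores low but is diluted by two minutes of history.
        score = min(tone_length / 10.0, 1.0) * min(gap_before_tone / 5.0, 1.0)
        self.samples.append((now, score))

    def trance_level(self, now):
        # Drop samples older than the window, then average what remains.
        while self.samples and self.samples[0][0] < now - WINDOW_SECONDS:
            self.samples.popleft()
        if not self.samples:
            return 0.0
        return sum(score for _, score in self.samples) / len(self.samples)
```

A level near 1.0 tells SoundSelf to lead slowly and subtly; a level near 0.0 tells it to stay playful and responsive to your voice.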
The original motivation behind Deep Sea was a dirt-simple question: how do I maximize immersion? It was driven by pure curiosity! I started out knowing from my own experience that fear can short-cut the rational mind and touch players at a pre-cognitive level. But all the design decisions – blinding the player, playing back their breathing to obscure the critical information – were me blindly reaching into the darkness and holding onto what seemed to work. I’m very fortunate to have stumbled onto some ideas that worked incredibly well, but the great irony of Deep Sea’s development is that I didn’t know why they worked. It took about two years of watching people play Deep Sea for me to reverse-engineer my own game and figure out the why.
SoundSelf is built on those understandings. I’ve since come to see immersion as a function of trance. In other words, immersion is in the same family of experience as hypnosis, meditation, and Pentecostal possession. So while SoundSelf is a radically different game from Deep Sea, to me as a designer they are both knots in the same thread. Only now, instead of accidentally stumbling into a hypnotism design-space, I see that what I’m doing is literally hypnosis. This is tremendously freeing, because I don’t have to depend on crutches like the fear response any more, and I can use these hypnosis techniques to induce ecstasy instead.
Games that reject visuals are rare because most games are about handling data… I think that’s what a lot of people think a game is! Handling data and making information-based decisions is as much a part of the paradigm of this medium as words are to literature. And humans are visual creatures, which means that we can process a lot of data in images. Games like Deep Sea and SoundSelf are about getting away from data.
But in terms of accessibility, it’s often thought that Deep Sea is a game for the blind, which it emphatically is not. Deep Sea is a game about weakness and disempowerment, and you get that by losing your primary sense. The blind, with their hyper-acute sense of hearing, can “see” straight through the brain-trickery that makes Deep Sea frightening. I definitely think there’s a huge untapped market for games for the blind. If I were doing this stuff for money, I’d be making games for the blind.
I think until we get past this paradigm of games being about interpreting and managing data, controllers will still be based around the dexterous hands and fingers. There are a ton of biofeedback technologies that are mature in their development. The only reason these haven’t been integrated into controllers yet is that market leaders think players want more of the same – games about navigating data and making decisions. That’s not what players want, that’s just what the edges of this particular skybox look like. What players actually want are experiences that take them on a journey. Systems for navigating information – call those systems “games” if you like – are just a familiar tool for getting there.
What’s exciting though is that VR is already shattering that paradigm. If I were in the console business right now, I’d be looking for a way to get ahead of the curve by integrating heartbeat sensors, breath-tracking, and EEGs into peripherals. The next generation of what we call games will not be about using the body as a means of control. It will be defined by experiences that blur the lines between self and software. This leap is right around the corner.