The Babadook, Intervention Technology, and Designing With, Not At.

Warning: A few light spoilers for The Babadook, but I’ve tried really hard not to ruin it too much.

In Jennifer Kent’s remarkably terrifying film, The Babadook, a mother and son slowly retreat into their house, away from school and family, held hostage (psychologically, emotionally) by a monster that has leapt straight out of a children’s picture book. An analogy for grief and the way it consumes, the monster, the Babadook, lingers in the house, growing stronger with each denial, more fearsome with each rebuttal. Grief can turn a home into a dark, inhospitable place, left open for monsters to occupy every cupboard, every shadow, every dark and stormy night. As the small, compacted family are forced further inside their house, the outside world becomes less visible, because that world has all but forgotten about them. For many of the people around them, their grief is too much to deal with, and it’s this loss of control that is the most frightening part of the narrative.

As a film about how paranoia, grief, and desperation manifest, The Babadook is excellent. There is no unnecessary gore and there are no jump tactics (as many reviews have highlighted, with a collective sense of relief), but rather a slow unraveling that leads to a visceral, human climax, with monsters of various kinds overwhelming the house. This isn’t a film about the supernatural at all; it’s about people.

At the heart of The Babadook is a focus on the things we don’t talk about, or don’t see. Kent’s film is rather sympathetic to the social services, but not to the unhelpful schools that misread Samuel’s eccentric behavior as disobedience rather than brightness (a story familiar to many), further isolating the family from the world around them. Add to that a sister more concerned with how she’s perceived socially than with caring for her overworked, emotionally fraught sibling, and you’ve got a recipe for disaster. Aside from thinking about how grief can be the monster in the basement, The Babadook looks at the people who do fall through the cracks, into the basements of the world, never to be seen again. How we see these people, and how we find ways to reach them, is as important a question as it has ever been, and one that we will, potentially, want technology to help solve.

Recently there’s been a lot of controversy and criticism around the Samaritans’ app, Radar, and rightfully so: the app’s core function is to monitor the people in your Twitter timeline for key trigger words and phrases and, on identifying them, alert you that this person might be in trouble. The conversation surrounding this has raised necessary discussions about what intervention is, where technology should and shouldn’t be used, and a whole multitude of issues of consent, surveillance, and online communities. Radar is intrusive, and although it essentially enables peer-to-peer surveillance and gross misunderstanding, there’s a reason why Samaritans thought it should exist. They were trying to do a good thing, in a bad, misguided way. Stavvers wrote an excellent summation of why she wants it pulled, making the very specific, very important point that not everyone will have your best interests at heart: ‘not everyone is going to be operating from a position of good faith’. When you’re down, that’s the time your enemies kick the hardest, and by accidentally facilitating this, Radar hands power to those who know exactly how to exploit it.
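
To make that bluntness concrete, here’s a rough sketch, entirely my own guesswork and not Samaritans’ actual code, of what matching trigger phrases across a timeline might look like. The phrase list, data shape, and alert are hypothetical; the song lyric in the example is exactly the kind of thing a matcher like this can’t tell apart from a genuine cry for help.

```python
# Hypothetical sketch of timeline phrase-matching, in the spirit of Radar.
# Not the real app; the trigger phrases and alert behaviour are my own illustration.

TRIGGER_PHRASES = [
    "i can't cope",
    "i want to disappear",
    "nobody would miss me",
]

def scan_timeline(tweets):
    """Return tweets containing any trigger phrase (naive substring match, no context)."""
    flagged = []
    for tweet in tweets:
        text = tweet["text"].lower()
        if any(phrase in text for phrase in TRIGGER_PHRASES):
            flagged.append(tweet)
    return flagged

if __name__ == "__main__":
    timeline = [
        {"user": "friend_a", "text": "Listening to that song again: 'nobody would miss me'"},
        {"user": "friend_b", "text": "Off to the seaside for the weekend!"},
    ]
    for tweet in scan_timeline(timeline):
        # In Radar's model this alert goes to followers, whoever they happen to be,
        # with no way of knowing whether the match is a lyric, a joke, or real distress.
        print(f"Alert: @{tweet['user']} may be struggling: {tweet['text']}")
```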

The Samaritans, who do a spectacular job, saw technology as a way of stepping in before something bad happened, because sometimes it’s hard to tell the warning signs. As I mentioned earlier, in The Babadook, Amelia’s family are barely there, and when they are, they are too wrapped up in the apparent complexities of their own mediocre dilemmas to notice, or take action, when a cry is heard. So, rather clumsily, Radar is trying to respond to that need to intervene more quickly and more effectively, albeit with an added dose of emotional distance and obligation (see Joanne McNeil’s piece on Facebook adding a layer of obligation to our everyday interactions for an interesting perspective on this). Like many IoT devices, this interaction runs the risk of being yet another thing that we can ignore, or get bored of.

Radar, as a means of intervention, is a key example of techno-solutionism: of designing not with people but at them, without fully anticipating the consequences or problems that could arise. As Stavvers mentioned, it is almost perfectly designed for trolling. In Dan Lockton’s excellent essay ‘As We May Understand’, he stresses an important problem with IoT innovation: we seek to correct or change behavior, rather than working with people to learn what they actually want, designing with, not for:

‘People (‘the public’) are so often seen as targets to have behaviour change ‘done to them’, rather than being included in the design process. This means that the design ‘interventions’ developed end up being designed for a stereotyped, fictional model of the public rather than the nuanced reality.’

For devices that do intervene, or help, Dan suggests ‘helpful ghosts’: stone tapes that provide ‘ambient peer support’ through pre-recorded messages, triggered by a specific series of parameters thrown carefully into a customised recipe using something like If This Then That. On the surface this sounds vaguely like Radar, but there’s a hope in it: that it is tailored to the individual, by another individual, and isn’t invasive or awful. The Anti-Clippy (as Dan points out), if you will.
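
As a way of thinking that through, here’s a loose sketch of how I read the ‘helpful ghost’ idea. The condition (a few quiet days), the recorded message, and the class itself are all my own invention, not Dan’s; the point is that both the trigger and the response are agreed between people who know each other, rather than inferred by a platform.

```python
# Loose sketch of a 'helpful ghost' recipe as I read it: a friend records a message
# in advance, and it plays back only when a condition the two of them agreed on is met.
# Everything here (the quiet-days threshold, the message, the check) is hypothetical.

from datetime import datetime, timedelta

class HelpfulGhost:
    def __init__(self, recorded_message, quiet_days=3):
        # The message is recorded in advance by a friend, for this specific person.
        self.recorded_message = recorded_message
        self.quiet_threshold = timedelta(days=quiet_days)
        self.last_seen = datetime.now()

    def note_activity(self):
        """Called whenever the person does something ordinary (opens the door, makes tea)."""
        self.last_seen = datetime.now()

    def check(self):
        """If things have been quiet for longer than agreed, play the friend's message."""
        if datetime.now() - self.last_seen > self.quiet_threshold:
            print(f"Playing recorded message: '{self.recorded_message}'")

ghost = HelpfulGhost("It's Anna. Stick the kettle on and give me a ring when you fancy it.")
ghost.check()  # Nothing happens yet; it has only been a moment since the last activity.
```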

As we look for more ways to make our daily lives convenient through connected objects, we’re almost certainly going to see more in the way of intervention devices. They already exist: take Glow, for example (thanks to Meg Rothstein for alerting me to this), a pregnancy app that allows the pregnant person to add their partner and which, using the recorded data, sends the partner prompts to make the pregnancy more comfortable. ‘Your partner has recorded that they aren’t feeling too good today, why don’t you get them some flowers?’ What the app doesn’t know (because it doesn’t ask) is that your partner might hate flowers, or be allergic, and might just need a hug, or to be left alone. This layer of obligation prompts you to be kind in a way that will never feel genuine, because there’s not enough of your relationship there; it’s not personal, or made with you.

What does the future look like if everything becomes an obligation? Where can we look beyond, to where intervention could be helpful? One of my immediate thoughts was applying this to older people who might not have left the house in a while, where a gentle nudge to a nearby designated person could let them know to drop them a line. In The Babadook, this comes in the form of a friendly neighbor, who, although at first pushed away, is eventually let in, and in some ways becomes the subtle, ongoing hero of the story, beyond the close of the film. It’s certainly a level of obligation, but at least it’s not masquerading as something else (such as romance, with Glow, or suicide prevention, in the case of Radar), because we do need reminding sometimes. As always, comments welcome.

Notes on Technological Sound: Part 1, Future sound and designed objects.

This is the first in a little series of notes about technological sound that I’ll be writing as Coralie and I create our new work, Undercurrent, which will be at Brighton Digital Festival this year.

The idea behind Undercurrent came from thinking a lot about sonic consequence: what sound means, what it can reveal, and what we forget when we create, or contribute towards, a technological landscape. These are largely just me thinking out loud, so if you have any comments or criticisms, I’d really love to hear from you.

Our future worlds, as described for us by product videos and future vision demos, are often designed to be silent, non-aggressive, and in keeping with the idea that a non-intrusive environment is best: one that doesn’t make sounds other than those we intend, or want. Walking into our touchscreen kitchens, we are met with silence. The fridge-freezer does not hum, the ceiling light doesn’t buzz, and nothing is functioning in any way other than how it should. This points to the problem of the shiny, flatpack future, as Scott Smith mentions, where nothing is broken and nothing ever goes wrong.

We know this isn’t so. An air conditioning unit will let you know, with a friendly chime, that it is working, but on the other side of the wall, where the mechanics of the object meet the building, a low, low drone is heard. Perhaps that drone reaches such a level that it starts to have a visceral effect: as certain cases of sick building syndrome show, under the right circumstances equipment can emit frequencies below 20 Hz, infrasound, which can lead to nausea, feelings of unease, and stress. Often it’s not as significant as that; it’s rather that if you didn’t hear it, you’d notice. Maybe not immediately, but that kind of sound is usually an indicator that something isn’t quite working. These future vision videos, as Tobias Revell has already looked at in his Critical Design lecture, aren’t a real world at all, but still we design them, silent objects and all.

Often, in fact nearly always, future vision videos don’t have any object sound at all. Most of the time it’s drowned out by that awful aspirational music you so often hear on adverts, where everyone goes about their future-y business laughing silently into their devices. In Microsoft’s Productivity Future Vision (2011), the only object sounds you can hear are those of the near-silent electric car, the overhead tannoy, and the woman’s high heels on the pavement. You aren’t even allowed to hear the keyboard being typed on, the surfaces being touched, or a button being pressed. Everything is seen, but not heard.

A part of me thinks this is no more exciting than designers simply not knowing what their objects will sound like, because essentially, any unintended sonic properties of a designed object are treated as just a consequence, at worst an inconvenience. When we design things, we design them to work, and to work well; what they sound like is just part of the process. I find this interesting, because all of it contributes to the technological soundscape we experience on a daily, relentless basis. That soundscape, like the objects we see and use, has an effect on us, and not just by making us ill in the more extreme circumstances; it shapes our experience and understanding of the world. Sound is one of the ways we feel out our environment, a sense like any other, so when that experience is eradicated, we are left feeling the absence.

Now, I’m not proposing any big solutions, but I’m interested in thinking through a few things. Undercurrent was partly inspired by R. Murray Schafer’s The Soundscape, in which he asks (and yes, I’ve pulled this one out before): ‘Which sounds do we want to preserve, encourage, multiply?’ When we unleash a sound into the world, it becomes as important, and as embedded, a part of the technological experience as the way something looks. So what can we look at here?

What does designing for sound do to the process of design? I’d probably be inclined to say that we shouldn’t be prescriptive about this, but rather that there is scope to experiment, in order to understand the impact sound actually has upon our urban and domestic environments.

What does creating, or imagining, a quieter world mean? What place do new sounds have in our vision, or understanding, of our future worlds, let alone our future technologies?

These are just thoughts, incomplete and not fully formed, but I’m getting into the habit of working in public. I’ll write up my other scribblings on anomalous and paranormal sound, hearing the invisible, and destructive sound soon.