A Short History of Death and the Internet and the deaths we have to come – Transcript

This talk is a rough transcript of the talk I gave at The Death Forum, a series of events in Manchester that look at contemporary conversations around death, dying and grief. Thanks to Sarah Unwin for the invite, and to artist Ellie Harrison, whose beautiful work was also presented at this event. This is barely touching the surface of a lot of thinking about what we will need to start preparing for as part of end of life care. Thanks to Team Changeist for the brain fuel/feedback.

If it’s not obvious, I spend a lot of time in the future, in particular telling stories about possible futures to figure out what they mean. In this case, I’ll be talking about the future of death, and how technology might enter into it.

So, I wanted to start this talk with a small story of a possible, potential future, informed by some of the weak signals that are coming through the cracks, giving us signs of what might be to come. You may open the letter under your seat, and read it when you are ready.

You are sitting in a room, in your house. You are older than you are now, maybe a lot older, maybe not so much. You are nearing the end of your life, and you know that you are; you have been told well in advance by a series of health scans and monitors that started their long, slow ritual months before today.

The fridge that you reluctantly bought on the advice of your doctor, that was once so good at stopping your midnight feasts, stops calorie counting and gives you what you would like, not what you need. It stops telling your doctor that you have had a lot of sugar today, but instead records onto its log, which will be read by your family after your death, that you enjoyed it, immensely.

The things around you start to burn out, break, and let themselves be damaged, because they are not needed anymore by you. They have reached the end of their intended use, and are not to be used by anyone, anymore. Their data is locked up, far away from where you can see it, eventually used by other toasters, and kettles, and coffee machines, so that they know when they should plan to break down for others in the future.

Somewhere, your children are arguing about who will manage your data estate, not knowing how much you have scheduled for permanent deletion, and who will have access to what. Their letter will be sent once the right conditions are met. Your son, whom you have not spoken to for twenty years, will not have access to any of your data, only some from the years before the argument. Your daughter will have all that you have left behind.

At the moment of death, the camera your well-meaning relatives positioned to check up on you flicks from red to amber, indicating that it has stopped broadcasting to anyone but your GP. You were always aware your death would be hidden from your family and friends, to save them the misery, and that the footage would be wiped whether it was sudden or, as in this case, quiet and slow.

The house goes to sleep; the heating, knowing that you are no longer active, starts to cool. As it receives the slowing data from your heart rate monitor, strapped fastidiously to your wrist, it turns the air conditioning on full blast. The house, given express wishes by you, only calls the ambulance once you are dead.

Slowly, your data is disappearing, day by day as the devices that collect and store it notice the inactivity, or just predict when it should stop. Only a few stores remain, your finances and your health scans that you promised to keep for your insurers. Finally, left only to a few, your photographs and meaningful locations collected over the course of your life are given to the ones you love.

As someone who looks towards the future, I can’t help but think about the things we haven’t yet realised will be a problem, or the things we are developing that might become important in our lives, and therefore important in our deaths. I like to think about the future worlds we might live in to see where these problems might lie, because, as futurist and science fiction author Madeline Ashby said in her FutureEverything 2016 talk, ‘your utopia might be someone else’s dystopia’. The things that are right for you might be devastating for somebody else.

I tell this particular story as a way for us to start thinking about death and our digital lives in a slightly different way, because the internet isn’t just webpages and social media, it’s now becoming things.

You’ve probably heard a little, or a lot, about the Internet of Things, but for those who haven’t, this is the act of connecting things – toasters, fridges, sofas – to the internet, so that you can have a supposedly ‘better’ relationship with them and make them more effective and useful for you. I am, to be quite honest, an Internet of Things sceptic. There are places the internet should perhaps not be, but perhaps it’s too late for that now, particularly when you consider that on the list of things is a Bluetooth-enabled tampon. The internet is making its way into our most private and intimate spaces, with no way of knowing who might be watching.

But first, let’s look at death, and the internet, as it is now.

As you’ve probably heard, a recent study estimated that by 2098, ‘dead’ Facebook profiles – those of people who have passed away – will outnumber those operated by the still living.

We’re perhaps one of the first generations to have some sort of awareness of an afterlife. Not in the clouds-and-pearly-gates way, but in the fact that when we die, or when the people that we love die, we, and they, will go on to live, indirectly, through the digital traces left behind.

So what does it mean to know that you are leaving a version of yourself to haunt the networks you once lived with? How do we deal with this very specific kind of grief?

There’s the obvious, sad presence of the dead on social media sites, which no doubt many of you will have encountered. Being asked to wish a happy birthday to a friend who died four years ago, or to your mother, hurts in a way that we aren’t used to.

The thing that has always freaked me out isn’t the reminder of birthdays, which is understandably jarring, but the smaller micro-interactions: when you’re recommended something, a band perhaps, because they liked it too. It’s a strange state of affairs when a dead relative can accidentally market a specific type of soft drink to you. Suddenly, lots of hidden systems come into the foreground when the systems that you use day to day break, and create these ghosts. Take the case of Rehtaeh Parsons, a young woman whose photograph was used to advertise dating on Facebook months after her widely reported suicide, her friends and family baffled by the algorithm that Facebook’s third-party advertisers had used to make this happen.

Facebook have made some concerted efforts to deal with this: for example, a memorial page can be set up for the dead by a dedicated ‘legacy contact’, once an obituary or newspaper article has been shown (a process open to a unique kind of abuse). But often you don’t know to do that, because these processes aren’t always clear, which leaves you with this person, locked in time, forever blindly interacting with you.

Upon request, Twitter can close accounts and provide archives of public Tweets for deceased users; family members are required to submit a formal request to Twitter’s Trust and Safety department. Gmail and Hotmail also require death certificates. Yahoo, however, will not allow you access at all once a person has died, unless it’s by court order.

You may have heard of the appearance of services that supposedly look to help you grieve by prolonging the memory of the dead, or by becoming immortal yourself, preserved in the great, undying body of the internet. So I thought I’d take you through a couple.

DeadSocial is a service which will write and schedule messages as part of your ‘digital legacy’ after you’ve gone, contacting your friends from beyond the grave. From their website:

‘if you died of heart disease, you might schedule messages every six months reminding friends to get check-ups. You may leave specific messages for loved ones’ birthdays or for a spouse on your anniversary.’

Another site, Eterni.me, offers us the chance to be ‘virtually immortal’, feeding off our science fiction hopes of one day living forever, in whatever form, and in whatever body.

From their website:

‘What if You could live on forever as a digital avatar? And people in the future could actually interact with your memories, stories and ideas, almost as if they were talking to you?’

They want to preserve for eternity the memories, ideas, creations and stories of billions of people. Again, from their website: ‘Think of it like a library that has people instead of books, or an interactive history of the current and future generations.’ Something they call ‘An invaluable treasure for humanity.’

It all sounds well and good from an archaeological point of view, but I worry when there is a paid, private service offering this. Who are they to be the preservers, and how do we know what they will do with it? Whose best interests are actually at heart? And what happens if they sunset their service? You might want to be immortal, but a company’s business model rarely is.

Eterni.me might seem strangely familiar to many of you, as it’s similar to the Black Mirror episode Be Right Back, in which a dead husband is brought back to life by a service much like it, suggested to his grieving wife by a well-meaning friend. Once this recently resurrected version of her husband is out of the box, in the most literal sense, his wife is rightfully freaked out, because no matter how much data she inputs, he will never actually be her husband. In the end, she locks him in the attic, occasionally visiting, perhaps because she cannot bear to see him die again, perhaps because she doesn’t want to forget him. But do we really want that, and who is actually benefitting from the creation of these services? Because if we are to be immortal, who are we being immortal for?

One of the main projects I work on is Haunted Machines, a research project I started with artist Tobias Revell last year which looks at the proliferation of magical narratives and analogies in the way we talk about technology. We find these metaphors helpful because they often tell us much more about our own anxieties, hopes and worries about change and uncertainty, and in this case, technology.

The reason I mention my work around magic is to draw your attention to a branch of it, spiritualism, because in some ways this is what these services are. Back in the late 1800s, the practice of spiritualism became incredibly popular, particularly with the middle classes. It ranged from parlour games such as the ouija board, which was originally sold as a board game, to the much more serious and dramatic. Born of the Enlightenment, when god and the afterlife were being challenged by modern science, people were looking for something beyond themselves. They wanted to talk to the dead. A medium, a channeller of the dead, would be hired to bring the spirits into the room. It was a horribly exploitative practice, as many of these performances preyed on the grief and suffering of bereaved people, asking a fee for this peculiar and unique service. Chairs would be thrown, lights would switch on and off, and the medium would mutter, whisper, or scream messages to their desperate audience.

In the early 1920s, Scientific American announced a prize for any medium who could demonstrate telekinetic ability under scientific controls. Science and spirituality have never been far apart, ever since the days when alchemy was classed as a natural science.

On the judging committee was one of my heroes, Harry Houdini, the famous illusionist and escape artist. Over the course of the prize, he debunked hundreds of faux-mediums, publishing accounts for the public explaining exactly how they’d done it.

So what does this have to do with technology, and subsequently, the internet? The reason I find these kinds of stories so fascinating is their weird, innovative uses of technology to ‘summon’ the dead. One of Houdini’s biggest rivals in this competition was Mina ‘Margery’ Crandon, otherwise known as The Boston Medium, who used an elaborate rig of bells and whistles to feign the appearance of her dead brother. Once Houdini had worked this out, she started creating increasingly elaborate contraptions to fool him, ending in a box in which her hands, legs and neck were visibly shackled while she still managed to bring about her spirits. Houdini then recreated the box part for part and produced exactly the same effects, rendering Mina’s abilities a very elaborate, though rather impressive, lie.

This exploitation of the bereaved through technology is not all that different to sites like Eterni.me, or that eerie service from Black Mirror I mentioned earlier. These developers and designers are, to me, no different from the Victorian spiritualists, in that they sell a false set of hopes to the recently bereaved, who just want to hang on a little longer. There is so much of us on the internet that it is easy to bring back a version of a life, of a person, that will operate as if they had never left, but often only for the right price.

One of the areas of technology I worry about most is a group I like to call ‘Means Well’ technology. These are devices, services and products designed to help us with sensitive things – anything from suicide, to sexually transmitted diseases, to mental health – but which so often end up failing, with devastating results, because they do not account for the very human, very unpredictable, and very messy ways that we interact with them. The key example is the colour-changing condom, invented by well-meaning teenagers for a science prize, which changes colour when it detects an STD. What it doesn’t tell you is how to prepare for this event, or how to react if it happens. It’s never about the technology, but about the conversations and behaviours around the things we think might be solved by technology. We endlessly create new things to make us live better, and in this case to make death easier to deal with, without properly thinking about how these technologies are changing our perceptions of the things we are trying to place a big, technological sticking plaster over. In the end, other people’s technology happens to us. How do we have conversations around death when there’s all this technology in the way?

Which leads me neatly back to our story at the very beginning, a very good place to start with these sorts of things. In this story, I talked about a house connected to the internet in many ways, with the person inside subject to all manner of data collection, from calories, to movement, to body heat. This story is in many ways both a utopia and a dystopia, with ubiquitous, constant data collection balanced out by the subject’s ability to control parts of it. There are still squabbles over which family member gets what, and there is still a small scrap of dignity in a camera with a crucial feature designed in by someone who thought that perhaps your family wouldn’t want to watch you die, but medical science would. In reality, the future does not look all that different to the past.

There are devices that slowly shut off, and then let your family know – something I modelled very loosely on Networked Mortality, the brilliant work of Willow Brugh, who studies and imagines the structures we will need to deal with your corpus and corpse, and distribute them to larger society at the time of your death. It makes the process less scary and less overwhelming through a very careful, discreet and clever use of technology, using small triggers and a network of trusted people, rather than technology as saviour alone. It’s fantastic, and truly one of the best examples of tech being used with actual human behaviour and emotions in mind.

In this story, data is deleted automatically once it has done its service, an idea I love from social media researcher Nathan Jurgenson, who has argued the case for ephemeral data: data which has a shelf life and expires once it is no longer needed, or perhaps once it has been seen. Meanwhile, more and more steps towards permanence are being made by designers and developers, from the cloud promising to store everything and never run out of space, to efforts by certain parts of the web community for a ‘permanent internet’ in which nothing can be deleted, and therefore nothing can ever ‘die’. What if we just let data have a life too, and how long should that be?
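
The ephemeral data idea can be made concrete with a toy sketch. This is my own illustration, not any real product’s API: a store where every record carries a shelf life, and is simply gone once it expires – no archive, no undelete.

```python
import time

class ExpiringStore:
    """A toy store where every record has a shelf life (TTL in seconds).

    A hypothetical sketch of 'ephemeral data': once a record's time is up,
    it is deleted on sight -- there is no archive and no undelete.
    """

    def __init__(self):
        self._records = {}  # key -> (value, expiry timestamp)

    def put(self, key, value, ttl_seconds):
        self._records[key] = (value, time.time() + ttl_seconds)

    def get(self, key):
        entry = self._records.get(key)
        if entry is None:
            return None
        value, expires_at = entry
        if time.time() >= expires_at:
            del self._records[key]  # expired: delete, don't archive
            return None
        return value

store = ExpiringStore()
store.put("heart_rate_log", [72, 74, 71], ttl_seconds=0.1)
print(store.get("heart_rate_log"))  # -> [72, 74, 71]
time.sleep(0.2)
print(store.get("heart_rate_log"))  # -> None, the record has expired

```

The point of the sketch is where the decision lives: expiry is the default behaviour of the store itself, not a cleanup job someone has to remember to run.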

I tell these stories of future possibilities because they are a way for us to work out the questions we really want to ask of the future, and to comfortably, or uncomfortably, rehearse some of the bits we aren’t sure of yet. At Changeist, we do this with designers, engineers, product managers, artists, curators – all sorts of people – to help them see a little differently, often using objects that tell stories, like the letters I asked you to open at the beginning, which, although mundane and far from the realms of science fiction, help you embed the story in the real, so you can start to imagine what it would be like to live in that reality.

We’ll still probably have letters in 50 years, no matter what techno-utopians might say. Many of these stories may never come about, but it’s a good way to push a subject to its limit, to better look back and see where things might need to be rethought.

There’s a trend in current innovation and design to put things out now and deal with the consequences later, but I am telling you, an app update is not enough, not to a grieving family. If we, and the communities currently putting out technology, told more stories, and imagined more possible, potential, probable futures, we might learn more about ourselves and the way we will react when things go wrong, so that you can better understand where you can step in and make a change, and be more ethical about the decisions you make.

I wanted to end on a quote by Anthony Burgess from A Clockwork Orange, arguably one of literature’s greatest dystopias, which I think best sums up the imagining of the future I’m talking about: something that forever causes you to think a little differently about the way it unfolds, once you have conjured it into being, even for a moment.

“We can destroy what we have written, but we cannot unwrite it.”

Notes (Rants) from Berlin: On Art and Engineering

The bauhausarchiv at dusk, which made me deliriously happy

Last week I got to hang out for a week in Berlin with a bunch of terribly smart people for the media arts and technology festival transmediale. I think it’s safe to say, as anyone who was near me or heard my stories afterwards will confirm, that I had a lot of thoughts.

Aside from the endless, ceaseless discussions about algorithms, which I found both useful as a researcher who looks closely at them, and frustrating as someone who looks closely at them, we had one or two really important conversations. However, I’m going to quickly note down my thoughts on this whole algorithms malarkey first. These are very much notes scribbled in panels, on steps, on the S-Bahn and on the plane, and aren’t fully formed by any means.

One of the realisations I saw slip into view was just how misused the word ‘algorithm’ is. It hung in the air like dust, and was pulled into almost every conversation as a catch-all explainer for why computational systems were messing with us.

On the opening night, one panellist made magic of the word, reinforcing the idea that we don’t know what algorithms do, that we can’t control them, and that they go on to invent themselves like an ever-replicating organism. This, I’m afraid, is a literal fairytale (oh, wouldn’t it be easier if it were? I’m kidding).

I realised that half the time, this is because the very use of the word ‘algorithm’, and the weight of supposed meaning we place behind it by throwing it into these contexts, removes the humans that created them entirely. When we talk about algorithms, we never talk about the person; we talk about the systems they operate on, the systems they facilitate, and then the eventual consequences when they supposedly ‘malfunction’. Throwing this out onto Twitter, my friend the talented creative technologist Dan Williams succinctly reminded me that we should always be replacing the word ‘algorithm’ with ‘a set of instructions written by someone’. They are made, and written, by human beings, just like all technology is (a general statement, I know, but it all began with us). By removing that element when talking about them, we cease to have any understanding of the culture and social systems behind them, which are arguably the reason algorithms are having such a huge impact on our lives. If we’re going to have constructive conversations about the cultural and societal impact of algorithmically mediated culture, we have to always remember that there are humans in it, and not just at the other end of the black box feeling the vibrations. As Matthew Plummer-Fernandez pointed out in a panel on Data Doubles this year, confronting complex computational systems demands a socio-technical, not a purely technical, solution.

On a related point, and leading on to a discussion I had at transmediale: while I was away, I was reading Illah Reza Nourbakhsh’s Robot Futures (which is excellent), and this came up in his discussion of robotics, drone warfare and accountability:

‘In such a scenario [as drone piloting], accountability for the consequences extends among an operational team far removed from the consequences of their decisions, very limited in their understandings of the perceptual limitations of a complex robot, with a command hierarchy similarly uneducated about the perceptual and control failure modes of the technology they have embraced.’ p.102

Here Nourbakhsh briefly explodes the steps by which individual parts of the system move away from each other and from accountability for the consequences. This can be applied to the way we see computational systems and culture, where there is a supposedly clear distinction between those who make and operate computational technologies and those who feel the effects of their decisions. Both seem to misunderstand each other, often to devastating effect, which is where we picked up at the post-Unmonastery discussion in Berlin.

One of the questions I asked at transmediale during a great Unmonastery session – after countless audience members in the festival had asked ‘Is it because you’re an artist that you’re afforded the licence to explore/prod/provoke technologies?’ – was where engineers are subverting their systems themselves, and why we can’t see them doing so. Where are they testing the limits and consequences of their work? Bound up in corporate structures, the only ‘play’ or expression seen within a certain sector of engineering (to generalise: those that make the systems we feel the brunt of, the Silicon Valley set and beyond) is the play that ultimately lands said company with a product or idea. Testing the system to its limit and making parodic or speculative products, just as we see in the art sphere, are ways to stoke the imagination of the worker who will go on to create better stuff, not to critically analyse the consequences of their actions. Play, in this sense, requires very specialised infrastructure, both physically and institutionally. I’m sure you’ve all heard about the ball pits, and Wendy houses, and huge slides, and endless access to new and exciting technology that reside in these mini-cities in California and elsewhere. I’m careful not to put the conventionally defined ‘Hacker’ in this conversation because, as is quite clear, they are a different classification of engineer to those occupying ‘Silicon’ spaces.

Talking to engineer James Lewis at transmediale about this, he spoke of the ‘vacuum’ that exists around engineering: there is a culture there, but no way for engineers to access or make sense of the social context of their work, because of the problems in translation. Artists often end up using engineers as tools to realise projects, and engineers sometimes don’t understand the necessity of projects that don’t end in something useable. I’m careful not to say ‘not all artists/engineers’, but there’s still a reason why we aren’t understanding each other. I can’t remember the name of the curator or artist in our discussion who defined the difference between art (for its own end) and design/engineering (to be instrumentalised and industrialised), but it was useful in getting down to the foundations of this problem.

As an aside (albeit a very interesting one that deserves more than an aside), James also explained a model he’s been working on that looks at a potential triad of Hacker, Social Activist and Artist. Hackers (I’m aware this term is problematic, don’t worry) test and subvert systems to gain something, to break a system in order to react against it, and have the capacity to use these skills for social good, which is where the non-technical Social Activist comes in. However, there are problems in communicating the social impact, implications and possibilities, which is where the Artist would come in. It would be interesting to see where this conversation goes, and what its possibilities are.

This is well over 1,000 words of legible rant that I’ve had over the weekend, and part of this is me working all of this stuff out. If you have comments, find me on Twitter, or email me directly.

If you’ve read to the end, well done. I’ve just found out I’m talking at Theorizing The Web this year so I’ll be in New York in April. If you’d like me to talk, or run a workshop, or just hang out and have coffee, get in touch.

The Babadook, Intervention Technology, and Designing With, Not At.

Warning: A few light spoilers for The Babadook, but I’ve tried really hard not to ruin it too much.

In Jennifer Kent’s remarkably terrifying film, The Babadook, a mother and son slowly retreat into their house, away from school and family, held hostage (psychologically, emotionally) by a monster that has leapt straight out of a children’s picture book. An analogy for grief and its all-consuming nature, the monster, the Babadook, lingers in the house, growing stronger with each denial, more fearsome with each rebuttal. Grief can turn a home into a dark, inhospitable place, left open for monsters to occupy every cupboard, every shadow, every dark and stormy night. As the small, compacted family are forced inside their house, the outside world becomes less visible, and that world has almost certainly forgotten about them. For many, their grief is too much to deal with, and it’s this loss of control that is the most frightening part of the narrative.

As a film about how paranoia, grief and desperation manifest, The Babadook is excellent: there is no unnecessary gore and there are no jump scares (as many reviews have highlighted, with a collective sense of relief), but rather a slow unravelling that ultimately leads to a rather visceral, human climax, with a monster, of various kinds, overwhelming the house. This isn’t a film about the supernatural at all; it’s about people.

At the heart of The Babadook is a focus on the things we don’t talk about, or don’t see. Kent’s film is rather sympathetic to the social services, but not to the unhelpful schools that misread Samuel’s eccentric behaviour as disobedience rather than brightness (a story familiar to many), causing the family to be further isolated from the world around them. Add to that a sister more concerned with how she’s perceived socially than with caring for her overworked, emotionally fraught sibling, and you’ve got a recipe for disaster. Aside from thinking about how grief can be the monster in the basement, The Babadook looks at the people who do fall through the cracks, into the basements of the world, never to be seen again. How we see these people, and how we find ways to reach them, is as important a question as it’s ever been, and one that we will eventually, potentially, want to use technology to solve.

Recently there’s been a lot of controversy and criticism around the Samaritans’ app, Radar, and rightfully so, as the app’s core function is to monitor the people in your Twitter timeline for key trigger words and phrases and, on identifying them, alert you that this person might be in trouble. The conversation surrounding this has raised necessary discussions about what intervention is, where technology should and shouldn’t be used, and a whole multitude of issues of consent, surveillance, and online communities. Radar is intrusive, and essentially enables peer-to-peer surveillance and gross misunderstanding, but there’s a reason why Samaritans thought it should exist: they were trying to do a good thing, in a bad, misguided way. Stavvers wrote an excellent summation of why she wants it pulled, for the very specific, very important reason that not everyone will have your best interests at heart: ‘not everyone is going to be operating from a position of good faith’. When you’re down, that’s the time for your enemies to kick the hardest, and by accidentally facilitating this, Radar hands the power to those who know exactly how to exploit it.
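
To see how blunt the underlying mechanism is, here is a minimal sketch of the kind of trigger-phrase matching an app like Radar relies on. The phrase list and function are my own invention, not Samaritans’ actual code, but the failure mode is real: flat string matching can’t tell a genuine cry for help from a figure of speech, and the alert goes to every follower, well-meaning or not.

```python
# Hypothetical sketch of trigger-phrase matching; NOT Samaritans' actual code.
TRIGGER_PHRASES = ["want to die", "can't go on", "hate myself"]

def flags_tweet(text):
    """Return the trigger phrases found in a tweet, if any."""
    lowered = text.lower()
    return [phrase for phrase in TRIGGER_PHRASES if phrase in lowered]

# A genuine cry for help and a throwaway joke look identical to the matcher:
print(flags_tweet("I honestly can't go on like this"))  # -> ["can't go on"]
print(flags_tweet("Monday again, I want to die lol"))   # -> ['want to die']
```

Both tweets trigger an alert, and the matcher has no idea which, if either, is serious; that judgment is pushed onto whoever happens to be watching.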

The Samaritans, who do a spectacular job, saw technology as a way of stepping in before something bad happened, because sometimes it’s hard to tell the warning signs. As I mentioned earlier, in The Babadook, Amelia’s family are barely there, and when they are, they are too wrapped up in the apparent complexities of their own mediocre dilemmas to even notice, or take action, when a cry is heard. So, rather clumsily, Radar is trying to respond to that need to intervene more quickly and more effectively, albeit with an added dose of emotional distance and obligation (see Joanne McNeil’s piece on Facebook adding a layer of obligation to our everyday interactions for an interesting perspective on this). Like many IoT devices, this interaction runs the risk of being yet another thing that we can ignore, or get bored of.

Radar, as a means of intervention, is a key example of techno-solutionism: designing not with people but at them, without fully anticipating the consequences or problems that could arise; as Stavvers mentioned, it is almost perfectly designed for trolling. In Dan Lockton’s excellent essay ‘As We May Understand’, he stresses an important problem with IoT innovation: that we seek to correct or change behaviour, rather than working with people to find out what they actually want – designing with, not for:

‘People (‘the public’) are so often seen as targets to have behaviour change ‘done to them’, rather than being included in the design process. This means that the design ‘interventions’ developed end up being designed for a stereotyped, fictional model of the public rather than the nuanced reality.’

For devices that do intervene, or help, Dan suggests ‘helpful ghosts’, stone tapes that provide ‘ambient peer support’ through pre-recorded messages, triggered by a specific series of parameters, thrown carefully into a customised recipe using something like If This Then That. On the surface, this sounds vaguely like Radar, but there’s a hope in this, that it is tailored to the individual, by another individual, and isn’t invasive or awful. The Anti-Clippy (which Dan points out), if you will.
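To make the contrast with Radar concrete, here’s a toy sketch of the ‘helpful ghosts’ pattern. The sensor names, thresholds, and messages are all invented, and a real version would live in something like If This Then That rather than a script, but the shape is a set of personal, hand-written condition-and-message pairs:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Ghost:
    condition: Callable[[dict], bool]  # when should the message play?
    message: str                       # pre-recorded by a friend, not a brand

def run_ghosts(ghosts: list[Ghost], readings: dict) -> list[str]:
    """Play (here, return) every message whose condition matches the readings."""
    return [g.message for g in ghosts if g.condition(readings)]

# Invented sensors and thresholds, tailored by someone who knows you.
ghosts = [
    Ghost(lambda r: r.get("days_since_front_door_opened", 0) > 3,
          "It's been a while since you went out. Fancy a walk?"),
    Ghost(lambda r: r.get("kettle_boils_today", 0) > 8,
          "That's a lot of tea. Put your feet up for a bit."),
]

print(run_ghosts(ghosts, {"days_since_front_door_opened": 4}))
```

The difference from Radar is in who writes the rules: a named individual recording for another individual, rather than a generic intervention broadcast at strangers.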

As we look for more ways to make our daily lives more convenient through connected objects, we’re almost certainly going to see more in the way of intervention devices. I mean, they already exist: take Glow, for example (thanks to Meg Rothstein for alerting me to this), a pregnancy app that allows the pregnant person to add their partner to the app and, using the recorded data, sends the partner prompts to make the pregnancy more comfortable. ‘Your partner has recorded that they aren’t feeling too good today, why don’t you get them some flowers?’ What the app doesn’t know (because it doesn’t ask) is that your partner might hate flowers, or be allergic, and might just need a hug, or to be left alone. This layer of obligation prompts you to be kind in a way that will never feel genuine, because there’s not enough of your relationship there; it’s not personal, or made with you.

What does the future look like here if everything becomes an obligation? Where can we look beyond, to where intervention could be helpful? One of my immediate thoughts was applying this to older people who might not have left the house in a while, where a gentle nudge to a nearby designated person can let them know to drop them a line. In The Babadook, this comes in the form of a friendly neighbour who, although at first pushed away, is eventually let in, and in some ways becomes the subtle, ongoing hero of the story, beyond the close of the film. It’s certainly a level of obligation, but in some way it’s not masquerading as something else (such as romance, with Glow, or suicide prevention, in the case of Radar), because we do need reminding sometimes. As always, comments welcome.

Glitch = Ghost: Poking at Paranormal Technology.

A screenshot of Digital Dowsing’s SLS Camera System, using Kinect.

It’s Hallowe’en and I haven’t posted in a while, so I thought I’d just type up all of the weird knowledge on a particularly strange collection of technology that I’ve been holding in my head for a while. It’s not neat, or particularly pretty, but it’s something that I’m going to unceremoniously call a brain-dump, because that’s almost certainly what it is.

We’ve already seen a lot of talk lately around haunted machines and homes, the ghosts that are summoned from the network, and as Tobias Revell has mentioned in his talk at this year’s Web Directions conference, the point where ‘any sufficiently advanced hacking is indistinguishable from a haunting’. So, to turn slightly away from that, I wanted to look at something a little less serious, where devices are invented and hacked to reach slightly beyond our measures and limits of perception, and look briefly at where, and why, that innovation happens.

For the past year or so I’ve been slightly obsessed with television shows about Paranormal Investigation (Ghost Hunters, Ghost Adventures, even the worst incarnation, Most Haunted), largely because of the array of technological devices that are constantly, conveyor-belt fashion, pulled out of the investigators’ bags, all of which are assumed to provide evidence of contact from The Great Beyond. Using technology that is either appropriated, or made by self-professed inventors, these objects hold enormous power in the community as the science behind the spectre, with various communities and R&D units popping up in the urgency to collect better, more substantial evidence of their belief.

The first of these technologies, which I’ll quickly mention, is not so much an adaptation of the device itself but rather a specific interpretation of the recordings the device picks up: communication in the static, otherwise known as Electronic Voice Phenomena. Audio is recorded using a handheld dictaphone, the gain is driven up and noise cut out, and voices are found in the ether. This is essentially an exercise in the effects of pattern recognition (pareidolia) and apophenia (making connections and finding meaning in random stimuli or messages), and the results are nearly always presented with subtitles, subjectively translated by the investigator, so you don’t have room to make a decision on what you’ve just heard. Investigators often say that learning to recognise EVPs is like learning a new language, the honing of their ears a skill that takes time, and commitment, to develop. What the noises actually are is eventually uncovered through scrutiny, and tells us more about our need to make connections, and to find hope and meaning in an existence beyond our own perceptible environment. As psychologist James Alcock has written, EVPs are essentially “the products of hope and expectation; the claims wither away under the light of scientific scrutiny.”
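The processing itself is almost embarrassingly simple. A rough sketch of the two steps follows; the gain and threshold values are arbitrary here, and investigators of course use audio editors rather than code:

```python
def noise_gate(samples: list[float], threshold: float) -> list[float]:
    """Zero out anything quieter than the threshold."""
    return [s if abs(s) >= threshold else 0.0 for s in samples]

def boost_gain(samples: list[float], gain: float) -> list[float]:
    """Amplify every sample, clipping to the [-1.0, 1.0] range."""
    return [max(-1.0, min(1.0, s * gain)) for s in samples]

# Faint static: mostly sub-threshold hiss, with a couple of louder blips.
static = [0.01, -0.02, 0.15, 0.01, -0.12, 0.02]
processed = boost_gain(noise_gate(static, threshold=0.1), gain=5.0)
print(processed)  # only the blips survive, now five times louder
```

Whatever random spikes survive the gate come out amplified, and the listener, primed by subtitles, hears words.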

There’s a lot written on recorded EVPs, so I won’t go much further into it, but will have a quick look at the other devices that have floated to the surface from the hours I’ve spent watching television shows and online videos. The descriptions of these are largely taken from GhostStop, one of the biggest online distributors of paranormal devices in the world (‘designed and built by investigators’), and from the inventors’ websites.

Ovilus X: Conceived by Bill Chappell of Digital Dowsing, the Ovilus device converts environmental readings into words and phonetic responses to questions asked by an investigator. Theories suggest that spirits and other paranormal entities may be able to alter the environment using such resources as are available to them, manipulating electromagnetic frequencies and temperature. The Ovilus uses these readings to choose a response from a preset database of over 2,000 words. Essentially, an intelligent entity will be able to alter the environment in such a way that forces the Ovilus to “speak” an appropriate, relevant response. Video of Ovilus X here.
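Stripped of the mystery, the mechanism is a deterministic mapping from noisy sensor readings to an index into a fixed wordlist. Digital Dowsing has never published the Ovilus’s actual formula, so the word bank and arithmetic below are entirely made up, but something of this shape guarantees the device always has an ‘answer’:

```python
# Invented word bank and mixing formula -- for illustration only.
WORD_BANK = ["cold", "leave", "mother", "below", "help"]  # the real device claims 2,000+

def ovilus_speak(emf_milligauss: float, temp_celsius: float) -> str:
    """Deterministically pick a 'response' from fluctuating sensor readings."""
    # Any drift in EMF or temperature shifts the index, so the device
    # always 'speaks'; the relevance of the word is supplied by the listener.
    index = int(emf_milligauss * 100 + temp_celsius * 10) % len(WORD_BANK)
    return WORD_BANK[index]

print(ovilus_speak(emf_milligauss=0.32, temp_celsius=18.5))
```

Whatever word comes out, an investigator asking ‘who is here with us?’ will find a way to make it relevant.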

SB7 Spirit Box: The B-PSB7 Spirit Box is a tool for attempting communication with alleged paranormal entities. It uses radio frequency sweeps (AM/FM) to generate white noise which, theories suggest, gives some entities the energy they need to be heard. When this occurs you will sometimes hear voices or sounds coming through the static in an attempt to communicate. Video example from an investigator here.

Mel Meter: Created by Gary Galka, and named after his deceased daughter, the Mel Meter is an all-in-one paranormal instrument that detects EMF and ambient temperature, with a helpful attached red flashlight and an EMF-radiating antenna. In addition to detecting AC/DC EMF and temperature changes in the environment, a Mel Meter also uses a mini telescopic antenna to radiate its own independent magnetic field around the instrument. This EM field can be easily influenced by materials and objects that conduct electricity. Video introduction by inventor Gary Galka, here (beware, a bit vomit inducing).

SLS ‘Structured Light Sensor’ Camera System: Another Digital Dowsing invention, currently not for sale anywhere, which uses Kinect’s infrared capabilities alongside temperature sensors to interpret environments and pick up figures that aren’t visually perceivable, turning them into stick figures that we can see. Video here of a figure picked up by the device on Travel Channel’s Ghost Adventures (you only need to watch the first five minutes).

This is a small sample of the range of products available for paranormal investigation, and all of them survive in a universe which only listens to, and applies, certain rules. To listen to outside evidence, to hear that EVPs are psychological rather than paranormal phenomena, to know that so often the voices are nothing more than a recorder recording itself, takes on water that will eventually sink the ship that the community works so hard to keep afloat. To be disproven disbands the community, with all of its supporting infrastructure. A friend of mine once mentioned this was a Jungian concept, this adamant ignorance, so if anyone has the reference, I’d be much obliged. There’s a level of debunking within the community, but it happens in a very controlled, very isolated way.

Debunking happens at an extremely narrow focus, with experts and consultants brought in from within the universe this particular science exists within. Of course, this is nothing new, as most fundamentalist religion operates in this way, as do other systems, but this all happens under a particularly tight, and recognisable, set of investigative, pseudo-scientific criteria. Investigators will have a control, a set of statistical boundaries, procedures to replicate or reverse engineer a result, and a set code of conduct in order to stop contamination of sound from the participants. Some even have an ethical framework prior to, and after, investigation, which requires debriefing and offering support to the owners, or occupiers, of the supposedly haunted space (Ghost Hunters is the only televised example of this that I’ve come across). There are academic bodies, most famously the Society for Psychical Research, whose tagline on their website is a quote from C. G. Jung: “I shall not commit the fashionable stupidity of regarding everything I cannot explain as a fraud.”

The pseudo-empirical basis on which so much of this technology is founded, that ghosts leave material shadows, or traces, upon the environment, ultimately tells us more about our own relationship to technology, and our misunderstanding of malfunctions and glitches, than it does anything to suggest that these particular, human, ghosts appear. It suggests other ghosts, created and maintained by machines, that we don’t often account for when we make things beyond an initial round of debugging and fine-tuning. The things we create will create other worlds that we only see, or realise, when they haunt us, something that, as I mentioned earlier, is coming further into the foreground as we look at these breakages in the technology we invent.

In a post for Sound Ethnography’s blog, Janny Li concludes her time with a set of investigators by summing up the emotional investment that this innovation accommodates: ‘But to ask if ghosts are real is to miss the point of how ghosts are made real by paranormal researchers and how their efforts might provide some insight on the ways in which many Americans think about the life and death, belief and evidence, science and the supernatural.’ Finding things to listen to can be just as powerful as listening to the things we find.

Notes 5: Baby Futures, Poly OS, and Open Sourcing Private Parts.


I can totally see why Tobias doesn’t call his weeknotes ‘Weeknotes’ anymore, as I’m pretty sure it’s around a month since I last posted. Undercurrent has finished, and been taken down, with all of the accompanying things learnt (i.e. you really appreciate your gallery volunteers when you’re running an exhibition that doesn’t have easy access to them). I’m slowly settling back into relatively normal life. This is a wee bit of a brain dump since coming back from San Francisco, so bear with me.

Last week I started Baby Futures, a blog to log the various speculative and fully-functioning objects, apps and services designed for our little ones during their early years. This followed a conversation between Farida Vis, Scott Smith and me, where, after hearing Farida’s brilliant talk at Improving Reality last year, we were introduced to the world of Gender Reveal Cakes, targeted, assumptive social media advertising, and the speculative notion that Klout could preemptively create a profile for your unborn child, in an attempt to ensure their social media capital. One particular example, Ultrasound, actually emphasises the point that, because of our sharing behaviours, unborn children will already have the beginnings of a digital life months before they actually enter the world. There’s something here about what these devices, speculative or entirely real, are telling us about our relationship to the next generation, where issues of control, monitoring and the Quantified Pregnancy have all become particularly interesting to me. And before you ask: being a woman (and, as Farida points out in her research, there’s the assumption that “is lady = wants baby”), I’m fully aware that there’s a tendency from some to gender this research, so please don’t. I’m welcoming submissions, so drop me a line if you’ve found anything.

What is happening to our personal data has become an extremely important concern in the past year, and rightly so. So what happens when it’s our very personal, intimate, data? Attending Arse Elektronika last week, San Francisco’s sex and technology conference at the CSC (I was in town, who wouldn’t?), I got to have some really interesting discussions about elements of technology, design and innovation I haven’t ever really given consideration to. If you’re a sex toy inventor using 3D scanning technologies to create customised sex toys, what happens to that file once it enters the network, especially if you’re a company with a commitment to open source? Hearing from Dr Kristen Stubbs, who specialises in new technologies in body casting, and manufacturers such as CoMingle, we started to have really interesting discussions about how to manage this, through Creative Commons and so on, and what happens when the systems that store these files aren’t secure.

At Arse2014 I also saw a really great talk by Meg and Adam Rothstein on the Gendered Internet of Things, where they pulled apart our assumptions about technology, gender and sexuality, with interesting case studies on Blade Runner, Her, and the QS app Glow. I’ll post a link to the talk when it’s live, but in short, they looked at our compulsion to gender designed objects, and what our choices relating to gender really mean when it comes to technology. We were asked to imagine speculative relationships between objects, something I’ve been thinking a lot about in regards to our humanising of our devices, as in my opinion, our tendency to attribute ‘life’ to things tells us more about ourselves than the technology. In short, computers don’t die.

I’ve not really written about Her, not really (and I’m not sure I want to), but there’s something rather important in that conversation near the end of the film between Twombly and Samantha, a section Adam pointed out as ‘The Poly conversation in the room’ during his talk. In the film, Artificial Operating System Samantha tells Joaquin Phoenix’s Twombly that she has the ability to love hundreds of people, because she’s designed that way. The conversation itself is something that I’ve heard numerous poly friends talk about having with friends and lovers, however in this instance, Samantha’s ability to love several people at once is a matter of processing power, not human capability. Discussing this with friends, we couldn’t draw a certain conclusion as to whether this was indicative of the film’s wider world or not, because we barely see any other relationships beyond Twombly’s or Amy’s. Ultimately, there’s a wider conversation here about technology and non-monogamous, non-heteronormative lifestyles. What does software for the polyamorous look like? Why is there not an option to have more than one partner on Facebook? Why do online pregnancy apps allow access to only one partner?

*Deep breath* I also made you all a mixtape, not that you’re no-one, but that it’s for all of you.