Thinking about artefacts – Future Stable, Present Unstable

In the last month, I’ve been spending a lot of time thinking about artefacts: more specifically, those that supposedly belong in the future but, for a brief moment, exist in our time. I’ve discussed them with students this year at IED Innovation and Future Thinking, where I was a visiting tutor, and I’ll be thinking about them again as part of the workshops I’m running with FutureEverything colleagues for the CityVerve project this week, helping those in control of the technology place humans properly into the narrative by using prototypes to explore user need.

There’s a lot of great work out there already, but I wanted to think a bit more about reality, whose reality it is that these artefacts affect, and what bringing about possible realities does to the present world, if only to prod at something in public. As ever, feedback is much appreciated.

In her lecture for LCC’s Speculative and Critical Design Summer School, friend and colleague Georgina Voss talked about ‘reconfiguring reality’ and prospective visions of our future: how artefacts such as the stock imagery for Old Street’s Tech City/Silicon Roundabout, or campaign buses, show how expectations create effect. These serve as aspirational objects, and yes, I can hear your screams of ‘by whom, for whom?!’, because these are arguably serving the aspirations of a very particular, highly privileged demographic. Much in the way my fellow Haunted Machines-er Tobias Revell wrote recently on development renders, ‘Images of ‘the’ future hold a powerful grasp over the things we orient towards as we innovate and disrupt our way to a receding hyper-real horizon.’

Voss goes on to talk about how the repositioning or recontextualisation of those images ‘marshall different forces to give you an idea of a different story.’ In her example, the way Tech City dominates a landscape in one map is shown for all its inadequacies by putting it into context with the rest of East London. It’s this repositioning/recontextualisation that the future artefact seems to balance on too, albeit deliberately.

You’ll have to excuse the wanderings of this next part. This is being typed up from two notebooks: one I keep on me at all times, the other for when I’m trying to make sense of the first. It’s not a very sophisticated method, but it works.

Essentially, one of the most successful, or perhaps most helpful, outcomes of a speculative artefact is that it makes the future stable and the present unstable. In communicating a potential, possible or plausible condition to a wider audience, a future artefact should allow you to see a glimpse into that particular condition while disrupting your view of the condition you are currently in, and in that there is a delicate balance. The interesting thing, when it comes to utilising future/speculative artefacts, is the state in which they exist simultaneously, something which perhaps applies more to placing these artefacts in the wild (as opposed to a deliberately critical context such as a museum, gallery or design studio), where there are fewer controlled conditions and a more brittle balance.

As an example, a couple of years ago I had the pleasure of working on Winning Formula with Scott Smith and Near Future Laboratory for FutureEverything 2014: a speculative newspaper from 2018 that imagined a future in which data becomes far more tied up in sport than it already is. Copies were sent out with the Manchester Evening News, the city’s largest free newspaper, and handed out on street corners. Scott went out and spoke to people who happened to pick one up, and took some crafty situational photographs in the process. Of course, as it was outside the gallery (the National Football Museum was our official host), there was no immediate fictional frame set around it, and therefore it hung in a particular, wobbly balance.

Which kind of works like this. It’s very much a work in progress, and as someone with no formal training in making diagrams that communicate things, I hope this at least communicates something.

 


That part in the middle acts as a magic circle, a construct in magic I find useful when thinking about how future artefacts interact with the world. The magic circle creates the conditions in which ‘magic’ can be done. In this use, the magic circle creates the conditions in which a speculative object can exist in two temporal realities, even when one of them (the future) is unstable. The current environment ‘stabilises’ the potentially unstable reality, and in doing so destabilises that current environment in the process. It’s a trade-off, another construct in magic that is essential to consider: if you want something, you can be damn sure something will be taken from you in the long run (aka go watch The Craft).

This delicate tension breaks when the object is revealed as fiction, or when one of the conditions changes, though I’m very aware that one of them, the future, is far more amorphous and perhaps needs a better definition than ‘condition change’.

So there are many questions here that I’m going to try and wrestle with in my spare time (ha).

  • Is it fair to say that the future will always remain an unstable condition? At the moment I’m thinking yes, but points of stability would be interesting to consider.
  • Is there a chain reaction here? In using the ‘stable’ current condition to stabilise a future artefact into the present, do we leave the present unstable as a result, following the exercise? What if the future object is never discovered as a fiction? Can the present accommodate an ‘alien’ object, and what does this do to this relationship?

There’s also the very important matter of ethics. When a future artefact is placed into the wild, without a deliberate framing (such as an exhibit, or with interpretation), de-immersing and debriefing your audience is incredibly important, otherwise your investigation of the future is built on misleading and hoodwinking those who may potentially occupy it. How you bring others out of your magic circle (a practice in witchcraft that has its own rules and considerations) and disrupt that balance is another thing I’ll have to work out, but I think the critical designers have already done some good work on this.

Anyway, there’s a night’s ramblings on the table. I’ll come back to this, but it’s part of my ‘let’s put more work in progress on show so that I actually commit to thinking about it’ resolution for 2016.

 

A Short History of Death and the Internet and the deaths we have to come – Transcript


This is a rough transcript of the talk I gave at The Death Forum, a series of events in Manchester that look at contemporary conversations around death, dying and grief. Thanks to Sarah Unwin for the invite, and to artist Ellie Harrison, whose beautiful work was also presented at this event. This barely scratches the surface of a lot of thinking about what we will need to start preparing for as part of end-of-life care. Thanks to Team Changeist for the brain fuel/feedback.

If it’s not obvious, I spend a lot of time in the future; in particular, telling stories about it to figure out what it might mean. In this case, I’ll be talking about the future of death, and how technology might enter into it.

So, I wanted to start this talk with a small story of a possible, potential future, informed by some of the weak signals that are coming through the cracks, giving us signs of what might be to come. You may open the letter under your seat; read it when you are ready.


You are sitting in a room, in your house. You are older than you are now, maybe a lot more, maybe not so much more. You are nearing the end of your life, and you know that you are; you have been told well in advance by a series of health scans and monitors that started their long, slow ritual months before today.

The fridge that you reluctantly bought on the advice of your doctor, that was once so good at stopping your midnight feasts, stops calorie counting and gives you what you would like, not what you need. It stops telling your doctor that you have had a lot of sugar today, but instead records onto its log, which will be read by your family after your death, that you enjoyed it, immensely.

The things around you start to burn out, break, and let themselves be damaged, because they are not needed anymore by you. They have reached the end of their intended use, and are not to be used by anyone, anymore. Their data is locked up, far away from where you can see it, eventually used by other toasters, and kettles, and coffee machines, so that they know when they should plan to break down for others in the future.


Somewhere, your children are arguing about who will manage your data estate, not knowing how much you have scheduled for permanent deletion, and who will have access to what. Their letter will be sent once the right conditions are met. Your son, whom you have not spoken to for twenty years, will not have access to any of your data, only some from the years before the argument. Your daughter will have all that you have left behind.

At the moment of death, the camera positioned by well-meaning relatives to check up on you flicks from red to amber, indicating that it has stopped broadcasting to anyone but your GP. You were always aware your death would be hidden from your family and friends, to save them the misery, and that the footage would be wiped if it was anything sudden or, in this case, quiet and slow.

The house goes to sleep; the heating, knowing that you are no longer active, starts to cool. As it receives the slowing data from your heart rate monitor, strapped fastidiously to your wrist, it turns the air conditioning on full blast. The house, given express wishes by you, only calls the ambulance once you are dead.

Slowly, your data is disappearing, day by day as the devices that collect and store it notice the inactivity, or just predict when it should stop. Only a few stores remain, your finances and your health scans that you promised to keep for your insurers. Finally, left only to a few, your photographs and meaningful locations collected over the course of your life are given to the ones you love.


As someone who looks towards the future, I can’t help but think about the things we haven’t yet realised will be a problem, or the things we are developing that might become important in our lives, and therefore important in our deaths. I like to think about the future worlds we might live in to see where these problems might lie, because, as futurist and science fiction author Madeline Ashby said in her FutureEverything 2016 talk, ‘your utopia might be someone else’s dystopia’. The things that are right for you might be devastating for somebody else.

I tell this particular story as a way for us to start thinking about death and our digital lives in a slightly different way, because the internet isn’t just webpages and social media, it’s now becoming things.


You’ve probably heard a little, or a lot, about the Internet of Things, but for those who haven’t, this is the act of connecting things – toasters, fridges, sofas – to the internet, so that you can have a supposedly ‘better’ relationship with them and make them more effective and useful for you. I am, to be quite honest, an Internet of Things sceptic. There are places the internet should perhaps not be, but perhaps it’s too late for that now, particularly when you consider that the list of things now includes a Bluetooth-enabled tampon. The internet is making its way into our most private and intimate spaces, with no way of knowing who is watching.


But first, let’s look at death, and the internet, as it is now.

As you’ve probably heard, a recent study estimated that by 2098 the number of ‘dead’ Facebook profiles – those of people who have passed away – will outnumber those operated by the still living.


We’re perhaps one of the first generations to have some sort of awareness of an afterlife. Not in the clouds and pearly gates way, but in the fact that when we die, or when the people that we love die, they will go on to live, indirectly, through the digital traces they leave behind.

So what does it mean to know that you are leaving a version of yourself to haunt the networks you once lived with? How do we deal with this very specific kind of grief?

There’s the obvious, sad, presence of the dead on social media sites which no doubt many of you will have encountered. Asking you to wish a happy birthday to a friend that died four years ago, or your mother, hurts in a way that we aren’t used to.


The thing that has always freaked me out isn’t the reminder of birthdays, which is understandably jarring, but the smaller micro-interactions: when you’re suggested something, a band perhaps, because they liked it too. It’s a strange state of affairs when a dead relative could accidentally market a specific type of soft drink to you. Suddenly, lots of hidden systems come into the foreground when the systems that you use day to day break and create these ghosts. As in the case of Rehtaeh Parsons, a young woman whose photograph was used to advertise dating on Facebook months after her widely reported suicide, her friends and family baffled by the algorithm that Facebook’s third-party advertisers had used to let this happen.

Facebook have made some concerted efforts to deal with this: for example, a memorial page can be set up for the dead by a dedicated ‘legacy contact’, once you have shown an obituary or newspaper article (which is open to a very particular kind of abuse). But often you don’t know to do that, because sometimes these processes aren’t clear, which leaves you with this person, locked in time, forever blindly interacting with you.

Upon request, Twitter can close accounts and provide archives of public Tweets for deceased users. Family members are required to submit a formal request to Twitter’s Trust and Safety department, with Gmail and Hotmail also requiring death certificates. Yahoo, however, will not allow you access at all once a person has died, unless it’s by court order.

You may have heard of the appearance of services that supposedly look to help you grieve by prolonging the memory of the dead, or by making you an immortal yourself, preserved in the great, undying body of the internet. So I thought I’d take you through a couple.


DeadSocial is a service which will write and schedule messages as part of your ‘digital legacy’ after you’ve gone, contacting your friends from beyond the grave. From their website:

‘if you died of heart disease, you might schedule messages every six months reminding friends to get check-ups. You may leave specific messages for loved ones’ birthdays or for a spouse on your anniversary.’


Another site, Eterni.me, offers us the chance to be ‘virtually immortal’, feeding off our science fiction hopes of one day living forever, in whatever form, and in whatever body.

From their website:

‘What if You could live on forever as a digital avatar? And people in the future could actually interact with your memories, stories and ideas, almost as if they were talking to you?’

They want to preserve for eternity the memories, ideas, creations and stories of billions of people. Again, from their website: ‘Think of it like a library that has people instead of books, or an interactive history of the current and future generations.’ Something they call ‘An invaluable treasure for humanity.’

It all sounds well and good from an archaeological point of view, but I worry when there is a paid, private service offering this. Who are they to be the preservers, and how do we know what they will do with it? Whose best interests are actually at heart? What happens if they sunset their service? Because though you might want to be immortal, a company’s business model may not be.


Eterni.me might seem strangely familiar to many of you, as it’s similar to the Black Mirror episode Be Right Back, in which a dead husband is brought back to life by an Eterni.me-like service, suggested to his grieving wife by her well-meaning friend. Once this recently resurrected version of her husband is out of the box, in the most literal sense, his wife is rightfully freaked out, because no matter how much data she inputs, he will never actually be her husband. In the end, she locks him in the attic, occasionally visiting, perhaps because she cannot bear to see him die again, perhaps because she doesn’t want to forget him. But do we really want that, and who actually benefits from the creation of these services? Because if we are to be immortal, who are we being immortal for?

One of the main projects I work on is Haunted Machines, a research project I started with artist Tobias Revell last year, which looks at the proliferation of magical narratives and analogies in the way we talk about technology. We find these metaphors helpful because they often tell us much more about our own anxieties, hopes and worries about change and uncertainty, and in this case, technology.


The reason I mention my work around magic is to draw your attention to a branch of it, spiritualism, because in some ways this is what these services are. Back in the late 1800s, the practice of spiritualism became incredibly popular, particularly with the middle classes. It ranged from parlour games such as the ouija board, which originally was just a board game, to the much more serious, and more dramatic. Born of the Enlightenment, when god and the afterlife were being challenged by modern science, people were looking for something beyond themselves. They wanted to talk to the dead. A medium, a channeller of the dead, would be hired to bring the spirits into the room. It was a horribly exploitative practice, as many of these performances preyed on the grief and suffering of bereaved people, asking a fee for this peculiar and unique service. Chairs would be thrown, lights would switch on and off, and the medium would mutter, whisper, or scream messages to their desperate audience.

In the early 1920s, Scientific American announced a prize for any medium who could demonstrate telekinetic ability under scientific controls. Science and spirituality have never been far apart, ever since the days when alchemy was classed as a natural science.

On the judging committee was one of my heroes, Harry Houdini, famous illusionist and escape artist. Over the course of this prize, he debunked hundreds of faux-mediums, publishing stories for the public that explained exactly how they’d done it.

So what does this have to do with technology, and subsequently, the internet? The reason I find these kinds of stories so fascinating is their weird, innovative uses of technology to ‘summon’ the dead. One of Houdini’s biggest rivals in this competition was Mina ‘Margery’ Crandon, otherwise known as the Boston Medium, who used an elaborate rig of bells and whistles to feign the appearance of her dead brother. Once Houdini had worked this out, she started creating increasingly elaborate contraptions to fool him, ending with a box in which her hands, legs and neck were visibly shackled while she still managed to bring about her spirits. Houdini then recreated the box part for part and produced the exact same response, rendering Mina’s abilities a very elaborate, though rather impressive, lie.


This exploitation of the bereaved through technology is not all that different from sites like Eterni.me, or that eerie service from Black Mirror I mentioned earlier. To me, these developers and designers are no different from the Victorian spiritualists, in that they sell a false set of hopes to the recently bereaved, who just want to hang on a little longer. There is so much of us on the internet that it is easy to bring back a version of a life, of a person, that will operate as if they had never left, but often only for the right price.


One of the areas of technology I worry about most is a group I like to call ‘Means Well’ technology. These are devices, services and products designed to help us with sensitive things – anything from suicide, to sexually transmitted diseases, to mental health – but which so often end up failing, with devastating results, because they do not account for the very human, very unpredictable, and very messy ways that we interact with them. The key example is the colour-changing condom, invented by well-meaning teenagers for a science prize, which changes colour when it detects an STD. What it doesn’t tell you is how to prepare for this event, or how to react if it happens. It’s never about the technology, but the conversations and behaviours around the things we think might be solved by technology. We endlessly create new things to make us live better, and in this case make death easier to deal with, without properly thinking about how these technologies are changing our perceptions of the things we are trying to place a big, technological sticking plaster over. In the end, other people’s technology happens to us. How do we have conversations around death when there’s all this technology in the way?

Which leads me neatly back to our story at the very beginning, a very good place to start with these sorts of things. In this story, I talked about a house connected to the internet in many ways, with the person inside subject to all manner of data collection, from calories, to movement, to body heat. This story is in many ways both a utopia and a dystopia, with ubiquitous, constant data collection balanced out by the subject’s ability to control parts of it. There are still squabbles over which family member gets what, and there is still a small scrap of dignity in a camera with a crucial feature designed in by someone who thought that perhaps your family wouldn’t want to watch you die, but medical science would. In reality, the future does not look all that different from the past.


There are devices that slowly shut off and then let your family know, something I modelled very loosely on the brilliant Networked Mortality work of Willow Brugh, who studies and imagines the structures we will need to deal with your corpus and corpse, and distribute them to larger society at the time of your death. It makes the process less scary and less overwhelming through a very careful, discreet and clever use of technology: small triggers and a network of trusted people using technology, rather than technology as saviour alone. It’s fantastic, and truly the best example of tech being used with actual human behaviour and emotions in mind.

In this story, data is deleted automatically once it has done its service, an idea I love from social media researcher Nathan Jurgenson, who has argued the case for ephemeral data: data which has a shelf life and expires once it is no longer needed, or perhaps once it has been seen. Meanwhile, more and more steps towards permanence are being made by designers and developers, from the cloud promising to store everything and never run out of space, to efforts by certain parts of the web community towards a ‘permanent internet’ which allows nothing to be deleted, so that nothing can ever ‘die’. What if we just let data have a life too, and how long should that be?


I tell these stories of future possibilities because they are a way for us to work out the questions we really want to ask of the future, and to comfortably, or uncomfortably, rehearse some of the bits we aren’t sure of yet. At Changeist, we do this with designers, engineers, product managers, artists, curators, all sorts of people, to help them see a little differently, often using objects that tell stories, like the letters I told you to open at the beginning, which, although mundane and far from the realms of science fiction, help to embed the story in the real, so you can start to imagine what it would be like to live in that reality.

We’ll still probably have letters in 50 years, no matter what techno-utopians might say. Many of these stories may never come about, but it’s a good way to push a subject to its limit, to better look back and see where things might need to be rethought.

There’s a trend in current innovation and design to put things out now and deal with the consequences later, but I am telling you, an app update is not enough, not to a grieving family. If we, and the communities currently putting out technology, told more stories, and imagined more possible, potential, probable futures, we might learn more about ourselves and the way we will react when things go wrong, so that we can better understand where to step in and make a change, and be more ethical about the decisions we make.


I wanted to end on a quote by Anthony Burgess from A Clockwork Orange, arguably one of literature’s greatest dystopias, which I think best sums up this imagining of the future I’m talking about: something that forever causes you to think a little differently about the way the future unfolds, once you have conjured it into being, even for a moment.

“We can destroy what we have written, but we cannot unwrite it.”

 

Writing: Three Short Futures

I’ve always loved writing stories, from setting myself a challenge last year to write one a day for thirty days to a theme (I failed, spectacularly, though had fun doing so), to my own more private experiments that never see the light of day.

For Changeist, I’ve written three short near-future narratives on children, data and the Internet of Things to kick off our series of stories, Three Short Futures, over on our Medium channel Phase Change. Each set will go along with a theme, and this time I wrote about some of the things that have risen to the surface in my ongoing research into Baby Futures: paternalism through technology pushing the notion of ‘helicopter parenting’ further, potential governmental and educational interventions in data collection of the young, and infrastructural changes and new career prospects as information diagnostics in medicine become just as important as a health checkup.

Let me know what you think, I’ll be thinking on more in the coming months.

Talks: Uninvited Guests at Lift16

Last month, Nicolas Nova invited me and my Haunted Machines colleague Tobias Revell (his video here) to speak at Lift16 in Geneva, Switzerland. I got to talk about ghosts as technological phenomena, the specific frictions that occur when you don’t think about the potential consequences a technology may bring, and how to use futures, in particular narrative futures design, to anticipate them, or at least acknowledge that frictions will occur.

The fantastic Open Transcripts project also transcribed this talk, making it accessible for those for whom videos aren’t always a good choice.

I also, rather obliviously, managed to name my talk after one of my favourite pieces of recent design fiction, Uninvited Guests, by the excellent studio Superflux. It’s an incredible example that I often cite when talking about IoT, as it perfectly examines the paternalistic nature of these new technologies, and how they will impact those who might not necessarily want them. Video below.

On the nature of dreams

[Image: Crap Futures’ ‘journey of tech’ diagram]

Nicolas Nova recently posted a response to Crap Futures’ diagram (pictured above) outlining the journey of a technology, suggesting where you can remove elements to see what the journey looks like, and I’ve been thinking a lot about it since, along with Julian and James’s original post and diagram. I guess this is a response to a response, and part of my own effort to think out loud.

I spend a lot of time thinking about the narratives and fictions that arise from innovation, particularly those that are willed into being as a justification of that technology: not science fiction as such, but rather idealised use cases, hundreds of ‘people’ created to support these stories; and even when those are broken by their creators, they are still within the narrow, fictionalised world of that technology’s ideation. See Nick Foster’s Future Mundane for some great work on this, mainly about designing away from the hero and towards the supporting, often unnarrated, background cast.

With James and Julian’s diagram, I’m mainly intrigued by the recurring feedback loop at the centre, the red part (if it is a circle, as Nova says, and not a unidirectional cycle): the point at which the technology exits, and how that eventual exit feeds back into the system to change it, potentially irrevocably. If, as Nicolas suggests, certain elements of this story are ignored – which one could say they already are (how much stuff is made without consideration of the cautionary tales about it, hoping, vainly, to rewrite the narrative around it?) – or made more explicit, how does this channel back into the system for the non-expert? How does this dream evolve, particularly if certain narratives become stronger, as more Kickstarter videos will into being a series of literal stories (here is the world that could exist if you fund us) that go on to influence the way we imagine our technology? Arguably, there’s a quasi-directional change in the narratives we choose to dream about technology: from science fiction, to advertising, to product vision videos, to Kickstarter videos, maybe, and curiously, steering further away from the fantastical (which is different from talking about the way a thing works, by the way, which is still seen as magic). I can’t help but think about Descartes’ dream argument here, largely because he used the fact that we dream at all as a reason not to trust the operation, or reliability, of our own senses. Considering reality, and when things move beyond dreams into reality in different contexts, and where they share or fight over space, is something the Cross Quadrant Working Group (with whom I may have an association…), although a thoroughly informal outfit, try to make sense of.

Maybe there are many different versions of this loop, depending on who’s looking at it, who is using it, and who is telling the story of their own journey, or their perceptions of a journey. Elaborating on Nicolas’s idea, I’d be interested to see a series of different overlays of this diagram, each telling a different story, much like the work we’ve been doing at Changeist on shifting users, contexts and applications, and the clashes that result. Could there be a taxonomy of dreams resulting from selective journeys, and where could new potentials, and new ways to map their potential future contexts, be found?

 

Talks – Ghost Stories, Theorizing The Web 2015

So, I was in New York earlier this month giving a couple of talks to some very kind people who asked me to. Below is the talk I gave at Theorizing The Web 2015, a conference which had insanely smart folk talking about all manner of things, from police surveillance, to digital gender prosthesis, to algorithms as deodands. You should watch the videos here. I was on a panel with Daniel Rourke, Solon Barocas and Nick Seaver, presided over by Sara M. Watson, and it was really rather enjoyable. Be warned, this talk has an unhealthy amount of silly cultural references, because that’s how I seem to make my point these days. If you’d rather see me faff over it in person, the video is here.

Haunted Machines was a conference that looked at narratives of magic and technology, with part of it looking at where they are used to tell us about the anxieties we have around technology, or where we use them to explore those anxieties. I’m quite interested in ghost stories in particular because they can reveal, or be an interesting way of exploring, these anxieties; where the voices in the static are coming from, why the pipes are creaking, and what they tell us about what our technology is doing, and can do.

So I’m going to use a load of slightly ham-fisted contemporary narratives to signpost these, and look at where ghosts exist in two personal, increasingly algorithmically mediated, spaces: the social network and the home.


During Victorian times, the rise of spiritualism came from anxieties about rapidly advancing technology during the Industrial Revolution, stemming from a loss of control over where we were going with our industrial progress. One of the most famous mediums of the time was the Boston medium Mina ‘Margery’ Crandon. She was the subject of Houdini’s great spiritualist hunt, where she was tested, ironically, using a series of increasingly sophisticated mechanical artefacts and rigorous technological processes. During this period you also had writers like Edgar Allan Poe writing stories like The Tell-Tale Heart, which taps into that fear of not knowing where things are coming from.

Then fast forward a hundred years to the 1950s, when we marketed our technology as science fiction, the imagined future, before shifting to marketing technology as magic in the ’80s and late ’90s.


In this advert by Honeywell, a spectral presence breaks out of the machines. Emails, you say? How do they work? Solution: magic!

Apple’s slogan ‘It just works’ angers me, as it’s pretty preposterous. You don’t need to worry about how it works, just that it does, so you don’t need to know what you’re letting in when you allow it to work around you. You are placed as the magical assistant, or rather, the audience member dragged on stage to make the magician look better. Their most recent public campaign, ‘You’re more powerful than you think’, is nothing but a powerful obfuscation technique, making you think that you are doing the magic when in fact you’re a component.

And in this similar vein, with algorithmically mediated culture, we see where these problems of agency, control and intent create the capacity for ghosts to haunt us. The fact that companies are pushing forward this narrative is an interesting way for us to analyse why pulling apart these technologies is difficult for us.

Student Rehtaeh Parsons, after months of bullying on and offline following her rape, eventually took her own life. Her image was circulated across media channels, blogs, and social media sites, where it was eventually collected into the data banks of an image-scraping algorithm used by a third-party service. Her photos appeared months later in a Canadian dating advert on Facebook. Her family, and those who recognised her, were horrified, and rightly so. When things like this happen, we imagine there is something, or someone, to prevent this behaviour; we don’t anticipate that the decision was governed by an algorithm operating blindly, instructed to gather images of women from a certain, specific demographic.

Algorithms do not know the context of a photograph; they don’t understand, or pre-empt, the consequences of their own function. The algorithm is blameless; it is us as creators who are essentially at fault, through our faulty application. However, this consistent failure to understand the wider systems at work means that the algorithm is quickly becoming an appropriated technology, and, in this respect, a ghost.


We are, as Joanne McNeil has written, subject to algorithmic gaslighting, where these unknown systems are ‘algorithmically boxing you into a past while you are trying to move on.’ We are reminded of moments and memories that have been algorithmically served to us that we didn’t want to see, or have a choice in seeing.

Then there’s Unfriended (trailer), due out in the next month or so: a horror film in which a dead woman supposedly ‘haunts’ her friends through various social media channels, which isn’t that different from how death actually operates in social media spaces.

Every year I am reminded of a friend’s death by Facebook’s cheery, unknowing, suggestion to wish them a Happy Birthday. Many of our friends still do. We collectively experience the ghost in the machine language; indirectly encountering and interacting with the social network activity of the dead. There are many occasions where I’ve been subject to the simple statement ‘X also likes this’ on Facebook, when in fact the statement belongs firmly in the past tense.

When you look at traditional ghost stories, or people’s accounts of more ‘traditional’ hauntings, you hear of spectres that follow you around and knock over the things in your house; in this case, technological ghosts disrupt the environment you are making online, enacting emotional violence in their role as digital poltergeists.

The problem here is that you don’t really see it until you notice it, and once you notice it you don’t know what to do with it. You become powerless. Where is the exorcist in this context, or rather, who?

As another personal space, the retrofitted smart kitchen, once vacated, becomes an algorithmically populated tomb of needs, wants, aspirations. Some paranormal investigators believe the house is a ‘stone tape’, a concept derived from Thomas Charles Lethbridge’s idea of a life ‘stored’ in rock, brick, and other materials, and ‘replayed’ under certain conditions such as emotional triggers or historic anniversaries and events. Of course, the idea of stone recording memory is under scrutiny, but if there are systems, software and cables inside the stone that are recording your movements, your shopping and temperature preferences, always listening and acting upon your day-to-day life, then this is no longer a mythology.


While you’re living in it, you are inviting these houseguests in. If they aren’t guessing what you’d like to put in your fridge, and allowing you to haunt your loved ones without actually being dead, they are making the house hostile when you act against your recorded behaviours. These are the ghosts that don’t want you there, and reject you. It used to be that your house was where you went to shut people out, but here you are shutting systems in with you, enabling the conditions for a haunting.

This hostility has happened to me before, where I’ve been locked out of my account because the algorithm my bank uses to track my ‘normal’ behaviour didn’t expect me to be travelling, though obviously this is a pretty privileged position to be in. Then there are the more extreme cases: the systems that assemble your search history into a potential terrorist threat, as in the story of Michele Catalano and her husband, who became the targets of a visit by federal officials based on nothing more than their Google history, where terms such as ‘backpack’ and ‘pressure cooker’, and news articles their son had looked up out of curiosity, had supposedly marked them as a target of counter-terrorism. In this case, the NSA denied using the data and search history of ‘average’ citizens, although this is definitely up for debate.

And although you can hard reset your kitchen, the profile you’ve built up could still exist, if current cloud-based technologies are anything to go by. Stacks of temporary files mounting up on each other like layers of sediment, each one a frozen profile.


Artist Wesley Goatley, while programming his black boxes for Wireless Fidelity, an artwork that maps sounds to SSIDs to sonify a city’s wifi networks, had first-hand experience of this phenomenon. On the last day, his boxes stopped working, because a file on the USB wifi adaptor had filled up with temporary data and metadata. You can’t access that file unless you know where it is, and you need very specialist knowledge to even know what to look for to diagnose the problem.
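To make that opacity concrete, here’s a minimal sketch, in Python, of the kind of diagnostic you’d need just to see this happening: walking a filesystem and surfacing the directories quietly filling up with data. The function and the throwaway demo directory are invented for illustration; this is not Goatley’s actual fix.

```python
import os
import tempfile

def heaviest_dirs(root, top=5):
    """Return (directory, total file bytes) pairs under root, largest first."""
    totals = {}
    for dirpath, _dirnames, filenames in os.walk(root):
        size = 0
        for name in filenames:
            try:
                size += os.path.getsize(os.path.join(dirpath, name))
            except OSError:
                pass  # file vanished or unreadable; skip it
        totals[dirpath] = size
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)[:top]

# Quick demonstration on a throwaway directory standing in for the adaptor:
root = tempfile.mkdtemp()
cache = os.path.join(root, "cache")
os.makedirs(cache)
with open(os.path.join(cache, "junk.tmp"), "wb") as f:
    f.write(b"\x00" * 4096)
report = heaviest_dirs(root)
```

The point is less the code than the precondition: you have to already suspect the sediment exists, and know the machine well enough to go digging for it.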

So with algorithmically mediated systems, you won’t know where your power is being removed, or how to stop it, unless you already know where to look. With a house, you know where to tighten the leaking tap, but you won’t necessarily know why your fridge is misunderstanding or assuming your behaviour, or even leaking data, and how to stop it.

This language also does a lot to shift the burden away from certain groups. When we think of a haunting, such as the reappropriation of Clarke’s third law that Tobias summoned with his work on Haunted Machines – any sufficiently advanced hacking is indistinguishable from a haunting – we think it’s the hackers we have to worry about, rather than the companies and organisations using algorithms, building backdoors into our technology, and enabling biased, prejudiced modes of search in our homes. The NSA and GCHQ position themselves as friendly ghosts, house guardians watching out for us, when we know that this supposed benevolence isn’t the case.


In this respect, it’s not too far a stretch to imagine those who could potentially control, or have influence over, your house gaslighting you, or giving you up. I don’t mean the shiny smart kitchen world you see in product vision videos, but the systems and controls that could be worked into buildings. Landlords who really want to know what you’re doing in there, who want to cut off your electricity, or watch who’s subletting that extra room, could potentially use these as instruments of violence. Smart technologies are the realm of the rich and powerful, and in this respect they are the necromancers, the ones raising ghosts.

So what now?


In a lot of ghost narratives the ghost goes away; it is either exorcised or given closure (as in Ghost, where Patrick Swayze disappears beyond the veil), but what if you can’t do that? We’ve pretty much realised that algorithmic sorting, mediating, filtering and their impact aren’t going to go away, because our contemporary networks are built on them. There are things we can do to obstruct and confuse them, by fuzz testing and flooding the data, but how do we understand the longevity, and the potential and plausible futures, of algorithmic mediation?
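As a minimal sketch of what ‘flooding the data’ can mean in practice: mixing a genuine signal into a stream of decoys so a profiler can’t separate the two. Everything here (the topics, the ratio, the function name) is invented for illustration, not a real obfuscation tool.

```python
import random

# Decoy interests to drown a genuine query in; chosen arbitrarily.
DECOY_TOPICS = ["gardening", "carburettors", "opera", "falconry",
                "knitting", "meteorology", "chess", "baking"]

def flood(real_queries, noise_ratio=5, seed=None):
    """Mix each genuine query with `noise_ratio` random decoys, shuffled."""
    rng = random.Random(seed)
    mixed = list(real_queries)
    mixed += [rng.choice(DECOY_TOPICS)
              for _ in range(noise_ratio * len(real_queries))]
    rng.shuffle(mixed)
    return mixed

# One real search hidden among five decoys.
stream = flood(["pressure cooker reviews"], noise_ratio=5, seed=1)
```

The real query is still in the stream; the hope is only that a blind profiler can no longer tell which entry to believe.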

There are a few examples of this acceptance and enactment of control over systems we can’t banish in contemporary popular narratives (and please let me know if there are older ones). I asked a lot of people whether I could spoil a certain film from last year, so I won’t name it, but at its end the ghost doesn’t go away; instead it is kept, with caution, in the basement, fed regularly and strictly, and consistently understood as a potentially negative influence, its keepers knowing exactly how much rope to give it. At the end of Beetlejuice, Lydia gains control over the ghosts she cannot get rid of, including the ridiculous scene Georgina Voss reminded me of, where they cause her stuffy parents to dance.

So is there a place for invoking hauntings, to understand your role in creating ghosts? Because, as Derrida reflects (because you can’t discuss hauntings in contemporary culture without dropping him in there somewhere) ‘The ghost remains that which gives one the most to think about – and to do.’

Ghosts are a constant reminder of the people, and in the case of technology the systems, that came before, and those ghost futures we once envisaged start to become the ghosts of how we once wanted to live. If we abstract the term to also mean ‘there were things before you, and things after you that will become ghosts for others’, then perhaps we can better understand our capacity for haunting, which will in turn allow us to better imagine the technological futures we want.

So, is there a case for where ghosts can be helpful, and where we can deliberately insert ghosts into systems to see where the capacity for haunting lies?

I’m not trying to get engineers to write ghost stories, that’s a little too idiosyncratic, but rather advocating the development of a foresight-based socio-technical method that allows us to understand not only the potential ghosts our technology enables, but the systems that engineers are subject to, and where those systems are haunted. That means bringing in anthropologists and artists to help create and play with narratives where the potential and plausible consequences of the engineering process can be explored outside of Gantt charts and product meetings.

So I’m pondering a form of code-based foresight that allows cross-disciplinary conversations in engineering to happen, creating near futures to know where our technology could end up, and invoking the ghosts that appear as we lose control. As we know, speculative scenarios and foresight are not a new thing, but I wonder where we are allowing engineers space to explore the impact of their work, because computational systems are not neutral: they have political, cultural and social biases written into their selection and application, but are often treated as neutral. It’s not about making the products that will use these algorithms, but exploring the algorithms themselves as they are reappropriated, applied, reused, misused.

It’s still something that I’m working on, and I’m interested in speaking to others here about it, so drop me a line on Twitter, or drop me an email.

Here’s the talk where I try and be funny. Thanks Theorizing the Web, you were all great.

Silicon Cargo Cults

Below are a few thoughts with which I opened the second session of Haunted Machines at FutureEverything yesterday. I’ll probably type up a few thoughts on the day itself in due course, but Tobias and I are currently plotting its future as I write this. Thanks to Paul Graham Raven, Deb Chachra and Eleanor Saitta for their feedback on this.

The worlds in which startup culture survives, along with those dictators of innovation, are encircled by chalk and salt, hiding the methods, the decisions, and the deliberations of those who control our technology. Like Latour’s black box, we see the magicians enter, and the magic emerge, but not the spellcraft that took place. This protects them from what they will eventually summon, from the consequences of their actions, and from the mythologies that arise in their place.

To look at these imaginings of innovation, we can turn to the concept of the cargo cult, those groups left in the wake of colonising societies that spring from the belief that rituals, spells or summonings will lead to the gift, and eventual abundance of material wealth and advanced technology. In this society, there is the original, the plane flying overhead, crafting mythologies of the world just out of reach, and with it the will to bring it home. In this form of sympathetic magic, these cargo cults make straw planes in the likeness of real ones, false airstrips from repurposed plots of land, attempting to will them into existence in symbolic form.

We see this in the rhetoric of innovation, where new areas of anticipated technological growth look at the image of the original, the sacred, and see an image of their own to emulate.

Settling in an area and aspiring to be the ‘Silicon Valley of X’, or the ‘Silicon Roundabout’, or village, or swamp, is a form of sympathetic magic, and in recreating it, a cargo cult appears. It is the equivalent of building a straw plane to bring real ones, but instead of a plane, they build the world around it, without any understanding of what will cause it to burn. The same, corrupted, structures are replicated without any understanding of the fleshy, real world outside, because the original is sacred, and the original is divine.

Earlier this year I went on a tour of London’s Silicon Roundabout, and among the complex, mythologised storytelling of the area’s history, the tour guide called upon Matt Biddulph’s naming of the area, said to have happened at a party for the opening of the startup Dopplr. In fact, it was a sarcastic tweet sent over lunch, which has found its way into the canon and eventual lore of the technology world as the artefact of a summoning.

So how do we corrupt this mythology? How do we dispel, and debunk, the magic?

Notes (Rants) from Berlin: On Art and Engineering


The bauhausarchiv at dusk, which made me deliriously happy

Last week I got to hang out in Berlin for a week with a bunch of terribly smart people for the media arts and technology festival transmediale. I think anyone who was near me, or heard my stories afterwards, can safely say I had a lot of thoughts.

Aside from the endless, ceaseless discussions about algorithms, which I found both useful as a researcher who looks closely at them, and frustrating as someone who looks closely at them, we had one or two really important conversations. However, I’m going to quickly note down thoughts on this whole algorithms malarkey first. These are very much notes that I scribbled in panels, on steps, on the S-Bahn and on the plane, and which aren’t fully formed yet by any means.

One of the realisations I saw slip into view was the sheer misuse of the word ‘algorithm’. It hung in the air like dust, and was pulled into almost every conversation as a catch-all explainer for why computational systems were messing with us.

On the opening night, one panellist made magic of the word, reinforcing the idea that we don’t know what algorithms do, that we can’t control them, and that they go on to invent themselves like an ever-replicating organism. This, I’m afraid, is a literal fairytale (oh, wouldn’t it be easier if it was? I’m kidding).

I realised that half the time this is because the very use of the word ‘algorithm’, and the weight of supposed meaning we place behind it by throwing it into these contexts, removes the humans that created them entirely. When we talk about algorithms, we never talk about the person; we talk about the systems they operate on, the systems they facilitate, and then the eventual consequences that fall when they supposedly ‘malfunction’. Throwing this out onto Twitter, my friend, the talented creative technologist Dan Williams, succinctly reminded me that we should always be replacing the word ‘algorithms’ with ‘a set of instructions written by someone’. They are made, and written, by human beings, just like all technology is (a general statement, I know, but it all began with us). By removing that element when talking about them, we cease to have any understanding of the culture and social systems behind them, which arguably are the reasons why algorithms are having such a huge impact on our lives. If we’re going to have constructive conversations about the cultural and societal impact of algorithmically mediated culture, we have to remember that there are humans in it, and not just at the other end of the black box feeling the vibrations. As Matthew Plummer-Fernandez pointed out in a panel on Data Doubles this year, confronting complex computational systems demands a socio-technical, not a purely technical, solution.
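Dan’s substitution is easy to make concrete. A hypothetical sketch of a feed-ranking ‘algorithm’ (the weights and field names are mine, invented purely for illustration), which turns out to be nothing but a set of instructions, and judgments, written by someone:

```python
# 'The algorithm' that decides what you see is only ever instructions
# somebody wrote; every weight below is a human judgment, hard-coded.
def rank_posts(posts):
    """Order a feed by a score a person chose to define this way."""
    def score(post):
        return (2.0 * post["likes"]         # someone decided likes matter,
                + 5.0 * post["shares"]      # that shares matter more,
                - 0.1 * post["age_hours"])  # and that old posts matter less.
    return sorted(posts, key=score, reverse=True)

feed = [
    {"id": "a", "likes": 10, "shares": 0, "age_hours": 1},
    {"id": "b", "likes": 0, "shares": 5, "age_hours": 1},
]
ranked = rank_posts(feed)
```

Change one hard-coded number and a different set of posts rises to the top; the ‘malfunction’ was always somebody’s choice.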

On a related point, and going on to a discussion I had at transmediale: while I was away, I was reading Illah Reza Nourbakhsh’s Robot Futures (which is excellent), and this came up in his discussion of robotics, drone warfare and accountability:

‘In such a scenario [as drone piloting], accountability for the consequences extends among an operational team far removed from the consequences of their decisions, very limited in their understandings of the perceptual limitations of a complex robot, with a command hierarchy similarly uneducated about the perceptual and control failure modes of the technology they have embraced.’ p.102

Here Nourbakhsh briefly explodes the steps by which individual parts of the system move away from each other, and from accountability for consequences. This can be applied to the way we see computational systems and culture, where there’s a supposedly clear distinction between those who make and operate computational technologies and those who feel the effects of their decisions. Both seem to misunderstand each other, often to devastating effect, which is where we picked up at the post-Unmonastery discussion in Berlin.

One of the questions I asked at transmediale, during a great Unmonastery session, after countless audience members at the festival had asked ‘Is it because you’re an artist that you’re afforded the license to explore/prod/provoke technologies?’, was where engineers are subverting their systems themselves, and why we can’t see them doing so. Where are they testing the limits and consequences of their work? Bound up in corporate structures, the only ‘play’ or expression seen within a certain sector of engineering (to generalise: those who make the systems we feel the brunt of, the Silicon Valley set and beyond) is the play that ultimately lands said company with a product or idea. Testing the system to its limit and making parodic or speculative products, just as we see in the art sphere, are all ways to stoke the imagination of a worker who will go on to create better stuff, not critically analyse the consequences of their actions. Play, in this sense, requires very specialised infrastructure, both physical and institutional. I’m sure you’ve all heard of the ball pits, and Wendy houses, and huge slides, and endless access to new and exciting technology that reside in these mini-cities in California and elsewhere. I’m careful not to put the conventionally defined ‘hacker’ in this conversation because, as is quite clear, they are a different classification of engineer to those occupying ‘Silicon’ spaces.

Talking to engineer James Lewis at transmediale about this, he spoke of the ‘vacuum’ that exists around engineering: there is a culture there, but there’s no way for engineers to access, or make sense of, the social context of engineering, because of the problems in translation. Artists often end up using engineers as tools to realise projects, and engineers sometimes don’t understand the necessity for projects that don’t end in something usable. I’m careful not to say ‘not all artists/engineers’, but there’s still a reason why we aren’t understanding each other. I can’t remember the name of the curator or artist in our discussion who defined the difference between art (for its own end) and design/engineering (to be instrumentalised and industrialised), but it was useful in getting down to the foundations of this problem.

As an aside (albeit a very interesting one that deserves more than an aside), James also explained a model he’s been working on that looks at a potential triad of Hacker, Social Activist and Artist. Hackers (I’m aware this term is problematic, don’t worry) test and subvert systems to gain something, breaking a system to react against it, and have the capacity to use these skills for social good, which is where the non-technical Social Activist comes in. However, there are problems in communicating the social impact, implications, and possibilities, which is where the Artist would come in. It would be interesting to see where this conversation goes, and what its possibilities are.

This is well over 1,000 words of legible rant that I’ve had over the weekend, and part of this is me working all of this stuff out. If you have comments, find me on Twitter, or email me directly.

If you’ve read to the end, well done. I’ve just found out I’m talking at Theorizing The Web this year so I’ll be in New York in April. If you’d like me to talk, or run a workshop, or just hang out and have coffee, get in touch.

The Babadook, Intervention Technology, and Designing With, Not At.


Warning: A few light spoilers for The Babadook, but I’ve tried really hard not to ruin it too much.

In Jennifer Kent’s rather remarkably terrifying film, The Babadook, a mother and son slowly regress into their house, away from school and family, held hostage (psychologically, emotionally) by a monster that has leapt straight out of a children’s picture book. An analogy for grief, and its subsequent consumption, the monster, the Babadook, lingers in the house, growing stronger with each denial, more fearsome with each rebuttal. Grief can turn a home into a dark, inhospitable place, left open for monsters to occupy every cupboard, every shadow, every dark and stormy night. As the small, compacted family are forced inside their house, the outside world is less visible, because the world has almost definitely forgotten about them. For many, their grief is too much to deal with, and it’s this loss of control that is the most frightening part of the narrative.

As a film that looks at how paranoia, grief, and desperation manifest, The Babadook is excellent: there is no unnecessary gore or jump-scare tactics (as many reviews have highlighted, with a collective sense of relief), but rather a slow unravelling that ultimately leads to a rather visceral, human climax, with a monster, of various kinds, overwhelming the house. This isn’t a film about the supernatural at all; it’s about people.

At the heart of The Babadook is a focus on the things that we don’t talk about, or don’t see. Kent’s film is rather sympathetic to the social services, but not to the unhelpful schools that misread Samuel’s eccentric behaviour as disobedience rather than brightness (a story familiar to many), causing the family to be further isolated from the world around them. Add to that a sister more concerned with how she’s perceived socially than with caring for her overworked, emotionally fraught sister, and you’ve got a recipe for disaster. Aside from thinking about how grief can be the monster in the basement, The Babadook looks at the people who do fall through the cracks, into the basements of the world, never to be seen again. How we see these people, and how we find ways to reach them, is as important a question as it’s ever been, and one that we will eventually, potentially, want to use technology to solve.

Recently there’s been a lot of controversy around, and criticism of, the Samaritans’ app, Radar, and rightfully so, as the app’s core function is to monitor the people in your Twitter timeline for key trigger words and phrases and, on identifying them, alert you that a person might be in trouble. The conversation surrounding this has raised necessary discussions about what intervention is, where technology should, and shouldn’t, be used, and a whole multitude of issues of consent, surveillance, and online communities. Radar is intrusive, and although it essentially enables peer-to-peer surveillance and gross misunderstanding, there’s a reason why Samaritans thought it should exist. They were trying to do a good thing, in a bad, misguided way. Stavvers wrote an excellent summation of why she wants it pulled, for the very specific, very important point that not everyone will have your best interests at heart: ‘not everyone is going to be operating from a position of good faith’. When you’re down, that’s when your enemies kick the hardest, and by accidentally facilitating this, Radar hands power to those who know exactly how to exploit it.

The Samaritans, who do a spectacular job, saw technology as a way of stepping in before something bad happened, because sometimes it’s hard to tell the warning signs. As I mentioned earlier, in The Babadook, Amelia’s family are barely there, and when they are, they are too wrapped up in the apparent complexities of their own mediocre dilemmas to even notice, or take action, when a cry is heard. So, rather clumsily, Radar is trying to respond to that need to intervene more quickly, more effectively, albeit with an added dose of emotional distance and obligation (see Joanne McNeil’s piece on Facebook adding a layer of obligation to our everyday interactions for an interesting perspective on this). Like many IoT devices, this interaction runs the risk of being yet another thing that we can ignore, or get bored of.

Radar, as a means of intervention, is a key example of techno-solutionism: not designing with people, but rather at them, without fully anticipating the full set of consequences or problems that could arise. As Stavvers mentioned, it is almost perfectly designed for trolling. In Dan Lockton’s excellent essay ‘As We May Understand’, he stresses an important problem with IoT innovation: that we are seeking to correct, or change, behaviour, rather than working with people to know what they actually want; designing with, not for:

‘People (‘the public’) are so often seen as targets to have behaviour change ‘done to them’, rather than being included in the design process. This means that the design ‘interventions’ developed end up being designed for a stereotyped, fictional model of the public rather than the nuanced reality.’

For devices that do intervene, or help, Dan suggests ‘helpful ghosts’: stone tapes that provide ‘ambient peer support’ through pre-recorded messages, triggered by a specific series of parameters, thrown carefully into a customised recipe using something like If This Then That. On the surface this sounds vaguely like Radar, but there’s a hope in it: that it is tailored to the individual, by another individual, and isn’t invasive or awful. The Anti-Clippy (as Dan points out), if you will.
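Mechanically, such a ‘helpful ghost’ is no more exotic than an If This Then That-style recipe. A hypothetical sketch, where the triggers and messages are invented, and would be written by someone who knows you, not inferred by a platform:

```python
# A 'helpful ghost' as bare trigger/message recipes: a person who knows
# you records the messages and picks the conditions; nothing is scraped,
# guessed, or inferred from your behaviour.
recipes = [
    # (condition over today's readings, pre-recorded message)
    (lambda day: day["front_door_openings"] == 0 and day["hour"] >= 18,
     "Haven't seen you out today -- fancy a walk before it gets dark?"),
    (lambda day: day["kettle_boils"] >= 8,
     "That's a lot of tea. Put your feet up for ten minutes."),
]

def ambient_messages(day):
    """Return every pre-recorded message whose trigger fires today."""
    return [message for condition, message in recipes if condition(day)]

today = {"front_door_openings": 0, "hour": 19, "kettle_boils": 2}
fired = ambient_messages(today)
```

The whole design fits in a dozen lines, which is rather the point: the care is in who writes the recipes, not in the machinery.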

As we look for more ways to make our daily lives convenient through connected objects, we’re almost certainly going to see more in the way of intervention devices. They already exist; take Glow, for example (thanks to Meg Rothstein for alerting me to this), a pregnancy app that allows the pregnant person to add their partner to the app and, using the recorded data, sends the partner prompts to make the pregnancy more comfortable. ‘Your partner has recorded that they aren’t feeling too good today, why don’t you get them some flowers?’ What the app doesn’t know (because it doesn’t ask) is that your partner might hate flowers, or be allergic, and might just need a hug, or to be left alone. This layer of obligation prompts you to be kind in a way that will never feel genuine, because there’s not enough of your relationship there; it’s not personal, or made with you.

What does the future look like if everything becomes an obligation? Where can we look beyond, to where intervention could be helpful? One of my immediate thoughts was applying this to older people, who might not have left the house in a while, where a gentle nudge to a nearby designated person can prompt them to drop them a line. In The Babadook, this comes in the form of a friendly neighbour, who, although at first pushed away, is eventually let in, and in some ways becomes the subtle, ongoing hero of the story, beyond the close of the film. It’s certainly a level of obligation, but at least it isn’t masquerading as something else (such as romance, with Glow, or suicide prevention, in the case of Radar), because we do need reminding sometimes. As always, comments welcome.

Glitch = Ghost: Poking at Paranormal Technology.

A screenshot of Digital Dowsing’s SLS Camera System, using Kinect.

It’s Hallowe’en and I haven’t posted in a while, so I thought I’d just type up all of the weird knowledge on a particularly strange collection of technology that I’ve been holding in my head for a while. It’s not neat, or particularly pretty, but it’s something that I’m going to unceremoniously call a brain-dump, because that’s almost certainly what it is.

We’ve already seen a lot of talk lately around haunted machines and homes, the ghosts that are summoned from the network, and as Tobias Revell has mentioned in his talk at this year’s Web Directions conference, the point where ‘any sufficiently advanced hacking is indistinguishable from a haunting’. So, to turn slightly away from that, I wanted to look at something a little less serious, where devices are invented and hacked to reach slightly beyond our measures and limits of perception, and to look briefly at where, and why, that innovation happens.

For the past year or so I’ve been slightly obsessed with television shows about Paranormal Investigation (Ghost Hunters, Ghost Adventures, even the worst incarnation, Most Haunted), largely because of the array of technological devices that are constantly, conveyor-belt fashion, pulled out of the investigators’ bags, all of which are assumed to provide evidence of contact from The Great Beyond. Using technology that is either appropriated, or made by self-professed inventors, these objects hold enormous power in the community as the science behind the spectre, with various communities and R&D units popping up in the urgency to collect better, more substantial evidence of their beliefs.

The first of these technologies, which I’ll quickly mention, is not so much an adaptation of a device itself but rather a specific interpretation of the recordings the device picks up: communication in the static, otherwise known as Electronic Voice Phenomena. Audio is recorded using a handheld dictaphone, the gain is driven up, noise is cut out, and voices are found in the ether. This is essentially an exercise in the effects of pareidolia (pattern recognition) and apophenia (making connections and finding meaning in random stimuli or messages), and the results are nearly always presented with subtitles, subjectively translated by the investigator so you don’t have room to make a decision on what you’ve just heard. Investigators often say that learning to recognise EVPs is like learning a new language, the tuning of their ears a skill that takes time, and commitment, to develop. What the noises actually are is eventually uncovered through scrutiny, and tells us more about our need to make connections, and to find hope and meaning in an existence beyond our own perceptible environment. As psychologist James Alcock has written, EVPs are essentially “the products of hope and expectation; the claims wither away under the light of scientific scrutiny.”
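To make the mechanics of that ‘gain up, noise out’ step concrete, here’s a toy sketch (not any investigator’s actual tool, and the numbers are invented) of what happens to a quiet recording when you amplify it hard and then gate away everything below a threshold: almost nothing survives, and what does survive is clipped into stark, ambiguous fragments, which is exactly the kind of material pareidolia thrives on.

```python
def evp_style_process(samples, gain=8.0, gate=0.2):
    """Amplify audio samples, clip to [-1, 1], and zero anything below the gate."""
    processed = []
    for s in samples:
        boosted = max(-1.0, min(1.0, s * gain))  # drive the gain up, then clip
        # the noise gate: anything quieter than the threshold is silenced
        processed.append(boosted if abs(boosted) >= gate else 0.0)
    return processed

# a stretch of quiet hiss with two slightly louder blips buried in it
quiet_hiss = [0.01, -0.02, 0.3, 0.01, -0.25, 0.005]
print(evp_style_process(quiet_hiss))
```

The two blips come out as full-scale spikes and everything else as silence; a listener primed to hear words will supply the rest.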

There’s a lot on recorded EVPs, so I won’t go much further into it, but have a quick look at the other devices that have floated to the surface from the hours I’ve spent watching television shows and online videos. The descriptions of these are largely taken from GhostStop, one of the biggest online distributors of paranormal devices in the world, ‘designed and built by investigators’, and from the inventors’ websites.

Ovilus X: Conceived by Bill Chappell of Digital Dowsing, the Ovilus device converts environmental readings into words and phonetic responses to questions asked by an investigator. Theories suggest that spirits and other paranormal entities may be able to alter the environment using such resources available to them as manipulating electromagnetic frequencies and temperature. The Ovilus uses these frequencies to choose a response from a preset database of over 2,000 words. Essentially, an intelligent entity will be able to alter the environment in such a way that forces the Ovilus to “speak” an appropriate, relevant response. Video of Ovilus X here.
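The device’s firmware is proprietary, but the mechanism its marketing describes is simple enough to sketch: a pair of fluctuating sensor readings is reduced to an index into a fixed word list. Everything here is invented for illustration (the word list, the scaling, the hash), but it shows why any environmental jitter at all will produce a ‘relevant’ word; the relevance is supplied by the listener.

```python
# stand-in for the ~2,000-word preset database the Ovilus ships with
WORDS = ["hello", "cold", "leave", "light", "door"]

def ovilus_word(emf_milligauss, temperature_c):
    """Hash two environmental readings into a word from the preset database."""
    # combine the two readings into one integer; any sensor jitter
    # changes the result, so the device always has something to "say"
    reading = int(emf_milligauss * 100) ^ int(temperature_c * 10)
    return WORDS[reading % len(WORDS)]

print(ovilus_word(0.42, 18.5))  # any fluctuation in the room changes the word
```

The point of the sketch is that the word selection is deterministic noise-mapping: there is no step at which intent could enter, only a step at which interpretation does.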

SB7 Spirit Box: The B-PSB7 Spirit Box is a tool for attempting communication with alleged paranormal entities. It uses radio frequency sweeps (AM/FM) to generate white noise which theories suggest give some entities the energy they need to be heard. When this occurs you will sometimes hear voices or sounds coming through the static in an attempt to communicate. Video example from an investigator, here.

Mel Meter: Created by Gary Galka, and named after his deceased daughter, the Mel Meter is an all-in-one paranormal instrument that detects EMF and ambient temperature, with an attached red flashlight and an EMF-radiating antenna. In addition to detecting AC/DC EMF and temperature changes in the environment, a Mel Meter also uses a mini telescopic antenna to radiate its own independent magnetic field around the instrument. This EM field can be easily influenced by materials and objects that conduct electricity. Video introduction by inventor Gary Galka, here (beware, a bit vomit inducing).

SLS ‘Structured Light Sensor’ Camera System: Another Digital Dowsing invention, which currently isn’t for sale anywhere, it uses Kinect’s infrared capabilities alongside temperature sensors to interpret environments and pick up figures that aren’t visually perceivable, turning them into stick figures that we can see. Video here of a figure picked up by the device on Travel Channel’s Ghost Adventures (you only need to watch the first five minutes).

This is a small sample of the range of products available for paranormal investigation, and all of them survive in a universe which only listens to, and applies, certain rules. To listen to outside evidence, to hear that EVPs are psychological rather than paranormal phenomena, to know that so often the voices are nothing more than a recorder recording itself, takes on water that will eventually sink the ship that the community works so hard to keep afloat. To be disproven disbands the community, with all of its supporting infrastructure. A friend of mine once mentioned this adamant ignorance was a Jungian concept, so if anyone has the reference, I’d be much obliged. There’s a level of debunking within the community, but it happens in a very controlled, very isolated way.

Debunking happens at an extremely narrow focus, with experts and consultants brought in from within the universe this particular science exists within. Of course, this is nothing new, as most fundamentalist religion operates in this way, as do other systems, but this all happens under a particularly tight, and recognisable, set of investigative, pseudo-scientific criteria. Investigators will have a control, a set of statistical boundaries, procedures to replicate or reverse engineer a result, and a set code of conduct in order to stop contamination of sound from the participants. Some even have an ethical framework prior to, and after, investigation, which requires debriefing and offering support to the owners, or occupiers, of the supposedly haunted space (Ghost Hunters are the only televised example of this that I’ve come across). There are academic bodies, most famously the Society for Psychical Research, whose tagline on their website is a quote from C.G. Jung: “I shall not commit the fashionable stupidity of regarding everything I cannot explain as a fraud.”

The pseudo-empirical basis that ghosts leave material shadows, or traces, upon the environment, on which so much of this technology is founded, ultimately tells us more about our own relationship to technology, and the misunderstanding of malfunction and glitches, than anything to suggest that these particular, human, ghosts appear. It suggests others, created and maintained by machines, that we don’t often account for when we make things beyond an initial round of debugging and fine-tuning. The things we create will create other worlds that we only see, or realise, when they haunt us, something that, as I mentioned earlier, is coming further into the foreground as we look at these breakages in the technology we invent.

In a post by Janny Li for Sound Ethnography’s blog, she concludes her time with a set of investigators by summing up the emotional investment that this innovation accommodates: ‘But to ask if ghosts are real is to miss the point of how ghosts are made real by paranormal researchers and how their efforts might provide some insight on the ways in which many Americans think about the life and death, belief and evidence, science and the supernatural.’ Finding things to listen to can be just as powerful as listening to the things we find.