This is a transcript of a talk delivered at the MANCEPT Workshops 2017 panel “Bio-Hackers, Home Made Cyborgs and Body Modifications: A New Frontier for Ethics and Policy” in September 2017.
In the past fortnight you may have seen headlines like the BBC’s “Cyber-flaw Affects 745,000 Pacemakers”. (“Cyber-flaw” translates as “software vulnerability”.) It’s not the first time: in fact, the same manufacturer was in the news around the start of the year after patching a different set of vulnerabilities. Warnings by security researchers go back to 2008. Back in 2015 this was still a fresh and tentative idea for the public at large: “Could Hackers Break My Heart Via My Pacemaker?” A year later the headlines were getting blunter: “Yes, Pacemakers Can Get Hacked”. “Thousands of Pacemakers and Defibrillators At Risk of Hacking”.
There’s one headline from this year which caught my eye as particularly germane to a workshop about biohacking in the other sense of hacking: not hacking into someone else’s system to subvert it, but hacks to take more effective control of one’s own. It’s from an article introduced as follows: “Why can’t Karen Sandler get the source code for her pacemaker? Regulators are waking up to the dangers of hackable medical devices, but one cyborg lawyer still hasn’t seen her device’s code.”

“Cyborg lawyer” is how Sandler introduced herself in a talk at the 2011 Open Source Convention, in which she described the experience of trying to find out who gets to review the software controlling the pacemaker that keeps her heart going, and of finding out that hardly anyone got to see it, and certainly not herself. The upshot was that one of her slides from the talk reads, “I don’t have freedom in my own body”. What proved to be disturbing about becoming a cyborg wasn’t the union of flesh and metal; it was awareness that software has bugs, and software gets hacked (in the intrusive sense), and this software was proprietary and indeed secret.
Sandler wasn’t particularly trying to hack her pacemaker’s software in a creative sense: she wasn’t trying to make her heart beat in Morse Code or anything like that. She wanted her pacemaker to run reliably, get any bugs found and fixed, not have its security compromised, and do its job of keeping her alive.
Whether that counts as biohacking depends on which distinctions you treat as important when you set up your terminology. It’s not obviously very close to people implanting magnets in themselves, whether because it’s cool or because they have commitments to practical transhumanism.
If you go online and look at the biohack.me wiki, you get told that grinders are “passionate individuals” who are motivated to do practical, immediate things to themselves in order to bring the transhumanist future that little bit closer. You also get told that there exists a DIY biology movement which has only limited overlap with the grinder movement.
If you then look at what Alessandro Delfanti writes about under the header of biohacking, he principally links it to the DIYbio movement. Open science. “Rebel science”, even. At any rate, definitely science. He construes biohacking through his concerns about privatisation of scientific knowledge. He writes that biohackers’ struggles against Big Bio priesthoods are “a challenge against the current distribution of power among science’s institutions”. It’s still not the same as wondering whether you can trust your pacemaker, but there’s a shared interest in open, non-proprietary knowledge about how things work.
Indeed, one of the things Delfanti does is explicitly link biohacking to wider movements—to open access, to free culture—and in particular to draw on what’s known as the hacker ethic, which emerged predominantly among computer enthusiasts. There isn’t really a single canonical formulation, but there are certain recurrent themes. Freely shared knowledge and open exploration are good; locked-down, proprietary information is dubious. Skill and accomplishments are good; credentials are otherwise very ignorable. And mistrusting authority is often wise, since it’s authority that does the locking down.
The term “hacker ethic” is commonly credited to a journalist called Steven Levy, who wrote a book in the Eighties about the development of a hacker culture with roots notably in the M.I.T. of the Fifties and Sixties.
(Levy’s formulation of the hacker ethic doesn’t quite map onto how philosophers typically delineate ethics: a statement like “You can create art and beauty on a computer” looks not only (1) uncontroversial nowadays but also (2) like a non-normative statement we’d typically file under aesthetics.)
It’s not a philosophical term of art, although a philosopher named Pekka Himanen wrote a book in which he presents the hacker ethic as “a new work ethic that challenges the attitude towards work that has held us in its thrall for so long, the Protestant work ethic” (The Hacker Ethic and the Spirit of the Information Age, Random House, 2001, p. ix).
I’m not particularly concerned here with how the hacker ethic relates to any work ethic or economic system, except insofar as it concerns ownership. You recall Sandler saying, “I don’t have freedom in my own body”.
In some ways that brings us to something quite unlike the original hacker ethic of 1960s M.I.T., because going back there means looking back to a time before the smartphone era, even before the personal computer era, when your archetypal computer was a mainframe at a university or a big company, with multiple users connecting from terminal clients. Nowadays it’s normal to own a computer. You carry a computer around with you and mediate your social life through it. If you squint from the right angle, having an account on a privately owned social networking site can look a bit like having an account on a Sixties mainframe, but the roles they play in our social existence are very different.
That means some of the fuller-blooded ideas about sharing and openness and accessibility don’t straightforwardly gel with a world in which the word “oversharing” has been invented. Some classic accounts of the hacker ethic have had a quite anarchic or even anarchistic flavour, notably where access to computers and information is concerned. Accessibility of a university mainframe connotes something very different from accessibility of one’s ’phone. The ’phone is personal, it’s private.
So it’s a bit unfortunate that there’s a tendency to talk about the hacker ethic, as though it had been written on tablets of stone. It’s possible to end up talking at cross purposes. If you pick up, say, Gabriella Coleman’s book on the ethics and aesthetics of hacking, you’re going to be told that hacker culture is basically part of the liberal tradition of thought. If you come to it via, say, cyberpunk fiction, you’re going to get a rather different and more cynical point of view.
Hacker culture has certainly not been unchanging. It used to be the case that if you were into technology you were an enthusiast, an early adopter. You followed technology news to learn about exciting new things. Nowadays it’s more like following the weather forecast: you’re hoping for sunshine but it’s the nasty weather you’re really looking out for.
So, for example, there was the story about the self-cleaning kitty litter tray. You know how printers tend to be sold on the razor-and-blades model: the printer is quite cheap and the money is made by selling ink cartridges, even though they’re just there to contain ink? Printer manufacturers tend to be old hands at engineering their printers and cartridges to stop you simply refilling the cartridge, or buying a cheaper cartridge from another company. Some have even been known to region-lock printers to stop you buying imported ink from a region with cheaper wholesale prices. So the manufacturers of this fancy kitty litter tray looked at that business model, and decided they were going to lock owners into buying special cartridges of their own cleaning solution, with the result that people began to share methods for hacking their kitty litter trays so they could refill the cartridges with alternative cleaning solutions if they wanted to.
There was a similar case involving coffee machines. They wouldn’t work with whatever coffee you wanted. You were supposed to use the coffee grounds supplied by the manufacturer, in special containers. Each container was sealed with foil, and the machine had an optical scanner to check for an official stamp on the foil. Of course you were meant to throw the foil away, but the simplest hack to load unauthorised coffee involved just keeping it for reuse and shoving it under the scanner again.
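Just to make the shape of that check concrete, here is a toy sketch in C (the names and the logic are my own invention for illustration, not the machine’s actual firmware) of a gate that asks only whether an official stamp is visible at this moment:

```c
/* Hypothetical sketch of a "check the foil lid" gate; not real firmware.
 * The machine brews only if the optical scanner reports an official stamp. */
#include <stdbool.h>
#include <stdio.h>

/* Stand-in for the optical scanner; a real machine would query hardware. */
static bool stamp_under_scanner = false;

static bool scanner_sees_official_stamp(void)
{
    return stamp_under_scanner;
}

static void on_brew_button(void)
{
    if (scanner_sees_official_stamp())
        puts("Brewing: pod accepted.");
    else
        puts("Refusing: no official stamp detected.");
}

int main(void)
{
    on_brew_button();            /* unauthorised pod: refused */
    stamp_under_scanner = true;  /* hold a saved foil lid under the scanner */
    on_brew_button();            /* the same stamp again: accepted */
    return 0;
}
```

The weakness is plain from the structure: the machine has no way to tell a fresh lid from a saved one, so replaying the same stamp passes the check every time.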
Another case—this one is loosely related to biohacking—involved a mask which was supposed to improve the health of your skin by shining light from coloured LEDs onto it. Whether or not that actually worked, it wouldn’t work for long, because a chip inside the device would count the number of uses and stop it working after thirty, at which point you’re supposed to throw it away and buy a new one. So of course somebody worked out how to disable the chip and keep using the mask.
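Again, purely as a hypothetical sketch (my guess at the general shape of such a lock-out, not the product’s real code), nothing cleverer is required than a persistent counter and a threshold:

```c
/* Hypothetical sketch of a use-counting lock-out; not the real device.
 * A counter increments once per session; at the limit the device refuses
 * to switch on, even though the hardware still works. */
#include <stdio.h>

#define MAX_USES 30

static unsigned uses_so_far = 0;  /* would live in non-volatile memory */

static int try_start_session(void)
{
    if (uses_so_far >= MAX_USES)
        return 0;   /* lifetime "exhausted": refuse to run */
    uses_so_far++;
    return 1;       /* LEDs on for this session */
}

int main(void)
{
    for (int session = 1; session <= 32; session++)
        printf("session %2d: %s\n", session,
               try_start_session() ? "runs" : "refuses (buy a new one)");
    return 0;
}
```

The hardware is exactly as capable on use thirty-one as on use one; all that has changed is a stored number, which is why the hack amounts to stopping the chip from ever reaching its threshold.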
So there’s modern hacker culture for you: it’s not about access to the M.I.T. mainframe, it’s about access to your own property.
Of course with digital goods this stuff has been going on for a long time, and falls under the heading of DRM, for Digital Rights Management, or Digital Restrictions Management to its critics. It’s supposed to stop you doing what copyright law forbids anyway; but what do you do if a DRM program stops you doing what you do have the legal right to do? Well naturally you’ll want to disable the misbehaving software—except that doing so may put you at odds with national laws descending from the WIPO Copyright Treaty. As you can imagine, when stuff gets locked down like that it comes right into conflict with the values of the hacker ethic. In fact it’s sheer HAL 9000: you tell your computer to do something and it says it can’t allow you to do that.
When the stories aren’t about those exciting new business models, they’re about whether apps on your ’phone are spying on you, or your smart TV is spying on you, or your children’s voice-activated toys are spying on you, or your smart thermostat is spying on you. It’s basically common knowledge that online advertising networks track you, which is one of the main reasons for using adblockers (protection from malware and from sheer annoyance being the other two). There were even headlines recently about whether your Roomba might start selling floor plans of your house to interested parties.
So that’s where we are today, with technology sometimes seeming rather like the One Ring: i.e. it’s useful but it serves some other master and so it can’t be trusted. And here we are to talk about introducing technology into one’s body...
Biomodification is the broadly neutral term; bioenhancement obviously less so, but it still basically leaves it up to you to plug in your favoured conception of what does and doesn’t constitute enhancement. Biohacking connotes a particular kind of ethos, a certain kind of attitude and outlook which contrasts with others.
There’s a point sometimes made, for example, about rights, that you claim a right to something if you have concerns about lacking it. If you arrive in some country and you’re told, here we respect your right not to have your fingers chopped off—well, there’s certainly worse news you could hear, but you might start to think, why do they have such a specific right? Am I in some danger of getting my fingers chopped off?
In similar fashion, to understand the development of hacker culture and a hacker ethic, and how terms like “software freedom” entered usage, you need a sense of what these values are opposing. That doesn’t, of course, mean every hacker is a “hacktivist” who espouses the same creed as the Free Software Foundation or the Electronic Frontier Foundation. A hacker has a zeal and a knack for hacking: that’s the basic cultural trait. It typically becomes a politicised trait when its values and priorities come into conflict with someone else’s, whether the formative experience is discovering pacemakers aren’t open to inspection, or trying to play a computer game without an Internet connection and being told the DRM system won’t let you.
Of course you don’t need to have any links to hacker culture to find it unwelcome when your computer refuses to do what you want. However, the fuller your sense of, and pride in, being the local god of your machine, the more keenly you’re going to notice when the machine starts serving Mammon instead of you.
If your cultural reference points are like mine, the word “cyborg” probably conjures up, say, characters in Mamoru Oshii’s anime adaptation of Ghost in the Shell reflecting on how the government owns their mechanical bodies, or the bit in the first Deus Ex game where your character’s told he has a remotely activated killswitch inside him. (Of course, this being an academic workshop, it’s possible your cyborg touchstones are from Donna Haraway’s Cyborg Manifesto, or critical deaf studies, or something, and I’ve revealed myself to be terribly lowbrow in comparison, but then I’m sure you’re all far too polite to tell me that.) So what the word evokes is not just the standard question of what it means for one’s humanity to become partly mechanical—and people with pacemakers don’t seem to have any trouble with their humanity—but a quandary about ownership and control and ultimately autonomy.
We think of ourselves as the possessors of our own bodies. Philosophically, that strikes the familiar notes: Locke on how we have property in our own persons and so forth. I’m not sure things have looked quite so clear-cut to lawyers: certainly Web searches for “human genome patent” or “tattoo copyright” produce rather a lot of results. But when a cyborg lawyer says she doesn’t have freedom in her own body, that resonates, because each of us has a body that’s uniquely, intimately one’s own. It’s how we find ourselves embodied in a world, living among other embodied human beings.
I came up—on something of a whim—with the title “Hacker Ethics for Cyborg Appliances” because we lump technological hardware into various categories—gadgets and gizmos and so forth—and the body reminds me particularly of a fridge: i.e. what people basically want from theirs is that it should keep on working reliably, but it’s also popular to decorate them, and some people even scribble notes on them. Fridges are also among the many kinds of hardware that are increasingly getting computerised, although when Samsung launched its “smart” fridge last November the asking price was about 3000 quid, so Internet-connected fridges may not be coming to a home near you very soon. Still, there’s been at least one case of a fridge being taken over by a botnet and used to send spam e-mails.
So that’s more of a glimpse into a possible future: where that fridge is today, maybe your body will be tomorrow...
The most immediate questions are probably practical: what if, instead of your fridge, it was your pacemaker that got compromised? What if it started scanning nearby wireless networks for vulnerabilities, turning you into a walking malware vector? Would you have a moral responsibility to quarantine yourself? Could people reasonably ask you to? What legal liabilities are involved?
It’s a scary prospect for network security, an even scarier one for bodily autonomy—and it underscores why you might really want to be sure you could get access to the pacemaker’s source code and modify it according to your needs. And so my suggestion, on that cheerful note, is that we should be interested in the ethos and the ethics of hacker culture on the grounds that, when the human body potentially starts to become one more computerised appliance, the most autonomous thing a person can do may be to hack the appliance and creatively explore what can be done with it.