Empathy and neuroethics

Let’s talk about our future brains. And bodies, really.

Credit: Wyss Center

I’m talking about safeguarding ourselves as the line between human and machine starts to blur. I know it sounds like science fiction, but we won’t be able to say that much longer. Brain-computer interfaces are already being designed for use in everything from gaming to medicine, and novelty technology already lets you move stuff with your mind. It’s actually pretty cool, but I’d be lying if I said it didn’t also creep me out just a little bit. I’m excited about all of the new things we’ll be able to do with this technology, but every innovation has tradeoffs, and the biggest one is usually privacy. It’s weird enough that Facebook knows what I’ve been doing on other websites (and sometimes even seems to know what I’ve been talking about, out loud near my phone). Some of the medical interventions that use brain-computer interfaces go way deeper than that, and they raise a lot of ethical questions. (They also make me sort of wish I had Professor Snape around to teach me Occlumency…)

Thankfully, people who understand this a whole lot better than I do are researching how to make sure ethics evolve along with this kind of technology. And that’s where empathy ties into all of this. I often talk in this newsletter about how we experience empathy ourselves – how it works in our brains, and how different kinds of technology affect it. But I think we need to start paying more attention to whether the people creating the world around us – and the world of our future – are empathizing with us. Especially when it comes to medical technology. At the Wyss Center for Bio and Neuroengineering in Geneva, Switzerland, at least, this is on researchers’ minds. Professor John Donoghue, the center’s director, recently co-authored an article saying, in part:

“Although we still don’t fully understand how the brain works, we are moving closer to being able to reliably decode certain brain signals. We shouldn’t be complacent about what this could mean for society.”

His biggest concern: “brainjacking.” Especially when it comes to semi-autonomous robots used to help restore movement or communication to people who have been paralyzed. Those kinds of machines don’t seem like the most obvious target for hackers, Donoghue admits, but what if the patient is a politician, for example? Donoghue and his colleagues called for new ethical guidelines for people working on semi-autonomous robots, especially in the medical field, where they could be used for all kinds of neurological therapies.

There’s also the issue of data privacy, just like the one we already contend with when we use things like Facebook and even Fitbit. Except if you’re using a robot to help you regain memories of your life and family, the questions and data that robot has will likely be a lot more sensitive than your vacation photos and step counts. Donoghue recommends data encryption and network security guidelines like those already used in clinical studies.


Credit: Wyss Center

And what if a semi-autonomous robot that’s supposed to be helping you ends up hurting you? Who’s at fault? These questions are still new (though we’ve been watching movies about them for years), but the time we have to figure them out may be shorter than we think.

“We don’t want to overstate the risks nor build false hope for those who could benefit from neurotechnology,” Donoghue said. “Our aim is to ensure that appropriate legislation keeps pace with this rapidly progressing field.”

I’ve spoken to a lot of people who feel like the same care was not taken when it came to the “rapidly progressing field” of internet communication, especially social media. How might things have been different if Twitter’s architects had thought harder about the harmful and dangerous ways people might use their tool, and what it might feel like to be on the receiving end of that harm and danger? Neuroprosthetics like so-called brain-training tools are obviously a lot more intimate than Twitter, which means the same principle should apply even more strictly. At least we know the Swiss are working on it.

If you’d like to read Donoghue’s article on this topic, “Help, hope, and hype: Ethical dimensions of neuroprosthetics,” you can do so here.