Instagram, Nextdoor, and “Be Nice” Nudges

One of the first pieces of empathy-building tech* I wrote about was an algorithm built to recognize when comments on a newspaper story went off the rails. It was a tough story to place because it was hard to understand and even harder to explain. (I’m forever grateful for good editors!) The gist was that a group of researchers wanted to see if they could cultivate an environment in the comment section of a controversial story that would facilitate good, productive conversation. Their work eventually turned into Faciloscope, a tool aimed at detecting trolling behaviors and mediating them.

Like many research projects, it’s kind of hard to tell what happened after the initial buzz – grants change, people move, tech evolves, etc. All’s been pretty quiet on the automated comment section management front for a while, but over the past few months that’s begun to change. Now we can see similar technology popping up in the apps we use every day.

Photo by Randalyn Hill on Unsplash

Earlier this year, Head of Instagram Adam Mosseri announced that the app would soon have new features to help prevent bullying. The official plan was released yesterday, and it boils down to one new function: Restrict. According to Instagram, “Restrict is designed to empower you to quietly protect your account while still keeping an eye on a bully.” It works by letting you approve Restricted people’s comments on your posts before they appear – and you can delete or ignore them without ever reading them, if you want. You won’t get notifications for these comments, so it’s unclear to me how you’d know they happened unless you went looking for them, which hopefully you aren’t doing, but let’s be honest… we all do that.

Anyway, what about direct messages? DMs from Restricted people will turn into “message requests,” like what already happens when someone you don’t know sends you a message. The sender won’t be able to see if you’ve read their message.
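To make the mechanics concrete, here’s a rough sketch of Restrict’s behavior as Instagram describes it. This is purely illustrative – the function and field names are my invention, not Instagram’s actual code:

```python
# A toy model of the Restrict behavior described above. Hypothetical names
# throughout -- Instagram hasn't published its internals.

from dataclasses import dataclass, field

@dataclass
class Account:
    username: str
    restricted: set = field(default_factory=set)  # usernames this account restricted

def handle_comment(owner: Account, author: str, text: str) -> dict:
    """Comments from restricted users are held for the owner's approval
    and arrive silently, with no notification."""
    pending = author in owner.restricted
    return {
        "author": author,
        "text": text,
        "publicly_visible": not pending,  # hidden until the owner approves
        "notify_owner": not pending,      # no notification for restricted comments
    }

def handle_dm(owner: Account, sender: str, text: str) -> dict:
    """DMs from restricted users land in message requests, with no read receipts."""
    restricted = sender in owner.restricted
    return {
        "sender": sender,
        "text": text,
        "folder": "message_requests" if restricted else "inbox",
        "send_read_receipts": not restricted,
    }

me = Account("me", restricted={"bully123"})
print(handle_comment(me, "bully123", "mean comment"))  # held quietly
print(handle_dm(me, "bully123", "hey"))                # routed to message requests
```

Trivially simple, but the design choice – hold quietly, never notify – is the whole feature.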

Inexplicably, Instagram also used this announcement to tell us about its new “Create Don’t Hate” sticker, as if that’s an anti-bullying feature… when it’s literally just a sticker you can put on your story. So… okay, cool?

I wouldn’t exactly call this empathy-building tech, but I would hear an argument that it’s an example of tech showing empathy for its users, with the usual caveat that this is probably way too little, way too late. It seems like a good thing, don’t get me wrong. It just should have been a thing much sooner.

This won’t have much use for me, because I’ve already unfollowed or blocked the people whose comments I’d least like to see. What I’d really like is a pop-up kind of like what Netflix has, that alerts me after I’ve been scrolling for more than 15 minutes… “Maybe it’s time for a break?” Or the ability to customize a pop-up for when I visit one of my frenemies’ accounts… “Remember why you unfollowed this person??” But I could see it being useful for a teenager who gets bombarded with bullying messages. It’s a start, at least.

Nextdoor, essentially a neighborhood-specific Facebook/Reddit hybrid, did recently release prompts that might encourage empathy. Like all social media platforms, Nextdoor has gained a reputation for fostering nastiness, NIMBYism, and even racism. So it launched a “kindness reminder,” which pops up to let you know if your reply to someone’s comment “looks similar to content that’s been reported in the past” and gives you a chance to re-read the community guidelines and rephrase your comment.
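Nextdoor hasn’t published how that similarity check actually works, but you can imagine a crude version of it. Here’s a minimal, purely hypothetical Python sketch – the word-overlap scoring and every name in it are my assumptions; the real system is presumably a trained model, not something this simple:

```python
# Purely illustrative "looks similar to previously reported content" check.
# NOT Nextdoor's actual approach -- just a toy word-overlap (Jaccard) score
# against a made-up corpus of previously reported comments.

def tokens(text):
    return set(text.lower().split())

def triggers_reminder(draft, reported_corpus, threshold=0.5):
    """True if the draft overlaps heavily with any previously reported comment."""
    draft_toks = tokens(draft)
    for reported in reported_corpus:
        reported_toks = tokens(reported)
        union = draft_toks | reported_toks
        if union and len(draft_toks & reported_toks) / len(union) >= threshold:
            return True
    return False

reported_corpus = ["these people don't belong in our neighborhood"]  # hypothetical

if triggers_reminder("you people don't belong in our neighborhood", reported_corpus):
    print("Your reply looks similar to content that's been reported in the past.")
    print("Want to re-read the community guidelines and rephrase?")
```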

Nextdoor says the feature is meant to “encourage positivity across the Nextdoor platform,” but they also seem to suggest that it will make neighborhoods themselves more kind. They claim that in early tests of the feature, 1 in 5 people chose to edit their comments, “resulting in 20% fewer negative comments” (though it’s not clear to me exactly how they measure negativity). They also claim the Kindness Reminder gets prompted less often over time in areas where it’s been tested.

This, like Instagram’s Restricted feature, is an example of a social media company responding to many, many, many complaints of negative behavior and impact. But in Nextdoor’s case, there at least seems to be more transparency. In their post explaining the new feature, Nextdoor says the company built an advisory panel of experts, including Dr. Jennifer Eberhardt, a social scientist who wrote a book about racial bias. There was apparently a session with some of Eberhardt’s students in which Nextdoor employees (executives? unclear) shared their experiences with bias in their own lives as well as on the platform. So, that’s something. If nothing else, I could imagine the Kindness Reminder at least making me stop for a second before dashing off a snarky comment, something that doesn’t happen as much as it used to but is still an unfortunate possibility for me…

One big question about all of this, of course, is why can’t we just use our internal “kindness reminders”? Most of us do have them, after all. But it’s hard when, as Eberhardt notes in the Nextdoor press release: “the problems that we have out in the world and in society make their way online where you’re encouraged to respond quickly and without thinking.” We can create as many empathy-focused tools as we want, but as long as that’s the case, there will always be more work to do.

 

*When I first started writing about this stuff, the concept seemed new to a lot of people and it seemed obvious that the words “ostensibly” or “supposedly” or “hopefully” were implied. Today, not so much, for good reason: a lot of tech that’s advertised as empathetic seems more invasive or manipulative. So, I hope you will trust me when I say I understand that context, and I think about the phrase “empathy-building tech” as having an asterisk most of the time.

Why empathy & tech, and why now?

In many ways, this feels like a really weird time to be writing a book about empathy.

Especially empathy & technology, the latter of which is broad but also broadly applies only to people with a certain kind of privilege. Access to broadband, to schools with tech resources, to hardware like laptops or phones or iPads. Access to disposable income that can be spent on something like a VR headset (which I admittedly haven’t been able to justify yet myself).

And with all that’s going on in the world, I also think a lot about the targets of empathy crusades.

Who are you trying to build empathy for? Who are you trying to build it in? Why? What are the potential inherent biases there? Who are you othering, intentionally or not? I’m grateful to Sundance’s Kamal Sinclair and documentarian Michele Stephenson, among others, for helping me think through this with their work.

I want this book to further this conversation, and start new ones. But on a lot of recent days, with the weight of what’s happening in the wider world, and the many small heartbreaks in my own circle, I wonder if it matters.

My recent trip to New York to attend part of the Games for Change summit helped me think through a lot of this and gave me some validation that the stories I’m trying to tell do matter, even if it’s not always clear exactly how and why. The thing I keep coming back to: empathy is not endorsement, and empathy is not enough.

Here I am experiencing 1,000 Cut Journey, an immersive VR experience depicting the life of a young black man in America that moved me almost to tears.


I spend a lot of time (always have) wondering if I should be doing something else. But I’m dedicated to finding a way to productively add to this conversation about the future of empathy. Maybe I’ll fail, but at a time when nothing feels like enough, this is what I have to offer, and I have to try.

Driverless empathy

Algorithms and big data affect our lives in so many ways we don’t even see. These things that we tend to believe are there to make our lives easier and more fair also do a lot of damage, from weeding out job applicants based on unfair parameters that ignore context to targeting advertisements based on racial stereotypes. A couple of weeks ago I got to see Cathy O’Neil speak on a panel about her book Weapons of Math Destruction, which is all about this phenomenon. Reading her book, I kept thinking about whether a more explicit focus on empathy on the part of the engineers behind these algorithms might make a difference.

The futurist and game creator Jane McGonigal suggested something similar to me when I spoke to her for this story earlier this year. We talked about Twitter, and how some future-thinking and future-empathizing might have helped avoid some of the nasty problems the platform is facing (and facilitating) right now. But pretty soon Twitter may be the least of our worries. Automation is, by many accounts, the next big, disruptive force, and our problems with algorithms and big data are only going to get bigger as this force expands. One of the most urgent areas of automation that could use an empathy injection? Self-driving cars.


I’ll be honest – until very recently I didn’t give too much thought to self-driving cars as part of this empathy and tech revolution that’s always on my mind. I thought of them as a gadget that may or may not actually be available at scale over the next decade, and that I may or may not ever come in contact with (especially while I live in New York City and don’t drive). But when I listened to the recent Radiolab episode “Driverless Dilemma,” I realized I’d been forgetting that even though humans might not be driving these cars, humans are deeply involved in the creation and maintenance of the tech that controls them. And the decisions those humans make could have life and death consequences.

The “Driverless Dilemma” conversation is wrapped around an old Radiolab episode about the “Trolley Problem,” which asks people to consider whether they’d kill one person to save five in several different scenarios. You can probably imagine some version of this while driving: suddenly there are a bunch of pedestrians in front of you that you’re going to hit unless you swerve, but if you swerve you’ll hit one pedestrian, or possibly kill yourself. As driverless technology becomes more common, cars will be making these split-second decisions. Except it’s not really the cars making the decisions; it’s people making them, probably ahead of time, based on a whole bunch of factors that we can only begin to guess at right now. The Radiolab episode is really thought-provoking and I highly recommend listening to it. But one word that didn’t come up that I think could play a major role in answering these questions going forward is, of course, empathy.

When I talked with Jane McGonigal about Twitter, we discussed what the engineers could have done to put themselves in the shoes of people who might either use their platform for harassment or be harassed by trolls. Perhaps they would then have taken measures to prevent some of the abuse that happens there. One reason that may not have happened is that those engineers didn’t fit into either of those categories, so it didn’t occur to them to imagine those scenarios. Some intentional empathy, like what design firms have been doing for decades (“imagine yourself as the user of this product”) could have gone a long way. This may also be the key when it comes to driverless cars. Except the engineers behind cars’ algorithms will have to consider what it’s like to be the “driver” as well as other actual drivers on the road, cyclists, pedestrians, and any number of others. And they’ll have to imagine thousands of different scenarios. An algorithm that tells the car to swerve and kill the driver to avoid killing five pedestrians won’t cut it. What if there’s also a dog somewhere in the equation? What if it’s raining? What if the pedestrians aren’t in a crosswalk? What if all of the pedestrians are children? What if the “driver” is pregnant? Car manufacturers say these are all bits of data that their driverless cars will eventually be able to gather. But what will they do with them? Can you teach a car context? Can you inject its algorithm with empathy?
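Those questions are rhetorical, but here’s a toy sketch of what a “decision made ahead of time” might look like in code. Nothing here resembles any manufacturer’s real system – every weight is an assumption I invented, which is exactly the point: some human has to pick them:

```python
# A toy trolley-problem trade-off. Every multiplier below is a value
# judgment an engineer would have to encode ahead of time.

from dataclasses import dataclass

@dataclass
class Outcome:
    description: str
    pedestrians_harmed: int
    occupants_harmed: int
    children_involved: bool = False
    raining: bool = False

def harm_score(o: Outcome) -> float:
    """Lower is 'better' -- but whether a child counts double, or rain makes
    swerving riskier, is a choice someone bakes into the algorithm."""
    score = float(o.pedestrians_harmed + o.occupants_harmed)
    if o.children_involved:
        score *= 2.0   # assumption: weight harm to children more heavily
    if o.raining:
        score *= 1.2   # assumption: maneuvers in rain are less likely to succeed
    return score

stay = Outcome("brake in lane, hit pedestrians", pedestrians_harmed=5, occupants_harmed=0)
swerve = Outcome("swerve, harm the occupant", pedestrians_harmed=0, occupants_harmed=1)

print(min([stay, swerve], key=harm_score).description)  # the 'decision' happens here, ahead of time
```

Trivial as it is, the sketch shows where the empathy would have to live: in the weights, and in whoever chooses them.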

Ebola in America

Yesterday, the news broke that the U.S. has its first case of Ebola. Some news organizations seemed more than a little giddy about this, to be honest. Certain cable anchors acted like they had been waiting on pins and needles for this to happen and could barely contain their excitement… And I’m not linking to any stories about it because every one I have seen says the same thing: a patient traveled from West Africa to the U.S., started showing symptoms, went to the hospital, is now isolated, and certain people who came into contact with the patient are being monitored.

It’s what they don’t say that’s frustrating.

Here’s the thing: everything you think you know about Ebola is probably wrong, because, as with many huge news stories, it’s faster, easier, and more clickable to say “EBOLA IS TERRIFYING AND COULD POSSIBLY END THE WORLD” than it is to get into the nuance.

Sometimes this is just annoying. In cases of health epidemics, it’s also extremely irresponsible. This omission of information because of “time constraints” (or more likely a desire for clicks and views) has led millions of Americans to believe that Ebola is something it’s not. If you simply watch cable or network news or read most of the mainstream print media, you will come away with more questions than answers: wait, what exactly is it? How is it spread? What are the symptoms? How long until we all bleed to death?

According to a recent Harvard poll, 39 percent of Americans are worried about a large-scale outbreak in the U.S. and 26 percent are worried they or a family member might get Ebola sometime in the next year. This, according to Harvard, the NIH, the Mayo Clinic, and many, many journalists who have actually been doing the hard work of health reporting for decades, is absurd. But it’s not that Americans are stupid, it’s that they are consuming media that does not give them all the information.

No, Ebola is not something to take lightly. But blowing it out of proportion in order to fit our 24-hour news cycle is the other extreme, and that’s not helpful either, even if it is lucrative. So, as a bit of an antidote, here are a couple of links to articles that actually try to teach you something, instead of just freaking you out. TL;DR: Ebola does not spread during incubation, only when there are symptoms, and our health care system is light years ahead of what’s available in West Africa, which is a reason for us to calm down about our own safety and maybe direct some of our concern over there. Let me know if you come across other good articles on this that you’d like to share!

The Guardian – No, Ebola in Dallas does not mean you and everyone else in the US is going to get it, too

Forbes – Why We Should Be Optimistic About The First U.S. Ebola Diagnosis

Covering health, thoroughly

I want to direct attention today to a post on the Association of Health Care Journalists blog Covering Health, written by Liz Seegert. It’s an incredibly important reminder of why journalists should never take health claims at face value, even if they seem to come from a reputable source.

Seegert received a press release in her email inbox proclaiming that smokers have a 45% higher risk for Alzheimer’s than non-smokers. On closer inspection, she found that the World Health Organization report behind the press release was actually just a collection of old research, and the research itself did not appear to back up all of the claims made in the report.

This is important because, thanks to this press release, this possibly unsupported claim made headline news, and the WHO continued to share potentially misleading information. Seegert tracked down a WHO representative and tried to clear things up, but the questionable report was still up on the organization’s website as of Thursday.

It’s just another example of why it’s so important to learn how to read the documents we’re writing about.

Yes, you should still get a regular Pap smear.

Today was another big day for “that medical procedure you’ve been having? It’s useless!” stories. This time it was women’s pelvic exams, although much of the reporting seemed to lump all of those things that happen at the gynecologist’s office into one big “in the stirrups” event, and claim that it was all a waste of time.

The Annals of Internal Medicine released a study recommending against “performing screening pelvic examination” in patients without symptoms that would suggest they need their pelvis examined. 

But, as writer and registered nurse Kelli Dunham points out at the New Republic, that doesn’t actually count as a condemnation of the whole event. The Pap smear that is usually done in the middle of a pelvic exam still gets the green light from the American College of Physicians and the American Cancer Society.

I highly recommend heading over to the New Republic to read Dunham’s piece. Not only does she delve into just what the study said and how those covering it failed, in many cases, to capture this key detail, she also shares her personal experience as a registered nurse working with patients who already experience so much stigma and anxiety connected with getting examined at all.

Click-bait headlines and mixed messages do nothing to alleviate this stigma and anxiety, let alone provide the clear message that health journalism should be striving to deliver. We synthesize the information in medical studies because it’s already confusing or inaccessible to the average reader; inattention to important details like the difference between a pelvic exam and a Pap smear, especially with a study that gives such a clear recommendation, can be both sloppy and dangerous.

Growing Pains

This week, two of the subjects I follow most closely – sexual assault activism and critiques of the media – converged to create a tense (and intense) conversation about how the latter should approach the former. It started when Christine Fox (@steenfox) asked sexual assault victims to tweet what they were wearing when they were assaulted. A writer at BuzzFeed who has covered sexual assault extensively for the site put together a post using some of the tweets, and what ensued was what I hope will be the tumultuous beginning of a more nuanced conversation about journalism ethics regarding the use of comments on social media.

Many responses to the situation focused on the point that Twitter is public, so those who participated in the “event” were not entitled to the privacy they later claimed.

As a feminist and a journalist, this has led to a lot of self-reflection for me over the last couple of days. So I’m going to mostly defer to this great piece by Kat Stoeffel for NY Magazine:

[The BuzzFeed writer] was under no obligation to reach out to the people who participated in Fox’s conversation under public Twitter handles, some of whom were righteously proud to have been handed the BuzzFeed microphone. Still, none of that inoculates Testa or BuzzFeed or other purveyors of listicles from the critiques at hand: Posts like this amount to selling a recording of other people’s group therapy while sending a fire hose of potentially unfriendly attention in the general direction of its participants.

Stoeffel says this may represent an internet “growing pain.” I would argue it also represents a growing pain in communications between journalists and their readers and subjects as those communications become easier and more frequent via social media.

We can preach about the laws and ethics we learn about in J School until we turn blue; that won’t change the fact that someone felt victimized, and that approach can backfire. I know the importance of not letting your story get away from you or be controlled by a source, but I also know the importance of doing justice to the person who lived the story. And I think that means really telling a story, valuing context over speed, brevity or clicks. It also means that, as we become more immediately accountable for our work, “face-to-face” with our sources and readers online, we may need to find new ways to explain how and why we do what we do.