Instagram, Nextdoor, and “Be Nice” Nudges

One of the first pieces of empathy-building tech* I wrote about was an algorithm built to recognize when comments on a newspaper story went off the rails. It was a tough story to place because it was hard to understand and even harder to explain. (I’m forever grateful for good editors!) The gist was that a group of researchers wanted to see if they could cultivate an environment in the comment section of a controversial story that would facilitate good, productive conversation. Their work eventually turned into Faciloscope, a tool aimed at detecting trolling behaviors and mediating them.

Like many research projects, this one is kind of hard to track after the initial buzz – grants change, people move, tech evolves, etc. All's been pretty quiet on the automated comment moderation front for a while, but over the past few months, that's begun to change: similar technology is popping up in the apps we use every day.

Photo by Randalyn Hill on Unsplash

Earlier this year, Head of Instagram Adam Mosseri announced that the app would soon have new features to help prevent bullying. The official plan was released yesterday, and it boils down to one new function: Restrict. According to Instagram, “Restrict is designed to empower you to quietly protect your account while still keeping an eye on a bully.” It works by letting you approve Restricted people’s comments on your posts before they appear – and you can delete or ignore them without even reading them, if you want. You won’t get notifications for these comments, so it’s unclear to me how you’d know they happened unless you went looking for them… which hopefully you aren’t doing, but let’s be honest, we all do that.

Anyway, what about direct messages? DMs from Restricted people will turn into “message requests,” like what already happens when someone you don’t know sends you a message. The sender won’t be able to see if you’ve read their message.
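Instagram hasn’t shared any implementation details, so purely for illustration, here’s a rough sketch of how the Restrict flow behaves from the user’s side, as described in the announcement. Every name and structure below is my own guess, not Instagram’s actual code:

```python
from dataclasses import dataclass, field

@dataclass
class Comment:
    author: str
    text: str

@dataclass
class Account:
    restricted: set = field(default_factory=set)
    pending_comments: list = field(default_factory=list)  # held for review; no notification fires
    visible_comments: list = field(default_factory=list)
    inbox: list = field(default_factory=list)
    message_requests: list = field(default_factory=list)

    def receive_comment(self, comment):
        # Comments from Restricted users are quietly held until you approve
        # them; everyone else's comments appear as usual.
        if comment.author in self.restricted:
            self.pending_comments.append(comment)
        else:
            self.visible_comments.append(comment)

    def review_comment(self, comment, approve):
        # You can approve a held comment, or discard it without ever reading it.
        self.pending_comments.remove(comment)
        if approve:
            self.visible_comments.append(comment)

    def receive_dm(self, sender, text):
        # DMs from Restricted senders land in "message requests" instead of
        # the inbox, and the sender never gets a read receipt.
        target = self.message_requests if sender in self.restricted else self.inbox
        target.append((sender, text))


me = Account(restricted={"bully123"})
me.receive_comment(Comment("bully123", "nobody likes your posts"))
# The comment now sits in me.pending_comments: invisible to everyone else,
# and no notification was sent.
```

The notable design choice, if the description is accurate, is that nothing is ever blocked outright – the bully’s comments and DMs still arrive, they just land somewhere only you can see.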

Inexplicably, Instagram also used this announcement to tell us about its new “Create Don’t Hate” sticker, as if that’s an anti-bullying feature… when it’s literally just a sticker you can put on your story. So… okay, cool?

I wouldn’t exactly call this empathy-building tech, but I would hear an argument that it’s an example of tech showing empathy for its users, with the usual caveat that this is probably way too little, way too late. It seems like a good thing, don’t get me wrong. It just should have been a thing much sooner.

This won’t have much use for me, because I’ve already unfollowed or blocked the people whose comments I’d least like to see. What I’d really like is a pop-up kind of like what Netflix has, that alerts me after I’ve been scrolling for more than 15 minutes… “Maybe it’s time for a break?” Or the ability to customize a pop up for when I visit one of my frenemies’ accounts… “Remember why you unfollowed this person??” But I could see it being useful for a teenager who gets bombarded with bullying messages. It’s a start, at least.

Nextdoor, essentially a neighborhood-specific Facebook/Reddit hybrid, did recently release prompts that might encourage empathy. Like all social media platforms, Nextdoor has gained a reputation for fostering nastiness, NIMBYism, and even racism. So it launched a “kindness reminder,” which pops up to let you know if your reply to someone’s comment “looks similar to content that’s been reported in the past” and gives you a chance to re-read the community guidelines and rephrase your comment.
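Nextdoor hasn’t said how that similarity check actually works – presumably it’s a trained classifier of some kind – but as a toy illustration of the general idea, imagine comparing a draft reply against previously reported comments and nudging the author when the match is close. Everything below, from the sample comments to the threshold, is made up:

```python
from difflib import SequenceMatcher

# Hypothetical examples of previously reported comments.
REPORTED_COMMENTS = [
    "nobody asked for your opinion",
    "people like you are ruining this neighborhood",
]
THRESHOLD = 0.6  # arbitrary; a real system would tune this carefully

def looks_reportable(draft):
    """Flag a draft reply that reads like previously reported content."""
    return any(
        SequenceMatcher(None, draft.lower(), reported).ratio() > THRESHOLD
        for reported in REPORTED_COMMENTS
    )

def maybe_show_kindness_reminder(draft):
    # Nothing is blocked; the reminder just adds a beat of friction
    # and a chance to edit before the reply goes live.
    if looks_reportable(draft):
        print("Your reply looks similar to content that's been reported in the past.")
        print("Want to re-read the community guidelines or rephrase before posting?")
```

Again, that’s the shape of the idea, not Nextdoor’s method – the interesting part is that it’s a nudge, not a filter.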

Nextdoor says the feature is meant to “encourage positivity across the Nextdoor platform,” but they also seem to suggest that it will make neighborhoods themselves more kind. They claim that in early tests of the feature, 1 in 5 people chose to edit their comments, resulting in 20% fewer negative comments (though it’s not clear to me exactly how they measure negativity). They also claim the Kindness Reminder gets triggered less and less over time in areas where it’s been tested.

This, like Instagram’s Restrict feature, is an example of a social media company responding to many, many, many complaints about negative behavior and impact. But in Nextdoor’s case, there at least seems to be more transparency. In its post explaining the new feature, Nextdoor says it built an advisory panel of experts, including Dr. Jennifer Eberhardt, a social scientist who wrote a book about racial bias. There was apparently a session with some of Eberhardt’s students in which Nextdoor employees (executives? unclear) shared their experiences with bias in their own lives as well as on the platform. So, that’s something. If nothing else, I could imagine the Kindness Reminder making me stop for a second before dashing off a snarky comment – something that doesn’t happen as much as it used to, but is still an unfortunate possibility for me…

One big question about all of this, of course, is why can’t we just use our internal “kindness reminders”? Most of us do have them, after all. But it’s hard when, as Eberhardt notes in the Nextdoor press release: “the problems that we have out in the world and in society make their way online where you’re encouraged to respond quickly and without thinking.” We can create as many empathy-focused tools as we want, but as long as that’s the case, there will always be more work to do.

 

*When I first started writing about this stuff, the concept seemed new to a lot of people, and it seemed obvious that words like “ostensibly” or “supposedly” or “hopefully” were implied. Today, not so much, and for good reason: a lot of tech that’s advertised as empathetic seems more invasive or manipulative than empathetic. So I hope you’ll trust me when I say I understand that context, and that I think of the phrase “empathy-building tech” as carrying an asterisk most of the time.

Woulda, shoulda, coulda

Twitter co-founder Ev Williams posted a thread yesterday. Not super surprising in itself – he is one of the fathers of Twitter – but as he explained in said thread, he doesn’t actually post his thoughts there much. He sticks to links, because he “[doesn’t] enjoy debating with strangers in a public setting” and he “always preferred to think of [Twitter] as an information network, rather than a social network.”

That definitely elicited some eye-rolls, but this was the tweet – in a long thread about how he wants reporters to stop asking him how to fix Twitter’s abuse problems – that really caught my eye…

That is… exactly the problem! It’s both reassuring to see this apparent self-awareness and frustrating to see how late it’s come – and how defensive he still is…

Maybe he feels like he can’t say for sure whether being more aware of how people “not like him” were being treated, or having a more diverse leadership team or board, would have led the company to tackle abuse sooner… but those of us who are “not like him” are pretty confident it would have. Or at least it could have. It should have.

This is what I mean when I talk about a lack of empathy in tech. I don’t know Ev Williams or any of his co-founders; I don’t know many people who have founded anything at all. And I understand that founders and developers are people deserving of empathy too. As I read Williams’s thread, I tried to put myself in his shoes, even as I resisted accepting much of what he was saying. I get that “trying to make the damn thing work” must have been a monumental task. But as I talk about here a lot – there’s empathy, and then there’s sympathy. And as Dylan Marron likes to say, empathy is not endorsement. I can imagine it, but I don’t get it. And it’s little solace to the hundreds of people who are harassed and abused via Twitter every day to hear it confirmed that their safety wasn’t a priority, whatever the reason.

They know this – we know this. The question is, what now? Williams, for his part, brushes off that question. It’s not his problem anymore, he seems to say, and he doesn’t know how to fix it, but if you have any “constructive ideas,” you should let Twitter know (or write about them on Medium, Williams’s other tech baby…).

The toxicity that Williams says he’s trying to avoid – that he says his famous friend is very upset by, that he seems almost ready to acknowledge is doing real damage to many, many other people who use Twitter – was part of what inspired me to write The Future of Feeling. I wanted to know: if it’s this bad right now, how much worse could it get? Is anyone trying to stop this train?

I talked to a lot of people in my reporting for the book, and over and over again I heard the same idea echoed: empathy has to be part of the fabric of any new technology. It has to be present in the foundation. It has to be a core piece of the mission. Creating a thing for the sake of creating the thing isn’t good enough anymore. (Frankly, it never was.) The thing you create is very likely to take on a life of its own. You need to give it some soul, too.

Williams ended his thread with a tweet that actually resonated with me. It’s something I’ve found to be absolutely true:

People made this mess. People will have to clean it up. If Williams doesn’t want to, or know how to, I know a lot of other folks who are getting their hands dirty giving it a try.