Instagram, Nextdoor, and “Be Nice” Nudges

One of the first pieces of empathy-building tech* I wrote about was an algorithm built to recognize when comments on a newspaper story went off the rails. It was a tough story to place because it was hard to understand and even harder to explain. (I’m forever grateful for good editors!) The gist was that a group of researchers wanted to see if they could cultivate an environment in the comment section of a controversial story that would facilitate good, productive conversation. Their work eventually turned into Faciloscope, a tool aimed at detecting trolling behaviors and mediating them.

Like many research projects, it’s kind of hard to tell what happened after the initial buzz – grants change, people move, tech evolves, etc. All’s been pretty quiet on the automated comment section management front for a while, but over the past few months that’s begun to change. Now we can see similar technology popping up in the apps we use every day.

Photo by Randalyn Hill on Unsplash

Earlier this year, Head of Instagram Adam Mosseri announced that the app would soon have new features to help prevent bullying. The official plan was released yesterday, and it boils down to one new function: Restrict. According to Instagram, “Restrict is designed to empower you to quietly protect your account while still keeping an eye on a bully.” It works by letting you approve Restricted people’s comments on your posts before they appear – and you can delete or ignore them without even reading them, if you want. You won’t get notifications for these comments, so it’s unclear to me how you’d know they happened unless you went looking for them, which hopefully you aren’t doing, but let’s be honest… we all do that.

Anyway, what about direct messages? DMs from Restricted people will turn into “message requests,” like what already happens when someone you don’t know sends you a message. The sender won’t be able to see if you’ve read their message.
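If it helps to picture the flow, here’s a toy sketch of Restrict as described – comments from a restricted account get held for approval with no notification, and their DMs land as message requests. Every name and structure here is my own invention for illustration, not Instagram’s actual code:

```python
# Toy model of the Restrict flow described above. All names are hypothetical.
from dataclasses import dataclass, field

@dataclass
class Account:
    handle: str
    restricted: set = field(default_factory=set)
    pending_comments: list = field(default_factory=list)   # held for approval
    visible_comments: list = field(default_factory=list)   # public
    message_requests: list = field(default_factory=list)   # demoted DMs

    def receive_comment(self, sender: str, text: str) -> None:
        if sender in self.restricted:
            # Held quietly: no notification; the owner can approve,
            # delete, or ignore it later -- without reading it first.
            self.pending_comments.append((sender, text))
        else:
            self.visible_comments.append((sender, text))

    def receive_dm(self, sender: str, text: str) -> None:
        if sender in self.restricted:
            # Becomes a message request; the sender can't see read receipts.
            self.message_requests.append((sender, text))

me = Account("me", restricted={"bully123"})
me.receive_comment("bully123", "nobody likes you")
me.receive_comment("friend", "great photo!")
print(me.visible_comments)  # only the friend's comment is public
print(me.pending_comments)  # the bully's comment waits, silently
```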

Inexplicably, Instagram also used this announcement to tell us about its new “Create Don’t Hate” sticker, as if that’s an anti-bullying feature… when it’s literally just a sticker you can put on your story. So… okay, cool?

I wouldn’t exactly call this empathy-building tech, but I would hear an argument that it’s an example of tech showing empathy for its users, with the usual caveat that this is probably way too little, way too late. It seems like a good thing, don’t get me wrong. It just should have been a thing much sooner.

This won’t have much use for me, because I’ve already unfollowed or blocked the people whose comments I’d least like to see. What I’d really like is a pop-up kind of like what Netflix has, one that alerts me after I’ve been scrolling for more than 15 minutes… “Maybe it’s time for a break?” Or the ability to customize a pop-up for when I visit one of my frenemies’ accounts… “Remember why you unfollowed this person??” But I could see it being useful for a teenager who gets bombarded with bullying messages. It’s a start, at least.

Nextdoor, essentially a neighborhood-specific Facebook/Reddit hybrid, did recently release prompts that might encourage empathy. Like all social media platforms, Nextdoor has gained a reputation for fostering nastiness, NIMBYism, and even racism. So it launched a “kindness reminder,” which pops up to let you know if your reply to someone’s comment “looks similar to content that’s been reported in the past” and gives you a chance to re-read the community guidelines and rephrase your comment.
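To make the mechanics concrete, here’s a minimal sketch of how a nudge like that could work: compare a draft reply against previously reported comments and prompt instead of blocking. The similarity measure, the threshold, and the example comments are all my assumptions – Nextdoor hasn’t published how its real check works, and it presumably uses a trained classifier rather than simple string matching:

```python
# Hypothetical "kindness reminder" gate: nudge, don't block.
from difflib import SequenceMatcher

# Stand-in corpus of previously reported comments (invented examples).
REPORTED_COMMENTS = [
    "nobody wants your kind around here",
    "you people are ruining this neighborhood",
]

def looks_reported(draft: str, threshold: float = 0.6) -> bool:
    """True if the draft resembles any previously reported comment."""
    draft = draft.lower()
    return any(
        SequenceMatcher(None, draft, reported).ratio() >= threshold
        for reported in REPORTED_COMMENTS
    )

def submit_reply(draft: str) -> str:
    if looks_reported(draft):
        # The user can still post; they're just given a chance to rephrase.
        return "This looks similar to content that's been reported. Edit first?"
    return "Posted."

print(submit_reply("You people are ruining this neighborhood!"))  # nudged
print(submit_reply("Thanks for organizing the block party!"))     # posted
```

The design choice worth noticing is that nothing gets censored – the friction is a pause, not a wall.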

Nextdoor says the feature is meant to “encourage positivity across the Nextdoor platform,” but they also seem to suggest that it will make neighborhoods themselves more kind. They claim that in early tests of the feature, 1 in 5 people chose to edit their comments, “resulting in 20% fewer negative comments” (though it’s not clear to me exactly how they measure negativity). They also claim the Kindness Reminder is triggered less often over time in areas where it’s been tested.

This, like Instagram’s Restricted feature, is an example of a social media company responding to many, many, many complaints of negative behavior and impact. But in Nextdoor’s case, there at least seems to be more transparency. In their post explaining the new feature, Nextdoor says the company built an advisory panel of experts, including Dr. Jennifer Eberhardt, a social scientist who wrote a book about racial bias. There was apparently a session with some of Eberhardt’s students in which Nextdoor employees (executives? unclear) shared their experiences with bias in their own lives as well as on the platform. So, that’s something. If nothing else, I could imagine the Kindness Reminder at least making me stop for a second before dashing off a snarky comment, something that doesn’t happen as much as it used to but is still an unfortunate possibility for me…

One big question about all of this, of course, is why can’t we just use our internal “kindness reminders”? Most of us do have them, after all. But it’s hard when, as Eberhardt notes in the Nextdoor press release: “the problems that we have out in the world and in society make their way online where you’re encouraged to respond quickly and without thinking.” We can create as many empathy-focused tools as we want, but as long as that’s the case, there will always be more work to do.

 

*When I first started writing about this stuff, the concept seemed new to a lot of people and it seemed obvious that the words “ostensibly” or “supposedly” or “hopefully” were implied. Today, not so much, for good reason: a lot of tech that’s advertised as empathetic seems more invasive or manipulative. So, I hope you will trust me when I say I understand that context, and I think about the phrase “empathy-building tech” as having an asterisk most of the time.

Woulda, shoulda, coulda

Twitter co-founder Ev Williams posted a thread yesterday. Not super surprising, since he’s one of the fathers of Twitter, but as he explained in said thread, he doesn’t post his thoughts there much. He sticks to links, because he “[doesn’t] enjoy debating with strangers in a public setting” and he “always preferred to think of [Twitter] as an information network, rather than a social network.”

That definitely elicited some eye-rolls, but this was the tweet – in a long thread about how he wants reporters to stop asking him how to fix Twitter’s abuse problems – that really caught my eye…

That is… exactly the problem! It’s both reassuring to see this apparent self-awareness and frustrating to see how late it’s come – and how defensive he still is…

Maybe he feels like he can’t say for sure whether being more aware of how people “not like him” were being treated – or having a more diverse leadership team or board – would have led the company to tackle abuse sooner… but those of us who are “not like him” are pretty confident it would have. Or at least it could have. It should have.

This is what I mean when I talk about a lack of empathy in tech. I don’t know Ev Williams or any of his co-founders; I don’t know many people who have founded anything at all. And I understand that founders and developers are people deserving of empathy too. As I read Williams’s thread, I tried to put myself in his shoes, even as I resisted accepting much of what he was saying. I get that “trying to make the damn thing work” must have been a monumental task. But as I talk about here a lot – there’s empathy, and then there’s sympathy. And as Dylan Marron likes to say, empathy is not endorsement. I can imagine it, but I don’t get it. And it’s little solace to the hundreds of people who are harassed and abused via Twitter every day to hear it confirmed that their safety wasn’t a priority, whatever the reason.

They know this – we know this. The question is, what now? Williams, for his part, brushes off this question. It’s not his problem anymore, he seems to say, and he doesn’t know how to fix it, but if you have any “constructive ideas,” you should let Twitter know (or write about them on Medium, Williams’s other tech baby…)

The toxicity that Williams says he’s trying to avoid – that he says his famous friend is very upset by, that he seems almost ready to acknowledge is doing real damage to many, many other people who use Twitter – was part of what inspired me to write The Future of Feeling. I wanted to know, if it’s this bad right now, how much worse could it get? Is anyone trying to stop this train?

I talked to a lot of people in my reporting for the book, and over and over again I heard the same idea echoed: empathy has to be part of the fabric of any new technology. It has to be present in the foundation. It has to be a core piece of the mission. Creating a thing for the sake of creating the thing isn’t good enough anymore. (Frankly, it never was.) The thing you create is very likely to take on a life of its own. You need to give it some soul, too.

Williams ended his thread with a tweet that actually resonated with me. It’s something I’ve found to be absolutely true:

People made this mess. People will have to clean it up. If Williams doesn’t want to, or know how to, I know a lot of other folks who are getting their hands dirty giving it a try.

Droning on

Hello! Good morning. Let’s talk about drones.

Earlier this year, not long after Christmas, my husband and I went with one of our best friends to a historic village in North Carolina. We hadn’t been there since we were kids and wanted to experience it as adults. (See: walking into a building labeled “tavern” and walking right back out, dejected that there were no actual beers to be had.)

About halfway through the day, we exited an old building into a side yard just in time to see a drone taking off. The guy manning it was just a few feet away. He launched it off the ground and into the air, and I had two simultaneous thoughts:

“Wow, he’s gonna get some awesome photos of this place” and

“Wow, that sound is really, REALLY annoying, especially here!”

Such is the conundrum of life in 2019. There are so many tech things that make our lives cooler, easier, or safer while also being annoying, intrusive, or otherwise harmful. In the past I don’t think the developers of these technologies have done a great job anticipating future issues or needs. I do think that’s changing. But in the meantime, these are the kinds of things we have to deal with (and frankly, we probably will always have some degree of this issue).

I was recently reporting a piece about medical drones (coming soon) and came across this study that determined drones to be the most annoying of all vehicles. And that’s saying a lot, considering we also have motorcycles and 18-wheelers below them and airplanes above.

From a great New Scientist piece on the study:

“We didn’t go into this test thinking there would be this significant difference,” says study coauthor Andrew Christian of NASA’s Langley Research Center, Virginia. It is almost unfortunate the research has turned up this difference in annoyance levels, he adds, as its purpose was merely to prove that Langley’s acoustics research facilities could contribute to NASA’s wider efforts to study drones.

It’s a bummer all around, really. The study found that people (only 38 people, but still) experienced drone buzz much the way they would experience a car that was twice as close as normal. These people didn’t even know what they were listening to, by the way, so we can’t just assume they’re anti-drone.
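For a rough sense of scale on “twice as close”: for an idealized point source, textbook acoustics says the sound level rises about 6 dB every time you halve the distance. So a car twice as close is roughly 6 dB louder – which is about the annoyance penalty listeners seem to have attached to the drone sound. The arithmetic below is mine, not the study’s:

```python
# dB change when listening distance is multiplied by some ratio,
# assuming an idealized point source (textbook acoustics, not study data).
import math

def level_change_db(distance_ratio: float) -> float:
    """Change in sound pressure level when distance scales by distance_ratio."""
    return -20 * math.log10(distance_ratio)

print(f"{level_change_db(0.5):+.1f} dB")  # +6.0 dB at half the distance
```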

The piece I’ve been reporting is about the use of drones to save time and money moving blood samples and medical supplies. I wonder if people might find drones less annoying if they knew they were up there to help people? I hope that research is being done somewhere (I would not be surprised, as NASA and the FAA are doing a lot of work to study drone impact right now).

But even if we can get used to the sound of drones, or assuage ourselves with the thought that some of them are saving lives, we still have to look at them. It bugged me to see a black plastic mini-spaceship buzzing around a historic village, but it didn’t scare me or make me feel unsafe. Driving down the road and suddenly seeing a flock of them overhead, and not necessarily knowing their purpose… would be a different story.

existentialist friday epilogue

I wrote my last post in a fog – a mixture of anxiety, sadness, nihilism and hope. Super dramatic for a Friday night, I know! And reading it today, I’m a little surprised by how intense those feelings were, and how clearly that intensity comes through.

Maybe I should be embarrassed – it was a very vulnerable piece of writing that might be better suited to a private journal. But even after reading it today, and considering that, I decided to hit publish because I do not believe I’m alone in those feelings or thought processes, and I think there are few things more important in this world right now than community with others in our feelings and thought processes.

Not necessarily validation, or reassurance, but community.

That’s what those people in those Christchurch mosques were engaging in last week when they were murdered. It’s what I did at my own church yesterday, feeling sad and uncertain and comforted by the knowledge that I was sitting among a lot of other people feeling the same things. We sang and meditated together, called out the elephants in the room (racism, hatred, violence, intolerance, ambiguity) and continued our ongoing conversation about how to live with and wrangle them. Lately I’ve come to view this as the most beautiful and important thing about being human – existing in community with one another. It sounds pretty and easy but it is one of the most complicated and difficult things I’ve ever done. I am grateful that I woke up today and get to keep doing it.

It’s also amazing to me how clear these ideas are after a couple of days of letting them simmer inside me. I avoided social media as much as possible this weekend. I exercised while listening to an audiobook, watched people of all ages fly kites in perfect weather, watched my husband make sourdough bread for the first time and beam with pride, ate delicious crab cakes and pizza, toasted to friends’ birthdays, read, sat in community with my friends at the Unitarian Universalist fellowship, drank a lot of water, took a bath, and let my brain breathe a little.

On the other side of all of that, I feel like things might be OK. I wonder what I can do to bring this feeling with me into every day, not just Mondays after a social media detox, while also respecting and cultivating the community that exists right there on social media too. They are different kinds of communities, but they overlap in so many ways. This is more true for me now that I live outside the New York City bubble than ever before, so maybe that’s why it might seem like I’m grasping for something others have known all along. But again, something tells me these things I’m wrestling with are more common than we like to admit.

Do you have your tech accountability buddy yet? Maybe you can admit it to each other?

just a little tech existentialism on a friday night

Note: I wrote this on Friday night (3/15) but didn’t want to post right away, to avoid seeming to make the Christchurch tragedy about me. That is not my intention at all. Rather, my intent is to share some of what was going through my mind that day (and frankly, many days) in hopes that it resonates with others and contributes to a broader conversation.

 

Who/what do you turn to when you feel overwhelmed or exhausted or afraid? When you feel overrun by information and opinions, how do you protect yourself?

I realized today that I don’t really have an answer to those questions.

It’s been a really long work week, and I’ve been channeling my stress into two things that I’ve noticed have become crutches for me when I don’t want to sit with my feelings: Instagram and podcasts.

This morning, by the time I got to work at 8:30 I had already watched about half an hour of Instagram stories, which is how I found out about the Christchurch shootings. I had heard a bit more about the horror on the short morning news podcast Up First, which I usually listen to while I get ready for work. I had also scrolled through Twitter for a few minutes, taking in but not quite digesting takes from dozens of people about what had happened, takes that made me feel, for a few seconds each: sad, sick, disgusted, embarrassed, guilty, defensive, angry, and heartbroken.

In the car, I put on Pod Save America and absorbed about 15 minutes of dudes yelling about politics and reminding me how untenable our current political situation is.

By the time I got to work I was feeling pretty anxious, but that’s nothing new for me so I just accepted it. I read some news, looked at Twitter some more, watched some more Instagram stories. Then I put PSA back on so I could listen while I did some editing. It’s like muscle memory.  Do some work while listening to a podcast, check email, get stressed about something, reach for phone and flip over to Instagram, feel guilty for doing that, get back to work and podcast, remember the world is burning, head over to Twitter, see something horrific, go back to Instagram for comfort, fill head with more and more and more of other people’s stories, ideas, and priorities.

I started reading You Are Not A Gadget by Jaron Lanier earlier this week and I’m only on page 16, so I don’t 100% know where the book is going, but the tone is already, “this is not what we meant for you when we made the social web.” And I know that’s true, to an extent. I don’t think anyone imagined this in the beginning, though I’m certain some people predicted it 10 or so years ago and helped usher it in because it makes lots of money. But it also makes people crazy.

I feel crazy, and when I say that I don’t mean it in the mentally ill sense (although we already know I am that, in some ways) but I mean frazzled, unmoored, grasping. I feel tethered to something for comfort but that thing is what makes me need comfort in the first place. I’ve seen several others compare their relationships with their phones and social media to abusive partner relationships, and I don’t think that’s far off.

Today, when I was overwhelmed by the bloodshed and hatred and extremity of the world all around me, I “retreated” via social media and podcasts into even more of the same. At 9:34am I sent my husband this message:

“I feel so overwhelmed today. I just want to crawl under my desk and cry.”

“I’m so sorry you’re feeling that way,” he messaged back.

But I feel that way almost every day around that time, because I set myself up for it. I know this, and yet I keep doing it, because it feels mandatory for being an active citizen of this world.

I know I’m not the only one in this cycle, and I really don’t think it has to be this way. But one of the things we’re going to have to do to change it is to gather the courage to break out.

On the first page of You Are Not A Gadget, Lanier writes:

“I want to say: You have to be somebody before you can share yourself.”

Right now I get the sense that many of us feel that sharing ourselves is part of what makes us somebody. I’m reminded of this recent piece in The Atlantic about young kids coming to terms with their own online-ness. One 13-year-old said, of trying to find information about herself with a group of friends in fifth grade: “We thought it was so cool that we had pics of ourselves online…We would brag like, ‘I have this many pics of myself on the internet.’ You look yourself up, and it’s like, ‘Whoa, it’s you!’ We were all shocked when we realized we were out there. We were like, ‘Whoa, we’re real people.’”

I’m somewhat ashamed to admit that last part really resonated with me. I grew up online, and have been sharing things about myself there since high school, maybe earlier. Having an online presence, an online self, has felt natural to me for half my life. I’m also a writer, so it might feel more natural to me than most to share my thoughts with the world. But something has shifted over the past few years, and the way the internet – and especially social media – is tied to my identity scares me a little. I find myself wondering if I’m doing certain things because I want to do them, or because I want to share them. When something big happens, I sometimes find myself imagining how I’ll describe it on social media before I even realize what I’m doing. Like I said, tethered. 

Online is where the validation is, I guess, even when we have partners and spouses and families and friends. The silent, pretty, no-strings-attached validation so many of us millennials simultaneously crave (because it’s a normal thing for a human to crave) and cynically joke about not caring about, or not being able to attain. But a lot of us seem to be grabbing for that validation in place of actually dealing with things. And I get it – there is too much to deal with. Mass shootings, climate change, racism, income inequality, mental and physical health problems – it’s all too much. But now that we have been performing for each other online for 30ish years, I’m worried we’re starting to forget not just how to be around each other, but how to feel. As a kid, my identity was so wrapped up in feeling – I cried all the time, was so emotional it scared some of my teachers, and later on definitely scared off a few boyfriends. I don’t cry as much anymore, which is probably healthy, but I also don’t really feel anything stronger than hunger or anxiety for more than a minute at a time. As soon as it pops up – sadness, anger, hurt, shame, worry – there I go, reaching for my phone.

I think there are a lot of remedies to this. One would of course be to just go cold turkey, cut ourselves off from all social media and not look back, but that kills all the good along with the bad. And there is so much good.

Another idea: the people who make this stuff, these products designed to pull us back for more and more, triggering dopamine receptors like slot machines, could…you know…stop. They could pull back and be more mindful – more empathetic – about how their users experience their products. I’m far from the first to suggest this, but given the slowly growing exodus from platforms like Facebook (by both users and employees), it might be about time for them to listen.

Or maybe something more communal is more realistic. Maybe we can get the human connection and validation we crave by helping each other be kinder to our brains and gentler toward our emotions, while also keeping up with all the memes and Trump tweets. What if you had a tech accountability buddy who texted you once a day to ask about your internet activity and how it was making you feel – not to shame you, but to empathize, acknowledge, validate, and encourage you? There are apps that do this, and chat bots, but as much faith as I want to have in empathetic technology, I know they don’t really care. Maybe a friend does, or wants to. Maybe we can get to a healthier place – a place where we can demand better from those who design the tools we use, and figure out how to use them without becoming dependent on them, and get back to feeling the difficult feelings – together.

By the way, you can support Christchurch victims and families here.

Power drills vs. dental drills

At the beginning of this year I went to the dentist for the first time in… a while, and learned I had five cavities. Five! I brush my teeth – I even floss! – but somehow three of my old fillings had failed me and two new ones were needed. This wouldn’t have been that big of a deal except… and now you’re really going to judge me… I am afraid of Novocaine.

Now, let me say as clearly as I can: this is a 95% irrational fear. Novocaine is extremely safe and I trust my dentist to use it properly, and I am even fairly certain if I used it nothing bad would happen. But because I have an anxiety brain, this was my thought process upon learning I needed five fillings:

Shit, that’s going to be expensive and take a while. Also, crap, they’ll give me Novocaine, and that has the potential to cause heart palpitations, and I’ll probably already be having them because I’ll be nervous, and that could create a dangerous situation, oh shit shit how do I get around this?

Again, Novocaine is extremely safe. Irregular heart beat is a very rare potential side effect associated with many medications – it’s part of the generic list of allergic reactions a step above itchiness and swelling. But since I’ve dealt (rather poorly, I’ll admit) with heart palpitations caused by stress and anxiety for years, I am hyper-vigilant about avoiding situations that might cause them. So, how did I get around it? I opted out. I said no to the Novocaine and sucked it up. And yeah, it hurt. I spaced the procedure out into three visits to spread out both the cost and the pain. In the end, each procedure took less time than it would have with numbing, and I was able to eat and drink right afterward. Most of all, I survived (which of course I would have regardless). The dentists and hygienists kept calling me a badass and saying how well I handled the pain, but I wasn’t proud; I was honestly a little embarrassed, and exhausted, and sore.

As I waited in the chair for each procedure to start, I stared at a flat screen monitor. The first time it scrolled through pictures of cute kids and puppies (including a truly awesome slideshow of dogs that look like other things); on my second visit it was a silent presentation about my dentist’s trip to Haiti, complete with facts about the country; and on the third and final visit I was treated to calming videos of waves crashing on sand.

During each procedure, there was a moment or two when I thought I couldn’t handle any more – when the drill would hit a specific spot on the tooth that was just too close to a nerve. During those times, I had the old calming television standby playing on another monitor, this one mounted on the ceiling, to distract me: HGTV. (I have seen this in at least one other dental office and several specialists’ offices – there’s just something about Chip and Joanna…) And I have to tell you, these things worked. In the moments I would have gritted my teeth at the pain (which was obviously impossible), I instead focused all of my energy and attention on the wall demo or sconce selection happening on the ceiling screen. And it worked, in the sense that avoiding a full-on panic attack or biting off my dentist’s fingers = “working.” Which… I’ll take it!

It’s not shiplap that helps with pain and anxiety in the dental chair – it’s that shift in energy and attention. And it still works on me even though I know this. And I actually found myself thinking, as I left the dental office for the last time (for a while, at least… I hope…) that I really wish more medical offices had this kind of programming. Not just HGTV, but slideshows and silent videos made with the explicit goal of helping patients calm down. Not just cheesy quotes about serenity, but soothing images that are scientifically correlated with lower blood pressure and cortisol. Imagine if more clinicians acknowledged that we might be anxious, and rather than ignoring that or explaining it away, just empathized with it and tried to set a calmer tone. This sort of thing is relatively common in dentistry and in pediatrics; imagine if our anxiety and potential medical trauma were taken more seriously even in cardiology, physical therapy, dermatology, and other offices! I think it’s something to work toward.

 

Is AOC right about AI?

Conservative Twitter is up in arms today over Rep. Alexandria Ocasio-Cortez saying at an MLK Day event that algorithms are biased. (Of course “bias” has been translated into “racism.”) The general response from the right has been, “What a dumb socialist! Algorithms are run by math. Math can’t be racist!” And from the tech experts on Twitter: “Well, actually….”

I have to put myself in the latter camp. Though I’m not exactly a tech expert, I’ve been researching the impact of technology like AI and algorithms on human well-being for a couple of years now, and the evidence is pretty clear: people have bias, people make algorithms, so algorithms have bias.

When I was a kid, my dad had this new-fangled job as a “computer programmer”. The most vivid and lasting evidence of this vocation was huge stacks of perforated printer paper and dozens upon dozens of floppy disks. But I also remember him saying this phrase enough times to get it stuck in my head: “garbage in, garbage out.” This phrase became popular in the early computer days because it was an easy way to explain what happened when flawed data was put into a machine – the machine spit flawed data out. This was true when my dad was doing…whatever he was doing… and when I was trying to change the look of my MySpace page with rudimentary HTML code. And it’s true with AI, too. (Which is a big reason we need the tech world to focus more on empathy. But I won’t go on that tangent today.)
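“Garbage in, garbage out” is easy to demonstrate: the computation can be flawless and the answer still useless. A made-up example:

```python
# Garbage in, garbage out: the math is correct, the result is meaningless,
# because one input is bad. The numbers are invented for illustration.
commute_minutes = [22, 25, 19, 2500, 24]  # 2500 is a data-entry error

average = sum(commute_minutes) / len(commute_minutes)
print(f"Average commute: {average:.0f} minutes")  # 518 -- garbage out
```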

When I was just starting work on my book, I read Cathy O’Neil’s Weapons of Math Destruction (read it.), which convinced me beyond any remaining doubt that we had a problem. Relying on algorithms that have little to no oversight and are entirely susceptible to contamination by human bias – conscious or not – to make decisions for us is not a liberal anxiety dream. It’s our current reality. It’s just that a lot of us – and I’ll be clear that here I mean a lot of us white and otherwise nonmarginalized people – don’t really notice.

Maybe you still think this is BS. Numbers are numbers, regardless of the intent/mistake/feeling/belief of the person entering them into a computer, you say. This can be hard to get your head around if you see all bias as intentional – I get that; I’ve been there. So let me give you some examples:

There are several studies showing that people with names that don’t “sound white” are often passed over for jobs in favor of more “white-sounding” names. It reportedly happens to women, too. A couple of years ago, Amazon noticed that the algorithm it had created to sift through resumes was biased against women. It had somehow “taught itself that male candidates were preferable.” Amazon tweaked the algorithm, but eventually gave up on it, claiming it might find other ways to skirt neutrality. The algorithm wasn’t doing that with a mind of its own, of course. Machine-learning algorithms, well, learn, but they have to have teachers, whether those teachers are people or gobs of data arranged by people (or by other bots that were programmed by people…). There’s always a person involved, is my point, and people are fallible. And biased, even unconsciously. Even IBM admits it. This is a really difficult problem that even the biggest tech companies haven’t yet figured out how to fix. This isn’t about saying “developers are racist/sexist/evil”; it’s about accounting for the fact that all people have biases, and even if we try to set them aside, they can show up in our work. Especially when those of us doing that work happen to be a pretty homogeneous group. One argument for more diversity in tech is that if the humans making the bots are more diverse, the bots will know how to recognize and value more than one kind of person. (Hey, maybe instead of trying to kill us, the bots that take over the world will be super woke!)
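To see how an algorithm can “teach itself” a bias like that with no one intending it, here’s a deliberately tiny toy screener trained on biased historical decisions. The data, the words, and the scoring method are all invented – this is nothing like Amazon’s actual system – but the dynamic is the same: the model can only learn the pattern it’s shown:

```python
# Toy resume screener "trained" on biased historical decisions (all invented).
from collections import defaultdict

# (resume keywords, was the candidate hired?) -- note the biased pattern:
# resumes mentioning "women's" were rejected.
history = [
    (["captain", "chess", "club"], True),
    (["executed", "projects", "chess"], True),
    (["women's", "chess", "club", "captain"], False),
    (["women's", "college", "projects"], False),
    (["executed", "college", "club"], True),
]

# "Training": each word's score is the hire rate of resumes containing it.
hires, seen = defaultdict(int), defaultdict(int)
for words, hired in history:
    for word in words:
        seen[word] += 1
        hires[word] += hired

def score(resume_words):
    """Average historical hire rate of the words the model has seen."""
    rates = [hires[w] / seen[w] for w in resume_words if w in seen]
    return sum(rates) / len(rates) if rates else 0.5

print(round(score(["chess", "club", "captain"]), 2))             # 0.61
print(round(score(["women's", "chess", "club", "captain"]), 2))  # 0.46
```

The model never sees anyone’s gender; it just learns that a word correlated with past rejections predicts rejection – which is exactly how proxy bias sneaks into real systems.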

Another example: In 2015, Google came under fire after a facial recognition app identified several black people as gorillas. There’s no nice way to say that. That’s what happened. The company apologized and tried to fix it, but the best it could do at the time was to remove “gorilla” as an option for the AI. So what happened? Google hasn’t been totally clear on the answer to this, but facial recognition AI works by learning to categorize lots and lots of photos. Technically someone could have trained it to label black people as gorillas, but perhaps more likely is that the folks training the AI in this case simply didn’t consider this potential unintended consequence of letting an imperfect facial recognition bot out into the world. (And, advocates argue, maybe more black folks on the developer team could have prevented this. Maybe.) Last year a spokesperson told Wired: “Image labeling technology is still early and unfortunately it’s nowhere near perfect.” At least Google Photos lets users report mistakes, but for those who are still skeptical, note: that means even Google acknowledges mistakes are being – and will continue to be – made in this arena.

One last example, because it’s perhaps the most obvious and also maybe the most ridiculous: Microsoft’s Twitter bot, Tay. In 2016, this AI chatbot was unleashed on Twitter, ready to learn how to talk like a millennial and show off Microsoft’s algorithmic skills. But almost as soon as Tay encountered the actual people of Twitter – all of them, not just cutesy millennials speaking in Internet code but also unrepentant trolls and malignant racists – her limitations were put into stark relief. In less than a day, she became a caricature of a violent, anti-Semitic racist. Some of the tweets seemed to come out of nowhere, but some were thanks to a nifty feature in which people could say “repeat after me” to Tay and she would do just that. (Who ever would have thought that could backfire on Twitter?) Microsoft deleted Tay’s most offensive tweets and eventually made her account private. It was a wild day on the Internet, even for 2016, but it was quickly forgotten. The story bears repeating today, though, because clearly we are still working out the whole bot-human interaction thing.

To close, I’ll just leave you with AOC’s words at the MLK event. See if they still seem dramatic to you.

“Look at – IBM was creating facial recognition technology to target, to do crime profiling. We see over and over again, whether it’s FaceTime, they always have these racial inequities that get translated because algorithms are still made by human beings, and those algorithms are still pegged to those, to basic human assumptions. They’re just automated, and automated assumptions, it’s like if you don’t fix the bias then you’re automating the bias. And that gets even more dangerous.”

(This is the “crime profiling” thing she references, by the way. I’m not sure where the FaceTime thing comes from but I will update this post if/when I get some context on that.)

Update: Thanks to the PLUG newsletter (which I highly recommend) I just came across this fantastic video that does a wonderful job of explaining the issue of AI bias and diversity. It includes a pretty wild example, too. Check it out.