You’re probably already media literate – trust your instincts

This week is Media Literacy Week in the U.S., and that means lots of people will talk about how important it is to be able to tell the difference between facts and fake news, especially online.

It’s true. It’s really important. But it’s always been difficult, and it’s becoming harder and harder as we get more and more of our information via social media. This problem is becoming increasingly apparent as we inch closer to the 2020 presidential election. If you aren’t obsessively reading about this (in which case, I envy you) you might have missed that Mark Zuckerberg, head of Facebook, recently said his platform will not be taking action against political ads that contain lies.

In statements last week, he said he’s really concerned about the “erosion of truth,” but he just can’t let Facebook be the arbiter of right and wrong by taking down political ads that contain false statements. One of his primary arguments is that the FCC requires radio and television stations to give candidates equal time – even as Zuckerberg likes to claim Facebook is not a media company… But it’s this “we’re not the arbiter of truth” piece that feels most troubling to me.

It’s a very familiar argument, similar to what I’ve heard from individuals who’ve decided they don’t trust any mainstream media source: “We can’t trust one arbiter of truth, so we really can’t trust any, and we’ll never know what’s ‘true,’ so why bother worrying about it?” Usually I would get a message like this after gently suggesting to an acquaintance or distant family member that a link to InfoWars or NaturalNews or Prager U might be misleading.

Sometimes journalists get a little resentful about this stuff, which, as a journalist, I get. But I also get not wanting to be condescended to about what’s “true,” and I get that there are so many information sources out there, it can be truly impossible to sift through it all without spending a lot of time and energy. I also get that some people might have that time and energy, but choose to spend it finding things that confirm what they already believe – it’s a free country, so I won’t try to talk you out of it.

But… I think that deep down most people really do care about facts, and really don’t like being lied to. Yes, politics is dirty, and media can be too. But throwing the proverbial baby out with the bathwater in these two areas is dangerous. Media is meant to hold the powerful accountable. Facebook can’t decide if it’s a member of the media, or one of the powerful, or both. It might feel like we, lowly civilians, can’t figure that out for them or do anything about it, but what I want you to think about this Media Literacy Week is that we can.

Media literacy doesn’t have to imply you’re illiterate about the media, or that you need to take some kind of formal class or workshop to understand what’s going on. For most people – people who want to know what’s true but are just a little overwhelmed – it’s about trusting your instincts.

Does a headline seem too good or bad or crazy to be true? It probably is. You can check by looking at the URL, reading the story, and clicking on links within it.

Are you skeptical of the way something is being framed? That’s great insight. You can read articles by other publications about the same topic to round out your exposure to the story and see what makes sense to you.

You’re still going to suffer from confirmation bias – we all want to believe what we want to believe. But I think being intentional about this, recognizing when we’re maybe understanding something based more on our wishes than the facts in front of us, will make all the difference.

It’s true – existentially, it’s hard to know what’s objectively, 100%, no-doubt true. But that’s not what media literacy is about. It’s about knowing what happened, who did it, and maybe why. Sometimes answering those questions takes more than one tweet or article or even one year of reporting and reading. That’s okay – that’s how it’s always been. Getting comfortable with not knowing some things for sure, but being pretty confident you’re following along, is half the battle.

Resources:

  • Subscribe to The Flip Side, a newsletter that shows you how the right, left, and center are covering various big news items (especially political stuff). It doesn’t always make me feel like I know what’s true for certain, but it helps me understand better the way things are being framed and why.
  • Take this News Literacy Quiz. Fun fact – I didn’t pass the first time I took it myself!
  • Read these 8 ways to tell if a website is reliable.
  • Subscribe to the news sources you use most, and/or sign up for their newsletters so you get the information right in your inbox, rather than through the filter of your social media feed.


Instagram, Nextdoor, and “Be Nice” Nudges

One of the first pieces of empathy-building tech* I wrote about was an algorithm built to recognize when comments on a newspaper story went off the rails. It was a tough story to place because it was hard to understand and even harder to explain. (I’m forever grateful for good editors!) The gist was that a group of researchers wanted to see if they could cultivate an environment in the comment section of a controversial story that would facilitate good, productive conversation. Their work eventually turned into Faciloscope, a tool aimed at detecting trolling behaviors and mediating them.

Like many research projects, it’s kind of hard to tell what happened after the initial buzz – grants change, people move, tech evolves, etc. All’s been pretty quiet on the automated comment section management front for a while, but over the past few months that’s begun to change. Now we can see similar technology popping up in the apps we use every day.

Photo by Randalyn Hill on Unsplash

Earlier this year, Head of Instagram Adam Mosseri announced that the app would soon have new features to help prevent bullying. The official plan was released yesterday, and it boils down to one new function: Restrict. According to Instagram, “Restrict is designed to empower you to quietly protect your account while still keeping an eye on a bully.” It works by letting you approve Restricted people’s comments on your posts before they appear – and you can delete or ignore them without even reading them, if you want. You won’t get notifications for these comments, so it’s unclear to me how you’d know they happened unless you went looking for them, which hopefully you aren’t doing, but let’s be honest… we all do that.

Anyway, what about direct messages? DMs from Restricted people will turn into “message requests,” like what already happens when someone you don’t know sends you a message. The sender won’t be able to see if you’ve read their message.
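If it helps to picture the mechanics, here’s a minimal sketch – in Python, with names I made up, since Instagram hasn’t published any implementation details – of what a Restrict-style approval queue could look like: comments from restricted accounts are held out of public view and trigger no notification until the account owner gets around to reviewing them.

```python
# Hypothetical sketch of a Restrict-style comment queue.
# None of these names come from Instagram; they're invented for illustration.

class Post:
    def __init__(self, owner, restricted_accounts):
        self.owner = owner
        self.restricted = set(restricted_accounts)
        self.visible_comments = []   # shown publicly, owner is notified
        self.pending_comments = []   # held quietly, no notification sent

    def add_comment(self, author, text):
        comment = {"author": author, "text": text}
        if author in self.restricted:
            # Held for review: the commenter still sees their own comment,
            # but no one else does, and the owner gets no notification.
            self.pending_comments.append(comment)
        else:
            self.visible_comments.append(comment)
            self.notify_owner(comment)

    def review_pending(self, approve):
        # The owner can approve, delete, or simply ignore pending comments.
        for comment in list(self.pending_comments):
            if approve(comment):
                self.visible_comments.append(comment)
            self.pending_comments.remove(comment)

    def notify_owner(self, comment):
        print(f"Notify {self.owner}: new comment from {comment['author']}")


post = Post(owner="me", restricted_accounts={"bully123"})
post.add_comment("friend", "Great photo!")      # visible immediately, notifies owner
post.add_comment("bully123", "something mean")  # silently queued
post.review_pending(approve=lambda c: False)    # dismiss without reading
```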

Inexplicably, Instagram also used this announcement to tell us about its new “Create Don’t Hate” sticker, as if that’s an anti-bullying feature… when it’s literally just a sticker you can put on your story. So… okay, cool?

I wouldn’t exactly call this empathy-building tech, but I would hear an argument that it’s an example of tech showing empathy for its users, with the usual caveat that this is probably way too little, way too late. It seems like a good thing, don’t get me wrong. It just should have been a thing much sooner.

This won’t have much use for me, because I’ve already unfollowed or blocked the people whose comments I’d least like to see. What I’d really like is a pop-up kind of like what Netflix has, that alerts me after I’ve been scrolling for more than 15 minutes… “Maybe it’s time for a break?” Or the ability to customize a pop-up for when I visit one of my frenemies’ accounts… “Remember why you unfollowed this person??” But I could see it being useful for a teenager who gets bombarded with bullying messages. It’s a start, at least.

Nextdoor, essentially a neighborhood-specific Facebook/Reddit hybrid, did recently release prompts that might encourage empathy. Like all social media platforms, Nextdoor has gained a reputation for fostering nastiness, NIMBYism, and even racism. So it launched a “kindness reminder,” which pops up to let you know if your reply to someone’s comment “looks similar to content that’s been reported in the past” and gives you a chance to re-read the community guidelines and rephrase your comment.
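Nextdoor hasn’t said how the under-the-hood check actually works, but conceptually it’s a simple gate: compare the draft reply against comments that were reported before, and if it looks close enough, show the reminder instead of posting right away. Here’s a rough sketch, with plain word overlap standing in for whatever real model Nextdoor uses – every name and threshold here is made up:

```python
# Toy sketch of a "kindness reminder" check. The similarity measure (word
# overlap) is a stand-in; Nextdoor's actual system is not public.

def looks_like_reported_content(draft, reported_comments, threshold=0.5):
    draft_words = set(draft.lower().split())
    for reported in reported_comments:
        reported_words = set(reported.lower().split())
        overlap = len(draft_words & reported_words) / max(len(draft_words), 1)
        if overlap >= threshold:
            return True
    return False

def submit_reply(draft, reported_comments):
    if looks_like_reported_content(draft, reported_comments):
        # Show the reminder: link to the community guidelines and
        # offer a chance to edit before posting.
        print("Reminder: this looks similar to content that's been reported before.")
        return "prompt_to_edit"
    return "post"
```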

Nextdoor says the feature is meant to “encourage positivity across the Nextdoor platform,” but they also seem to suggest that it will make neighborhoods themselves more kind. They claim that in early tests of the feature, 1 in 5 people chose to edit their comments, “resulting in 2-% fewer negative comments” (though it’s not clear to me exactly how they measure negativity). They also claim the Kindness Reminder gets prompted less over time in areas where it’s been tested.

This, like Instagram’s Restricted feature, is an example of a social media company responding to many, many, many complaints of negative behavior and impact. But in Nextdoor’s case, there at least seems to be more transparency. In their post explaining the new feature, Nextdoor says the company built an advisory panel of experts, including Dr. Jennifer Eberhardt, a social scientist who wrote a book about racial bias. There was apparently a session with some of Eberhardt’s students in which Nextdoor employees (executives? unclear) shared their experiences with bias in their own lives as well as on the platform. So, that’s something. If nothing else, I could imagine the Kindness Reminder at least making me stop for a second before dashing off a snarky comment, something that doesn’t happen as much as it used to but is still an unfortunate possibility for me…

One big question about all of this, of course, is why can’t we just use our internal “kindness reminders”? Most of us do have them, after all. But it’s hard when, as Eberhardt notes in the Nextdoor press release: “the problems that we have out in the world and in society make their way online where you’re encouraged to respond quickly and without thinking.” We can create as many empathy-focused tools as we want, but as long as that’s the case, there will always be more work to do.


*When I first started writing about this stuff, the concept seemed new to a lot of people and it seemed obvious that the words “ostensibly” or “supposedly” or “hopefully” were implied. Today, not so much, for good reason: a lot of tech that’s advertised as empathetic seems more invasive or manipulative. So, I hope you will trust me when I say I understand that context, and I think about the phrase “empathy-building tech” as having an asterisk most of the time.

Frenemy of the People

Are you a real millennial if you don’t have your own podcast? Well…I’m about to find out. Last week I launched Frenemy of the People, a podcast about journalism and trust. It includes conversations with reporters and editors about the work they do, plus broader discussions about “the media,” how readers/viewers/listeners relate to it, and vice versa.


You can hear the teaser here now, and the first episode should be dropping tomorrow, October 1!

This was one of those projects that just kept tugging at me, even when I tried to convince myself that it wouldn’t be worth the time/potential blowback. But eventually I felt like I couldn’t not do it, so I did.

I’m still figuring out the whole audio production thing. Believe it or not, a big part of my graduate program was focused on audio production, but back then I had access to much better software… which reminds me, if you like the podcast – or even just the idea of it – and want it to be even better, please consider contributing via Patreon.


The Facebook Supreme Court

Photo by Pixabay on Pexels.com

Yesterday Facebook officially launched its Oversight Board, an independent body that will make decisions about what can and cannot be posted on Facebook and hear appeals from people whose posts have been taken down. It’s been compared to the Supreme Court, the top appeals court in the United States justice labyrinth.

Like the Supreme Court, Facebook says the Oversight Board will create precedent, meaning earlier decisions will be used to shape later ones, so they aren’t reinventing the wheel every time. Also like the Supreme Court, the Board will try to come to consensus, but when everyone can’t agree, the majority will make the decision and those who dissent can include their reasons in the final decision.

Unlike the Supreme Court, though, the Oversight Board’s members won’t be nominated by the president… I mean CEO, Mark Zuckerberg. He’s only appointing the two co-chairs, and it will be up to them to choose the rest of the 11-person board (it will get bigger as time goes on, according to the charter).

According to Facebook:

The purpose of the board is to protect free expression by making principled, independent decisions about important pieces of content and by issuing policy advisory opinions on Facebook’s content policies.

How will they choose what pieces of content are “important” enough to get an official ruling? The process is laid out in a post in Facebook’s newsroom. Cases referred to the Board will be those that involve “real-world impact, in terms of severity, scale and relevance to public discourse,” and that are “disputed, the decision is uncertain and/or the values involved are competing.”

I’m spitballing here, but my guess is that means it wouldn’t include your aunt posting Confederate flag memes to her 12 followers, but it might include a politician who posts the same to their thousands of followers. I’d also guess that other cases will include things like body positivity posts that have been reported and taken down, like this one on Instagram (which is owned by Facebook).

In a blog post introducing the Board, Zuckerberg said it will start with “a small number of cases,” and admitted there’s still a lot of work to be done before it’s operational. I couldn’t find a method of actually submitting a case, for example.

The big questions I ask myself when I see things like this: Do I think it is an empathetic use of technology? Do I think it shows an understanding of – and compassion for – users’ experiences and concerns? And do I think it will encourage users to be more empathetic themselves?

In some ways yes; almost; and maybe.

I do not think Zuckerberg ever expected to be tasked with arbitrating free speech on the internet. But he’s here now, and he’s getting a lot of pressure from politicians of all stripes to do something about harassment, privacy violations, and alleged censorship. Not to mention the fact that some lawmakers (and constituents, and former Facebook employees) want to break up the company’s ostensible monopoly on social media discourse. It’s all eyes on Zuck. His response to the free speech stuff has long been that it’s not his job to make those decisions. He has said he wants governments to make it clearer what’s okay to post online and what’s not. But by virtue of global politics and Facebook’s size and influence, the company is already making these decisions every day whether he likes it or not.

So I think a Supreme Court-style Oversight Board that can make binding decisions he cannot veto is smart. I think it could assuage some of his critics and make certain people feel more comfortable using the platform. I think it’s more self-preservation than empathy, but I think the effect could be an empathetic one if all goes well. But I also think it’s a HUGE undertaking that could go sideways pretty easily.

An internet appeals court is a real, tangible thing Facebook can give us, and it can have real, tangible results – controversial though they will be. Assurance that we won’t be manipulated by Macedonian trolls or bullied by classmates, or that we can post about our lives and ideas without unwittingly entering the thunderdome, is a lot harder to give.

Hi.

I took the summer off from blogging. Instead I worked a lot, read a lot, spent time with family, bought a house, watched baseball, hung out with friends, celebrated new babies (not mine), and started a newsletter.

I’m in a weird limbo right now, creatively. I got the advance reader copies of my book and I simultaneously feel very overwhelmed by that and very unmoored by not having a book to work on.

I could, of course, start working on a new book. But… I can’t figure out what to write about. The problem isn’t a lack of ideas, it’s too many. There are a couple that stick out, but whenever I daydream about them for too long the mean editor part of my brain pokes me and says, “yeah but that doesn’t matter.”

I’ve been toying with a couple of fiction ideas, but is that what I should be spending my time on when there are important nonfiction things to write about?

I have a handful of nonfiction ideas, but they aren’t relevant to any of the Major Crises facing our world right now – so I shouldn’t bother, right? That would be a waste.

Sounding a little self-important, huh? A little neurotic. Well, that’s my brain!

Anyway. I’ll figure it out. At some point I’ll realize what I actually want to spend a huge chunk of time and energy on, and I’ll do it.

In the meantime, I’m working really hard at my day job, taking on some new volunteer responsibilities, and working on a podcast project. Oh, and I deleted Instagram from my phone again…!


Woulda, shoulda, coulda

Twitter co-founder Ev Williams posted a thread yesterday. Not super surprising, since he’s one of the fathers of Twitter, but as he explained in said thread, he doesn’t post his thoughts there much. He sticks to links, because he, “[doesn’t] enjoy debating with strangers in a public setting” and he “always preferred to think of [Twitter] as an information network, rather than a social network.”

That definitely elicited some eye-rolls, but this was the tweet – in a long thread about how he wants reporters to stop asking him how to fix Twitter’s abuse problems – that really caught my eye…

That is… exactly the problem! It’s reassuring to see this apparent self-awareness, and frustrating to see how late it’s come – and how defensive he still is…

Maybe he feels like he can’t say for sure whether being more aware of how people “not like him” were being treated or having a more diverse leadership team or board would have led the company to tackle abuse sooner… but those of us who are “not like him” are pretty confident it would have. Or at least it could have. It should have.

This is what I mean when I talk about a lack of empathy in tech. I don’t know Ev Williams or any of his co-founders; I don’t know many people who have founded anything at all. And I understand that founders and developers are people deserving of empathy too. As I read Williams’s thread, I tried to put myself in his shoes, even as I resisted accepting much of what he was saying. I get that “trying to make the damn thing work” must have been a monumental task. But as I talk about here a lot – there’s empathy, and then there’s sympathy. And as Dylan Marron likes to say, empathy is not endorsement. I can imagine it, but I don’t get it. And it’s little solace to the hundreds of people who are harassed and abused via Twitter every day to hear it confirmed that their safety wasn’t a priority, whatever the reason.

They know this – we know this. The question is, what now? Williams, for his part, brushes off this question. It’s not his problem anymore, he seems to say, and he doesn’t know how to fix it, but if you have any “constructive ideas,” you should let Twitter know (or write about them on Medium, Williams’s other tech baby…)

The toxicity that Williams says he’s trying to avoid – that he says his famous friend is very upset by, that he seems almost ready to acknowledge is doing real damage to many, many other people who use Twitter – was part of what inspired me to write The Future of Feeling. I wanted to know, if it’s this bad right now, how much worse could it get? Is anyone trying to stop this train?

I talked to a lot of people in my reporting for the book, and over and over again I heard the same idea echoed: empathy has to be part of the fabric of any new technology. It has to be present in the foundation. It has to be a core piece of the mission. Creating a thing for the sake of creating the thing isn’t good enough anymore. (Frankly, it never was.) The thing you create is very likely to take on a life of its own. You need to give it some soul, too.

Williams ended his thread with a tweet that actually resonated with me. It’s something I’ve found to be absolutely true:

People made this mess. People will have to clean it up. If Williams doesn’t want to, or know how to, I know a lot of other folks who are getting their hands dirty giving it a try.

Droning on

Hello! Good morning. Let’s talk about drones.

Earlier this year, not long after Christmas, my husband and I went with one of our best friends to a historic village in North Carolina. We hadn’t been there since we were kids and wanted to experience it as adults. (See: walking into a building labeled “tavern” and walking right back out, dejected that there were no actual beers to be had.)

About halfway through the day, we exited an old building into a side yard just in time to see a drone taking off. The guy manning it was just a few feet away. He launched it off the ground and into the air, and I had two simultaneous thoughts:

“Wow, he’s gonna get some awesome photos of this place” and

“Wow, that sound is really, REALLY annoying, especially here!”

Such is the conundrum of life in 2019. There are so many tech things that make our lives cooler, easier, or safer while also being annoying, intrusive, or otherwise harmful. In the past I don’t think the developers of these technologies have done a great job anticipating future issues or needs. I do think that’s changing. But in the meantime, these are the kinds of things we have to deal with (and frankly, we probably will always have some degree of this issue).

I was recently reporting a piece about medical drones (coming soon) and came across this study that determined drones to be the most annoying of all vehicles. And that’s saying a lot, considering we also have motorcycles and 18-wheelers below them and airplanes above.

From a great New Scientist piece on the study:

“We didn’t go into this test thinking there would be this significant difference,” says study coauthor Andrew Christian of NASA’s Langley Research Center, Virginia. It is almost unfortunate the research has turned up this difference in annoyance levels, he adds, as its purpose was merely to prove that Langley’s acoustics research facilities could contribute to NASA’s wider efforts to study drones.

It’s a bummer all around, really. The study found that people (only 38 people, but still) experienced drone buzzes much the way they would experience a car that was twice as close as normal. These people didn’t even know what they were listening to, by the way, so we can’t just assume they’re anti-drone.

The piece I’ve been reporting is about the use of drones to save time and money moving blood samples and medical supplies. I wonder if people might find drones less annoying if they knew they were up there to help people? I hope that research is being done somewhere (I would not be surprised, as NASA and the FAA are doing a lot of work to study drone impact right now).

But even if we can get used to the sound of drones, or assuage ourselves with the thought that some of them are saving lives, we still have to look at them. It bugged me to see a black plastic mini-spaceship buzzing around a historic village, but it didn’t scare me or make me feel unsafe. Driving down the road and suddenly seeing a flock of them overhead, and not necessarily knowing their purpose… would be a different story.