You’re probably already media literate – trust your instincts

This week is Media Literacy Week in the U.S., and that means lots of people will talk about how important it is to be able to tell the difference between facts and fake news, especially online.

It’s true. It’s really important. But it’s always been difficult, and it’s becoming harder and harder as we get more and more of our information via social media. This problem is becoming increasingly apparent as we inch closer to the 2020 presidential election. If you aren’t obsessively reading about this (in which case, I envy you) you might have missed that Mark Zuckerberg, head of Facebook, recently said his platform will not be taking action against political ads that contain lies.

In statements last week, he said he’s really concerned about the “erosion of truth,” but he just can’t let Facebook be the arbiter of right and wrong by taking down political ads that contain false statements. One of his primary arguments is that the FCC requires radio and television stations to give candidates equal time, but Zuckerberg also likes to claim Facebook is not a media company… But it’s this “we’re not the arbiter of truth” piece that feels most troubling to me.

It’s a very familiar argument, similar to what I’ve heard from individuals who’ve decided they don’t trust any mainstream media source: “We can’t trust one arbiter of truth, so we really can’t trust any, and we’ll never know what’s ‘true,’ so why bother worrying about it?” Usually I would get a message like this after gently suggesting to an acquaintance or distant family member that a link to InfoWars or NaturalNews or Prager U might be misleading.

Sometimes journalists get a little resentful about this stuff, which, as a journalist, I get. But I also get not wanting to be condescended to about what’s “true,” and I get that there are so many information sources out there, it can be truly impossible to sift through it all without spending a lot of time and energy. I also get that some people might have that time and energy, but choose to spend it finding things that confirm what they already believe – it’s a free country, so I won’t try to talk you out of it.

But… I think that deep down most people really do care about facts, and really don’t like being lied to. Yes, politics is dirty, and media can be too. But throwing the proverbial baby out with the bathwater in these two areas is dangerous. Media is meant to hold the powerful accountable. Facebook can’t decide if it’s a member of the media, or one of the powerful, or both. It might feel like we, lowly civilians, can’t figure that out for them or do anything about it, but what I want you to think about during this Media Literacy Week is that we can.

Media literacy doesn’t have to imply you’re illiterate about the media, or that you need to take some kind of formal class or workshop to understand what’s going on. For most people – people who want to know what’s true but are just a little overwhelmed – it’s about trusting your instincts.

Does a headline seem too good or bad or crazy to be true? It probably is. You can check by looking at the URL, reading the story, and clicking on links within it.

Are you skeptical of the way something is being framed? That’s great insight. You can read articles by other publications about the same topic to round out your exposure to the story and see what makes sense to you.

You’re still going to suffer from confirmation bias – we all want to believe what we want to believe. But I think being intentional about this, recognizing when we’re maybe understanding something based more on our wishes than the facts in front of us, will make all the difference.

It’s true – existentially, it’s hard to know what’s objectively, 100%, no-doubt true. But that’s not what media literacy is about. It’s about knowing what happened, who did it, and maybe why. Sometimes answering those questions takes more than one tweet or article or even one year of reporting and reading. That’s okay – that’s how it’s always been. Getting comfortable with not knowing some things for sure, but being pretty confident you’re following along, is half the battle.


  • Subscribe to The Flip Side, a newsletter that shows you how the right, left, and center are covering various big news items (especially political stuff). It doesn’t always make me feel like I know what’s true for certain, but it helps me understand better the way things are being framed and why.
  • Take this News Literacy Quiz. Fun fact – I didn’t pass the first time I took it myself!
  • Read these 8 ways to tell if a website is reliable.
  • Subscribe to the news sources you use most, and/or sign up for their newsletters so you get the information right in your inbox, rather than through the filter of your social media feed.


Instagram, Nextdoor, and “Be Nice” Nudges

One of the first pieces of empathy-building tech* I wrote about was an algorithm built to recognize when comments on a newspaper story went off the rails. It was a tough story to place because it was hard to understand and even harder to explain. (I’m forever grateful for good editors!) The gist was that a group of researchers wanted to see if they could cultivate an environment in the comment section of a controversial story that would facilitate good, productive conversation. Their work eventually turned into Faciloscope, a tool aimed at detecting trolling behaviors and mediating them.

Like many research projects, it’s kind of hard to tell what happened after the initial buzz – grants change, people move, tech evolves, etc. All’s been pretty quiet on the automated comment section management front for a while, but over the past few months that’s begun to change. Now we can see similar technology popping up in the apps we use every day.

Photo by Randalyn Hill on Unsplash

Earlier this year, Head of Instagram Adam Mosseri announced that the app would soon have new features to help prevent bullying. The official plan was released yesterday, and it boils down to one new function: Restrict. According to Instagram, “Restrict is designed to empower you to quietly protect your account while still keeping an eye on a bully.” It works by letting you approve Restricted people’s comments on your posts before they appear – and you can decide to delete or ignore them without even reading them, if you want. You won’t get notifications for these comments, so it’s unclear to me how you’d know they happened unless you went looking for them, which hopefully you aren’t doing, but let’s be honest… we all do that.

Anyway, what about direct messages? DMs from Restricted people will turn into “message requests,” like what already happens when someone you don’t know sends you a message. The sender won’t be able to see if you’ve read their message.

Inexplicably, Instagram also used this announcement to tell us about its new “Create Don’t Hate” sticker, as if that’s an anti-bullying feature… when it’s literally just a sticker you can put on your story. So… okay, cool?

I wouldn’t exactly call this empathy-building tech, but I would hear an argument that it’s an example of tech showing empathy for its users, with the usual caveat that this is probably way too little, way too late. It seems like a good thing, don’t get me wrong. It just should have been a thing much sooner.

This won’t have much use for me, because I’ve already unfollowed or blocked the people whose comments I’d least like to see. What I’d really like is a pop-up kind of like what Netflix has, that alerts me after I’ve been scrolling for more than 15 minutes… “Maybe it’s time for a break?” Or the ability to customize a pop up for when I visit one of my frenemies’ accounts… “Remember why you unfollowed this person??” But I could see it being useful for a teenager who gets bombarded with bullying messages. It’s a start, at least.

Nextdoor, essentially a neighborhood-specific Facebook/Reddit hybrid, did recently release prompts that might encourage empathy. Like all social media platforms, Nextdoor has gained a reputation for fostering nastiness, NIMBYism, and even racism. So it launched a “kindness reminder,” which pops up to let you know if your reply to someone’s comment “looks similar to content that’s been reported in the past” and gives you a chance to re-read the community guidelines and rephrase your comment.

Nextdoor says the feature is meant to “encourage positivity across the Nextdoor platform,” but they also seem to suggest that it will make neighborhoods themselves more kind. They claim that in early tests of the feature, 1 in 5 people chose to edit their comments, “resulting in 20% fewer negative comments” (though it’s not clear to me exactly how they measure negativity). They also claim the Kindness Reminder gets prompted less over time in areas where it’s been tested.
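For the curious, the basic idea behind a nudge like this can be sketched in a few lines of code. To be clear, this is purely illustrative and not Nextdoor’s actual implementation – the word-overlap similarity metric and the threshold are my own simplistic stand-ins for whatever the company really uses:

```python
# Illustrative sketch of a "kindness reminder" style nudge: flag a draft
# comment if its wording looks similar to previously reported comments.
# NOT Nextdoor's real system; the Jaccard word-overlap metric and the
# 0.5 threshold here are hypothetical stand-ins.

def _words(text):
    """Lowercase a comment and split it into a set of words."""
    return set(text.lower().split())

def looks_like_reported(draft, reported_comments, threshold=0.5):
    """Return True if the draft shares enough vocabulary (Jaccard
    similarity) with any previously reported comment."""
    draft_words = _words(draft)
    if not draft_words:
        return False
    for reported in reported_comments:
        reported_words = _words(reported)
        overlap = len(draft_words & reported_words) / len(draft_words | reported_words)
        if overlap >= threshold:
            return True  # show the kindness reminder before posting
    return False

reported = ["you people are the worst neighbors ever"]
print(looks_like_reported("you are the worst neighbor ever", reported))   # similar wording
print(looks_like_reported("anyone seen a lost orange cat?", reported))    # harmless question
```

A real system would presumably use a trained classifier rather than raw word overlap, but the shape is the same: compare the draft against a corpus of reported content, and interrupt the poster before the comment goes live rather than after.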

This, like Instagram’s Restricted feature, is an example of a social media company responding to many, many, many complaints of negative behavior and impact. But in Nextdoor’s case, there at least seems to be more transparency. In their post explaining the new feature, Nextdoor says the company built an advisory panel of experts, including Dr. Jennifer Eberhardt, a social scientist who wrote a book about racial bias. There was apparently a session with some of Eberhardt’s students in which Nextdoor employees (executives? unclear) shared their experiences with bias in their own lives as well as on the platform. So, that’s something. If nothing else, I could imagine the Kindness Reminder at least making me stop for a second before dashing off a snarky comment, something that doesn’t happen as much as it used to but is still an unfortunate possibility for me…

One big question about all of this, of course, is why can’t we just use our internal “kindness reminders”? Most of us do have them, after all. But it’s hard when, as Eberhardt notes in the Nextdoor press release: “the problems that we have out in the world and in society make their way online where you’re encouraged to respond quickly and without thinking.” We can create as many empathy-focused tools as we want, but as long as that’s the case, there will always be more work to do.


*When I first started writing about this stuff, the concept seemed new to a lot of people and it seemed obvious that the words “ostensibly” or “supposedly” or “hopefully” were implied. Today, not so much, for good reason: a lot of tech that’s advertised as empathetic seems more invasive or manipulative. So, I hope you will trust me when I say I understand that context, and I think about the phrase “empathy-building tech” as having an asterisk most of the time.

The Facebook Supreme Court

Photo by Pixabay

Yesterday Facebook officially launched its Oversight Board, an independent body that will make decisions about what can and cannot be posted on Facebook and hear appeals from people whose posts have been taken down. It’s been compared to the Supreme Court, the top appeals court in the United States justice labyrinth.

Like the Supreme Court, Facebook says the Oversight Board will create precedent, meaning earlier decisions will be used to shape later ones, so they aren’t reinventing the wheel every time. Also like the Supreme Court, the Board will try to come to consensus, but when everyone can’t agree, the majority will make the decision and those who dissent can include their reasons in the final decision.

Unlike the Supreme Court though, the Oversight Board’s members won’t be nominated by the president…I mean CEO, Mark Zuckerberg. He’s only appointing the two co-chairs, and it will be up to them to choose the rest of the 11-person board (it will get bigger as time goes on, according to the charter).

According to Facebook:

The purpose of the board is to protect free expression by making principled, independent decisions about important pieces of content and by issuing policy advisory opinions on Facebook’s content policies.

How will they choose what pieces of content are “important” enough to get an official ruling? The process is laid out in a post in Facebook’s newsroom. Cases referred to the Board will be those that involve “real-world impact, in terms of severity, scale and relevance to public discourse,” and that are “disputed, the decision is uncertain and/or the values involved are competing.”

I’m spitballing here, but my guess is that means it wouldn’t include your aunt posting confederate flag memes to her 12 followers, but it might include a politician who posts the same to their thousands of followers. My guess is that other cases will include things like body positivity posts that have been reported and taken down, like this one on Instagram (which is owned by Facebook).

In a blog post introducing the Board, Zuckerberg said it will start with “a small number of cases,” and admitted there’s still a lot of work to be done before it’s operational. I couldn’t find a method of actually submitting a case, for example.

The big questions I ask myself when I see things like this: Do I think it is an empathetic use of technology? Do I think it shows an understanding of – and compassion for – users’ experiences and concerns? And do I think it will encourage users to be more empathetic themselves?

In some ways yes; almost; and maybe.

I do not think Zuckerberg ever expected to be tasked with arbitrating free speech on the internet. But he’s here now, and he’s getting a lot of pressure from politicians of all stripes to do something about harassment, privacy violations, and alleged censorship. Not to mention the fact that some lawmakers (and constituents, and former Facebook employees) want to break up the company’s ostensible monopoly on social media discourse. It’s all eyes on Zuck. His response to the free speech stuff has long been that it’s not his job to make those decisions. He has said he wants governments to make it clearer what’s okay to post online and what’s not. But by virtue of global politics and Facebook’s size and influence, the company is already making these decisions every day whether he likes it or not.

So I think a Supreme Court-style Oversight Board that can make binding decisions he cannot veto is smart. I think it could assuage some of his critics and make certain people feel more comfortable using the platform. I think it’s more self-preservation than empathy, but I think the effect could be an empathetic one if all goes well. But I also think it’s a HUGE undertaking that could go sideways pretty easily.

An internet appeals court is a real, tangible thing Facebook can give us, and it can have real, tangible results – controversial though they will be. Assurance that we won’t be manipulated by Macedonian trolls or bullied by classmates, or that we can post about our lives and ideas without unwittingly entering the thunderdome, is a lot harder to give.

Musings on The American Meme and my Instagram Addiction

In the past 7 days, I’ve spent 8 hours and 35 minutes on Instagram, according to my phone’s Screen Time tracker. That’s an entire workday’s worth of minutes watching celebrities talk and friends feed their babies and advertisers try desperately to get me to buy Allbirds shoes (at this point I’m not buying them on principle). And my usage is down 11% from last week!

I know that I have a problem. It’s not that I can’t go an hour without looking at Instagram. I could put my phone in my purse and stare harder at my computer screen, or go for a walk, or sit and think for a few minutes about what’s actually behind my urge to open the app. I’ve spent enough time thinking about this that I’m pretty sure I know the answer to that, though: I’m anxious, bored, sad, frustrated, or tired. Instagram has become a little security blanket for me. It’s a place to get lost in other people’s lives for a few (or 30) minutes at a time so I don’t have to consciously think about what’s bothering me or, more importantly, do anything about it.

Yes, this is terrible! I sound like a jerk. The worst part is that now that I’ve psychoanalyzed myself to the point of understanding this, almost every time I open the app I feel guilt on top of it all. I should be treating myself better. I should be more authentic. I should be spending more time on actual work. This spiral is exhausting, and that feeling just makes me want to see if any of the people I follow have posted a new Instagram Story while I’ve been typing this…

I’m not unique in this. Instagram and its fellow social media platforms were built to become indispensable to us in this way, to cause little dopamine rushes that keep us coming back. Maybe that’s sinister, or maybe it’s just business.

On Sunday night I tried to put my phone away for a little while and watch a documentary. Naturally, the doc I chose was Netflix’s The American Meme. It’s essentially Behind the Music, for social media influencers – people who hawk brands and destinations and their own lives for money on platforms like Instagram and Snapchat (and formerly Vine, RIP).

The doc follows a few different influencers, some I had heard of and some I hadn’t. I was most surprised by how much I learned about Paris Hilton, and what a sympathetic character she was, especially in comparison to some of the other people in the film. I had heard of comedian(?) “The Fat Jewish” before, and even followed him for a little while until he was outed for stealing other people’s memes and passing them off as his own. When the interviewers asked him about this in the documentary, his answer was basically, “yeah, so?” Among other things, he now runs an apparently very successful wine business. Lesson (from TFJ and several of the others): lying sells!

Is this new? No. But as with a lot of millennial-focused content, what’s unique is the sense of nihilism that permeates this documentary. There’s a feeling that nothing matters, nothing is real, no one actually cares about anything or anyone, so why not spend your nights pouring champagne on women’s bare asses at night clubs and making fun of fat people for money? Why not create elaborate hoaxes with celebrities and trick entertainment news organizations into covering them as if they’re real for attention? Why not do the most ridiculous and physically dangerous stunt you can think of, for followers?

One of the things that struck me most was a quote from the mother of Kirill “slutwhisperer” Bichutsky, who, defending what her son does for a living, said something along the lines of, “he’s like an actor playing a bad person – you don’t judge the actor as if they really are that person.” Don’t we? Where is the line, really? I’m not an influencer, but should I be judged by how I present myself online, or in person? Is there actually a difference? It seems to depend who you ask.

I didn’t want to relate to these people, but ultimately I couldn’t help it. The story of Kirill, a photographer and Instagram influencer who pours champagne on women’s asses and calls them sluts, among other charming things, broke through to my empathic heart despite my best efforts. The Kirill in this documentary is exhausted, ashamed, and depressed. He seems like he’s ready to give up being an asshole for a living and meet someone he can make a life with. He says this is what he does because it’s what he has to do – because he doesn’t know how to do anything else. I feel trapped by social media because it helps me escape, but I can’t imagine feeling like I truly had no other choice.

When Kirill posted something that made it seem like he might be suicidal, fans told him not to kill himself – they still wanted to party. He was 33 when the doc was being filmed, in 2017. After watching, I wondered if he’d hung up his champagne bottles, but a glimpse at Instagram shows that slutwhisperer is alive and well, with a new slogan: Assholes Live Forever.

There’s no big lesson from The American Meme. It probably doesn’t teach you anything you don’t already know if you follow these people. But watching it felt like it might have felt to watch a Behind the Music about a drug-fueled 1970s band in the middle of the 1970s. That’s one of the wildest things about our media landscape now – we can analyze things so much more easily in real time. We can watch ourselves be taken over by “addiction” to social media, realize it’s happening, but not really know how to get away from it.

At the end of last year I finally deactivated my Facebook. I don’t miss it at all. But that’s partly because most of the people I was interested in following there had migrated to Instagram. Over the past year I have also spent a lot more time with people in real life – coffee dates, dinners, book clubs. I wonder, if I gave up on Instagram too, would my obsession turn to in-person hangouts? Or would I finally succumb to Snapchat?

Anyway, it’s been a long day (and a long post). I’m really looking forward to going home, sitting on the couch, and catching up on Instagram Stories. Maybe that’s OK. Maybe it will help me relax. More likely it will make me feel anxious and lacking. But I’ll do it anyway.

Facebook and the first amendment

Looks like I’m not the only one trying to figure out how and why things happen on Facebook. The U.S. Supreme Court is paying a lot of attention to the social network right now, but the stakes are a little higher than my “can I be calmer and happier without it” experiment. SCOTUS is in the middle of hearing a case that centers on whether and when a Facebook rant morphs from obnoxious but First Amendment-abiding screed to illegal threat.

In Elonis v. United States, the government argues that if a “reasonable person” would interpret a Facebook post as a threat, the poster should be subject to a criminal conviction. The lawyer for the man whose Facebook posts are at issue in this case, however, argues that the authorities should have to prove that the poster intended his or her words to be taken as a threat.

After oral argument on Monday, observers said the biggest stumbling block seemed to be finding a legal standard of proof. The problem arises from the court’s reading of the relevant law. The law says threatening someone is illegal, but the court has determined that this only applies to “true threats.” But it isn’t completely sure what it means by that…

Once that, and the definition of a “reasonable person,” get sorted out, it’s clear that the implications could be widespread. In this case, a Pennsylvania man named Anthony Elonis posted notoriously violent Eminem lyrics on his Facebook page, directing them at his estranged wife. His lawyers say posting rap lyrics is clearly for entertainment purposes only, but his wife and law enforcement officials felt differently. Elonis didn’t soak his wife’s body in blood from “all the little cuts,” as the lyrics suggested he might. Would he have done it if the police weren’t called? What was his actual intent? Is it possible to know? And is it possible to know how many actual violent crimes have been committed after similar social media posts? Elliot Rodger left behind some frightening tweets and YouTube videos, and many people questioned whether stricter and clearer guidelines surrounding online threats might have prevented his rampage.

Though confusion abounds, Justice Alito did suggest on Monday that he might be leaning more toward the government’s side in this case.

“This sounds like a road map for threatening a spouse and getting away with it,” he said during the hearing, according to CNN.

So, if you’re still on that soul-sucking site, be careful what you post. (And, of course, it’s generally good practice not to threaten people anywhere, online or off!)

4 days without Facebook

I never thought of myself as someone with an addictive personality. I tend to get really excited about things — hobbies, television shows, fashion trends — for a short period of time and then get bored of them relatively quickly. I’m an absent-minded perfectionist, a picky consumer of culture, a bit slow on the trend uptake, but an addict I am not. I thought about this a lot around the time I got my first iPhone a couple of years ago, a couple of years behind most of my friends. The great thing about the iPhone, everyone said, was that you could have all of your social media in one place, at your beck and call whenever you wanted to look. I didn’t understand at the time why that was such a plus; I didn’t spend all that much time on Facebook or Twitter, and had never used Instagram. I wouldn’t be one of those people constantly glancing down at their phones, I swore.

I was, of course, wrong. It didn’t take long before I got a rush of adrenaline (and likely oxytocin) whenever a little red notification popped up, and I eventually found myself mindlessly scrolling through my Facebook feed in particular even when I knew there was nothing new or interesting to see. I sometimes felt that I had “FOMO,” Fear Of Missing Out, or just that I needed a distraction, even when all I was doing was watching a movie or eating dinner or, I’m ashamed to say, working.

So do I have an addictive personality after all? I’m not sure. Is there something about social media — and Facebook in particular — and the way feeds are curated and participation rewarded that keeps people, whatever their disposition, coming back for more, even when it’s counterproductive? I think so.

This past week, anyone who uses Facebook likely saw a phenomenon that has become commonplace in the world of social media in the wake of a disaster or tragedy or other headline-making event. Facebook posts and interactions in the wake of the grand jury’s decision not to indict Darren Wilson in the killing of Michael Brown could (and probably will) be studied by social scientists, psychologists, political scientists and anthropologists alike. I have conflicting feelings about the necessity and efficacy of discussing things like this on Facebook. On one hand, Facebook is the main form of communication for a lot of people, and can provide an opportunity for exposure to information and opinions one might not otherwise encounter. On the other hand, people are already prone to digging their heels in on issues of politics and morality, and there’s a lot of convincing evidence (anecdotal, but scientific as well) that sitting behind a computer screen with the ability to type anything and the feeling that something must be typed immediately and often does not bode well for conversation about anything, let alone issues as controversial and multi-layered as what has happened and continues to happen in Ferguson. There’s also evidence that it makes us depressed. I was starting to believe that last bit.

So on Wednesday, as I was wrapping up my work before the holiday weekend, I used one of my Facebook detours to post a message that said I would be leaving for a while. (We could probably have a separate discussion entirely about why I felt the need to do that, and whether anyone cared, but that’s for another post.) I also deleted the Facebook app from my phone. It’s now Sunday afternoon, and I haven’t been on Facebook since.

It’s been relatively easy so far, since I’ve been spending time with family, Christmas shopping and relaxing, but I’ve also noticed some major changes. I feel calmer. I feel less anxious, which for me is really saying something. I feel like I am sleeping better. My blood pressure is lower (at least according to my at-home testing cuff). I’ve been more productive, even in “vacation mode.” And I don’t find myself with FOMO at all. The friends I care the most about have stayed in touch via text message, I’ve kept up with news via Twitter (which doesn’t have the same addictive effect on me, for various reasons), and I generally feel happier.

Facebook has made an effort to interrupt my break, though. Yesterday I got an email letting me know that I had 18 “notifications.” This isn’t actually true; I have notification emails turned off. The email was really a sneaky way to try to get me to come back to the site to see what I’d “missed” over the last few days. I resisted.

But it’s only Day 4. Stay tuned for an update.

on voter privilege

Last night was hectic. I had a great weekend with family visiting and a fun work event Monday night, but that meant chores piled up. I got home from work at 7:30 and had to get my laundry to the laundromat before “last wash” at 8:30. I gathered everything up, stuffed it in the bag, stuffed that in the granny cart and set about the careful process of bumping it down two flights of stairs and across three long blocks. While the laundry was washing I made a quick trip to the grocery store, something else I hadn’t had a chance to do over the weekend. I made it back just in time to transfer my clothes to the dryer, and while they were drying I realized – I forgot to vote!

I’m not sure how. Every time I checked out Facebook or Twitter during the day I was accosted by dozens of reminders, both in text and image, in the form of friends’ “I voted!” stickers.

By that time it was 8:30 and I knew the polls stayed open until 9:00. Lucky for me, my polling place is literally a block from my apartment, so I strolled in at 8:34 and was out by 8:44, in time to eat some dinner before picking up the laundry.

When I got home, as I stood looking at the heap of clean, dry clothes on my bed (and my cat rummaging around in them for a good place to nap), I realized that what I’d just done was nothing compared to what some people go through to make it to the polls. And it was really nothing compared to what keeps a lot of people from voting at all.

For a lot of people (at least, judging by my Facebook and Twitter feeds) it’s a no-brainer: vote or die. Or at least, vote or be ridiculed on social media and prohibited from complaining about the government.

But, in reality, there are a lot of reasons not to vote that I don’t think most people ranting on social media think about. What if you work three or four jobs and simply don’t get a chance to get to the polls?

“Absentee ballots!” I can hear the chorus and see the eye rolls.

But what if you had four kids, three jobs, no car, and little family to help you out? Would “making the time” to read up on electoral issues and requesting, filling out and sending back an absentee ballot be a high priority? It may seem like an extreme example, but it’s a reality for a lot of people in this country. And it’s only one of a myriad of things that might keep someone from voting even if they want to.

To call my night “hectic” was a vast overstatement. I had virtually no barriers to voting. As with many other privileges, this can make it hard to understand why someone else might not make the same decisions we do. And especially with so much at stake – equal pay, access to abortions, environmental protection, accessible health care – passions can take over.

But before you ask someone if they voted, and if not, why not, consider that it’s not so simple for everyone. And if you do vote, consider giving your support to candidates who may help fix some of the underlying issues that prevent people from reaching the polls: unemployment, weak education and health care systems, transportation infrastructure problems and affordable housing shortages.

still here!

Whew. Been an exciting week and a half settling into the new job. Learning and doing a lot, and still working on balancing blogging and everything else, so I thought I’d share a link to what I have to show for my absence: The Facebook Ice Bucket Challenge for Investors.

I’ve been doing a lot more than that, but it feels good to have my first published piece out there. I talked to some analysts about what investors should really be paying attention to when it comes to Facebook, and got a crash course in mobile advertising and audience engagement. Zuckerberg and Sheryl Sandberg said during their earnings call that they planned to spend a lot of money in the near term, pushing their stock price down a bit, but they sounded excited about growth. One thing a few analysts mentioned – and a few asked about on the call, to little avail – was video ads. Facebook pioneered the use of in-app install ads (the ones that pop up on your feed and ask you to download a game or another app) and many believe it’s poised to be the first to really figure out mobile video ads. We shall see!

Apple & Facebook’s “game changer”

Facebook and Apple have apparently decided to cover egg freezing for female employees. I have some thoughts about this… but first, a small note about my recent absence: I’m currently on vacation back home in North Carolina after finishing up my last couple of weeks of work at Law360. Next Monday, I’ll be starting a new job! It’s an exciting change, and the transition process has had me pretty busy lately. Thankfully I have a week to relax in between, and I’m trying to really do just that, but I couldn’t stay away from this space for long!

OK, down to business. I usually save topics like this for “Feminist Friday,” but every day this week is basically a Friday for me, so when I came across this story I thought, why not? From NBC:

Facebook recently began covering egg freezing, and Apple will start in January, spokespeople for the companies told NBC News. The firms appear to be the first major employers to offer this coverage for non-medical reasons.

“Having a high-powered career and children is still a very hard thing to do,” said Brigitte Adams, an egg-freezing advocate and founder of a patient forum. By offering this benefit, companies are investing in women, she said, and supporting them in carving out the lives they want.

In a vacuum, this policy seems like it could only be a good thing. If women want or need to freeze their eggs so that they can get pregnant at a later date, it’s great that huge companies like Facebook and Apple want to cover those procedures.

But is this really a “game-changing” perk, as NBC says? And if it is, what does that say about the state of things for women in the corporate and tech world? What does it mean when Facebook and Apple will spend hundreds of thousands of dollars to help women freeze their eggs so that they can put off pregnancy in favor of their careers?

With notoriously male-dominated Silicon Valley firms competing to attract top female talent, the coverage may give Apple and Facebook a leg up among the many women who devote key childbearing years to building careers. Covering egg freezing can be viewed as a type of “payback” for women’s commitment, said Philip Chenette, a fertility specialist in San Francisco.

This is probably great news for some women, but is painting it as the way to “attract top female talent” really the statement tech wants to be making? Doesn’t it suggest that career and child-rearing are mutually exclusive, and that the reason women don’t enter the field in the first place, or leave, is because they want to have children? Studies have shown that’s just not true in many cases. More women seem to leave because of the hostile culture of the corporate world, and when they do cite children as the reason, it’s often because of the stubborn patriarchal ideal that the mother should take on the majority of the childcare responsibilities.

Offering to cover the cost of freezing eggs is great, and I’m definitely not suggesting Facebook and Apple reverse course on this. But making such a commitment to what is a relatively uncommon and invasive procedure and suggesting that it’s some kind of solution or salve for the huge “woman problem” in the industry just feels wrong.

What might be better? I have a few ideas:

  • Better maternity and paternity leave policies and flexible work schedules
  • A campaign to combat the idea that pregnancy and motherhood somehow render women less capable of doing their jobs
  • A dedicated effort to address the sexism and harassment that are far too common in the tech industry
  • An honest, empathetic statement of acknowledgment of the other reasons women may leave the industry and a concerted effort aimed at fixing those problems

I’m happy for the women in tech who really want to freeze their eggs and now will have the support of their employers. But is this a “game-changer” for anyone else? I’d argue no.

Recognizing The Impact Of “Uncivil” Discourse Online

As someone who chooses to discuss her opinions online — usually on my Facebook wall after linking to an article that evokes thoughts of sexism, racism, or environmental or legal issues — I’m used to having heated discussions with both friends and strangers on the internet.

The common advice for those who publish their work online is to not read comments at all, and for those who read online and discuss in forums like Facebook, the advice is “don’t feed the trolls.” In other words, don’t engage with people who are just being terrible for the sake of being terrible. The thinking is that engaging will make you look bad yourself, and that if you don’t pay attention to trolls, they’ll go away.

It’s pretty solid advice, depending on your definition of “troll.” But what used to refer to an anonymous commenter looking to derail any conversation at any cost seems now to apply to anyone who says something false, argumentative, hostile, racist, sexist or otherwise offensive.

Whenever someone tells me, “It’s just Facebook/Twitter/the internet! Who cares what they think?” I can’t help feeling like we’ve slid back toward the belief that things said or published on the internet somehow “don’t count.”

Some of them don’t, of course. And when it comes to comments, there is certainly a healthy army of legitimate trolls ready and willing to fight for fighting’s sake.

But a great deal of legitimate discourse now takes place online, and we’ve spent years arguing that it is not cheapened by its location: insisting that code can be as useful as the printed word in telling a story, convincing investors that a micro-blogging platform with a 140-character limit will encourage conversations and the free flow of information, demanding freedom to express ourselves here and be protected from hacking and censorship.

We need to also be aware that the people who harm true discourse offline — not the hecklers but the bigots, the manipulators, the willfully ignorant, those unwilling to hear the other side but insistent on proclaiming theirs — are present online as well, and are having an impact.

Last year, Popular Science famously turned off comments on its articles after finding that “even a fractious minority wields enough power to skew a reader’s perception of a story.”

Former digital editor Suzanne LaBarre pointed at the time to a study led by Dominique Brossard of the University of Wisconsin-Madison, which found that the prevalence of “uncivil comments” on an article about the risks of nanotechnology affected readers’ perceptions of the information the article presented.

“Simply including an ad hominem attack in a reader comment was enough to make study participants think the downside of the reported technology was greater than they’d previously thought,” Dietram A. Scheufele wrote of the study in the Times.

The takeaway was that commenters shape public opinion. And there’s a good case for arguing that the people we call our friends — both literally and in the Facebook sense of the word — shape our own opinions and levels of understanding more than we might think, as Nicholas Christakis and James Fowler wrote in their 2011 book Connected: The Surprising Power of Our Social Networks and How They Shape Our Lives — How Your Friends’ Friends’ Friends Affect Everything You Feel, Think and Do.

At this point in our cultural relationship with social media, it’s irresponsible to brush off ignorance, the spreading of false information, sexism, racism, hatefulness and threats as “something some idiot said on the internet.” The internet is our home, it’s where a growing number of us work, meet the loves of our lives and get the majority of our news.

Because of this, we need to recognize — and yes, attempt to ameliorate — threats to productive communication online. If we starve trolls, they may go away and bother someone else, never doing any “real” damage that we can see. But when we ignore the influence of lies, indignance and hostility and encourage others to do the same, we aren’t showing how we are “above arguing on the internet.” We’re helping to perpetuate ignorance.