In July, amid the rise of the buzzy audio-only social network Clubhouse, some users reported being harassed by other members. This seemed obviously bad, but at the time the company had no guidelines about how users should behave on the site. Moderation duties were left to the two co-founders, then the company’s only employees, and it’s fair to say that enforcement was not their full-time focus.
When I wrote about the situation at Clubhouse, responses were divided. Some readers said that because the app was still in private beta, and had just two employees, users ought to cut it some slack. Moderation would come as the app scaled, they said, and to beat up the founders for not having every detail in place during the launch stage was unfair to the team.
The other view, which I took, is that any social product ought to begin with moderation in mind. We’re long enough into the history of these apps that we know many of the ways in which they will be used, and misused. To begin without a plan for dealing with those malicious uses is to walk down a path that predictably leads to misery.
Of course, easy for me to say: I’m not building a social app. But Richard Henry and Marc Bodnick are. The duo, who previously worked together at the question-and-answer community Quora, today announced a wider release for Telepath, a new app for discussing your interests. The app, which like Clubhouse is available only in private beta and requires an invitation to use, resembles a hybrid of Twitter and Reddit. As on Twitter, the app opens to a central scrolling feed of updates from people and topics that you follow. And as on Reddit, every post must be created within a group, which Telepath calls a “network.”
But what stands out about Telepath is its approach to moderation — which is both more aggressive and more constructive than any I have ever seen in a venture-backed social app at this stage of development.
As always, there are tradeoffs. Telepath requires users to use their real names, which makes internet usage more difficult for activists and dissidents. It requires the use of a real mobile number as well — the one you get from your carrier, not some VOIP burner number you get online. And perhaps most importantly, you have to act the way Telepath tells you to act. For example, there’s this:
Stay on-topic and tone. Some networks have a very clear topic, tone, and intent, and others are more broad. Don’t bombard an obviously pro-x network with an anti-x agenda, or vice-versa.
Don’t circle the drain. If you are in a contentious debate where anyone is repeating the same points, seems focused on having the last word, and/or is badgering others, Telepath may lock the thread to end the conversation.
It may go without saying, but Telepath is going well beyond the standard social network bans on hate speech and incitement to violence. This is a policy written by people who have argued online, and who want to create a place where those arguments are productive.
The rules quoted above are consistent with most social network community guidelines in one obvious way: they tell you what not to do in the app. But Telepath is also unusual in that its first rule gives you an actual behavioral north star for your time spent posting:
Be kind. Don’t be mean. Don’t attack people or insult what they post. Assume that other people have good intentions. If a reasonable person would think you’re being an asshole, that’s not okay. Persistent behavior that’s on the line is not okay.
If you’re the sort of person who spends a lot of time on Twitter, where attacking people and insulting what they post can sometimes feel like the entire point of the service, this rule may feel like a breath of fresh air. Twitter has spent more than two years now pondering how it might “serve healthy conversations” with little to show for it beyond improved abuse reporting and enforcement tools. Telepath, on the other hand, is just telling everyone to be nice to each other or it will kick them off the app.
Telepath arrived at the idea of kindness as a first principle after rejecting its cousin, “civility.”
“‘Civil’ feels like you’re trying to stay inside some yellow line,” Bodnick said. “And you know the kind of people that will try to get a centimeter from that line. But kindness is really what we want. We want people to give each other the benefit of the doubt. We’d like it to be a place where people can change their mind … But the only way you’re gonna get people to change their mind is if you have a tone that is gentle and empathetic.”
That all sounds true enough. But as I like to say, policy is what you enforce. So how is Telepath going to make this a reality?
First, the company plans to handle all of its moderation in-house, Henry and Bodnick told me. It will add moderators as it grows, and assumes its profit margins will eventually be lower because of a heavy investment in full-time employees working on these issues.
The company’s first head of community, who currently oversees all enforcement on Telepath, is Tatiana Estévez, who also joined from Quora. In a Twitter thread today, Estévez discussed making Telepath feel like a good place for women as a way of nurturing the community more broadly. And unusually for a social network, Estévez talked about Telepath’s comfort with judging users’ intent when they post. If they think you’re being a jerk on purpose, they’re going to take action against you, no matter what you have to say about it. She writes:
One of the most important problems Telepath is focused on is making the community fun and safe for women. We want women to love Telepath, ideally more than the men do. Making women feel good/comfortable/confident — that’s critical to an awesome modern conversation community.
Given the complexity of online hostility toward women, our moderation philosophy focuses on *intent*. We won’t hesitate to be aggressive if we conclude a man has bad intentions. We won’t allow trolls/misogynists to get away with repeatedly trolling on the edge of our rules.
And what will you get, if you build an interest-based network where women feel just as comfortable as men, and people are required to be kind? Henry has an idea.
“We just really want to make something that’s fun,” he said. Early Twitter felt that way to him, he said, as did early Quora, and he worked at both companies before starting Telepath.
“At this point I’ve spent my entire career working on social networks — for some reason,” Henry laughed. “And it’s just not worth doing it if it isn’t fun.”
I haven’t spent enough time on Telepath yet to know how fun it is. But I really couldn’t be more impressed with the approach this small team is taking to building trust and safety policies so early in its existence. One reason why I clamor for competition in social networks is to promote the generation of new and better ideas — and I think many of the ideas Telepath is working on deserve to be studied and then cloned by its peers. Even, and perhaps especially, its larger peers.
I met Henry and Bodnick today over Zoom, and was impressed with how fluent they are in the dynamics of modern online conversations and their clear perfectionist streak. They told me they rebuilt Telepath four times before getting it to this stage, and decided to launch more widely only after implementing a key privacy feature that sparked lots more sharing: conversations delete by default after 30 days, but save to your private archive.
There are other things to like here — at least as alternatives to the status quo. Telepath is a people-only network; there are no bots, organizations, or publishers permitted. The company has banned disinformation and the sharing of hoaxes. And it is specifically focused on removing or limiting the reach of modern behaviors that aren’t outright hateful, but are most often annoying: “sealioning, or man-splaining, or reply-guy-ing,” as Bodnick puts it.
One imperfect but helpful way of thinking about all this, at least for me, is that Telepath is building solutions to problems that were invented on Twitter.
Will it work? It’s too soon to tell. The company raised a seed round from First Round Capital, among others, and plans to expand its user base to about 4,000 people over the next few weeks. At that scale, a lot of things can “work” without ever growing into a viable company.
But even if Telepath doesn’t take off, it has given us a gift: a blueprint for approaching content moderation that is tough, constructive, and takes a point of view about how people ought to behave. I believe that this model can work in at least some cases, and over time we may learn that a similar model can work in lots of cases. I’m grateful that the Telepath team have been such good students of our information sphere and brought their smarts to bear on some of the internet’s trickiest challenges.
“We’re in a blessed position because we’ve been thinking about this from day one,” Henry told me.
Imagine what position the rest of us would be in if previous social networks had been thinking about this from day one, too.
Thanks to everyone who read and wrote in with their thoughts on “Mark in the Middle,” the long feature we published yesterday based on listening to a summer’s worth of audio recordings from inside Facebook. As you might imagine, I heard from a lot of folks inside Facebook yesterday — some of whom liked the piece, and others who pushed back.
One of the themes of the piece is that Facebook leadership is caught between competing forces in the United States: a liberal-leaning employee base versus a more conservative US population. In a subscribers-only post at Stratechery today, Ben Thompson frames it differently. He notes that the average Facebook employee is to the left of the average Democrat — employees donated much more to Bernie Sanders than they did to Joe Biden — while the average user is closer to the political center. The dynamics I write about are the same, but I think the fine-grained distinction he draws here is important.
Another theme in the piece was that the shift to private groups had raised some alarms among people working on it. After the story was posted, Facebook gave me a comment about that:
Last week, we announced that we are starting to remove health groups from recommendations, in addition to existing measures we take like removing groups that repeatedly share misinformation from these surfaces.
We limit the spread of groups tied to violence by removing them from recommendations and restricting them from Search. This includes QAnon, US-based militia organizations, anarchist groups, and a violent US-based anti-government network connected to the boogaloo movement.
A third issue folks raised with me is that whatever internal dissent I may have captured in my piece, worker retention is at or near record highs. (I’ll leave open the question of to what degree not wanting to change jobs during a pandemic may factor in that.)
Fourth, some folks took my conclusion to suggest I thought Mark Zuckerberg was indifferent to America’s democratic decline or the pandemic. I think the opposite is true, which is why I noted that Facebook is trying to register 4 million voters, to the consternation of the Trump campaign.
Finally, I don’t put much stock in what happens on the anonymous workplace chat app Blind, but this poll about the biggest risk facing the company — created by Microsoft workers and answered by Facebook employees — made me laugh.
Thanks again to the readers who made this story possible. What big feature should I write next? You know where to find me.
Today in news that could affect public perception of the big tech platforms.
Trending up: YouTube will now show users who search for 2020 political candidates a panel with vetted information above the search results. The company is also launching additional information panels on voter registration in English and Spanish. (YouTube)
Trending down: Facebook allowed political advertisers to target misleading ads about Joe Biden to swing-state voters in Florida and Wisconsin, according to the activist group Avaaz. The news suggests the platform isn’t enforcing its own rules less than two months before Election Day. (Brian Fung / CNN)
There was so much election news today that we’re breaking it out into its own section.
⭐Facebook’s much-anticipated Oversight Board will launch ahead of the US election, after being criticized for a perceived lack of action. A spokesperson for the independent organization said it plans to start in mid- to late October, in a reversal from its previous statement. Here’s Sam Shead at CNBC:
A spokesperson for the Oversight Board said: “In terms of passing rulings around the time of the election, the Board will be prepared to consider cases on any matters that come before it and are in scope for us, and it’s premature to guess what the Board may or may not consider until we launch. Whether Facebook will send the Board expedited cases around this time is a question for Facebook.”
The social media company has been under pressure to demonstrate that it is ready to deal with what stands to be one of the most polarizing U.S. elections in recent history, with experts concerned that some of the platform’s users may try to incite violence.
Facebook said it will reject ads from President Trump and Joe Biden that prematurely claim victory on Election Night. The company already banned new political ads the week before the election. (Mark Sullivan / Fast Company)
Facebook groups are the biggest vulnerability the platform has heading into the 2020 election, this piece argues. Experts worry they’ll be used to organize violence on Election Day, much like in Kenosha, Wisconsin. (Pema Levy / Mother Jones)
Election officials are using Facebook’s CrowdTangle to find and report voting misinformation — but a new report from the Tech Transparency Project says it doesn’t effectively monitor most posts on the platform. (Kurt Wagner / Bloomberg)
Russian trolls are amplifying Trump’s tweets to influence the 2020 US election. It’s an easier job than they had in 2016, when they had to come up with their own content. (David E. Sanger and Zolan Kanno-Youngs / The New York Times)
US Election Day exercises simulating attacks aimed at disrupting the vote show officials will struggle to quickly counter misinformation spread on social media. (Christopher Bing / Reuters)
Email systems in county offices that handle multiple parts of the voting process are an overlooked vulnerability in the election. Although experts have warned local officials to follow best practices for computer security, many smaller locales have taken few precautions. (Jack Gillum, Jessica Huseman, Jeff Kao and Derek Willis / ProPublica)
⭐ Zoom canceled an event at San Francisco State University featuring Leila Khaled, a member of the Popular Front for the Liberation of Palestine who took part in two plane hijackings in 1969 and 1970. The webinar was canceled after pressure from Israeli and Jewish lobby groups, including the Lawfare Project. This feels like a wild new frontier in content moderation. James Vincent at The Verge has the story:
In addition to Zoom’s cancellation of the webinar, Facebook also took down an event page for the talk. A spokesperson for Facebook told J. The Jewish News of Northern California it had “removed this content for violating our policy prohibiting praise, support and representation for dangerous organizations and individuals, which applies to Pages, content and Events.”
J. reports that the talk began live-streaming on YouTube on Wednesday night but was taken down approximately 23 minutes in after Khaled began discussing the right of occupied peoples to fight their occupiers “by any means possible, including weapons.” A link to the removed talk says it was taken down for violating YouTube’s terms of service. YouTube confirmed to The Verge that it terminated the livestream, and that it did so because it breached the platform’s policies on criminal organizations. Specifically, it contained “content praising or justifying violent acts carried out by violent criminal or terrorist organizations.”
Four people are suing Facebook for its alleged role in enabling violence in Kenosha. The complaint says Facebook “empowered right wing militias to inflict extreme violence and deprive Plaintiffs and protestors of their rights.” (Craig Silverman and Ryan Mac / BuzzFeed)
Facebook rolled out new rules governing employee speech in Workplace, the company’s internal social network. One of the changes requires employees to use a photo of themselves as their profile pictures — rather than an image promoting a specific political candidate or cause. (Salvador Rodriguez / CNBC)
Alexander Nix, the former head of Cambridge Analytica, has been banned from running a company in Britain for seven years. The restrictions come in response to “potentially unethical” behavior linked to his position at the center of the Cambridge Analytica scandal. (Rob Davies / The Guardian)
The Senate Commerce Committee asked the CEOs of Google, Facebook, and Twitter to appear for testimony on October 1st. The committee may issue subpoenas if the CEOs do not agree to appear by the end of the day today. (Russell Brandom / The Verge)
An unnamed US federal agency was hit with a cyber attack after a hacker used valid access credentials. It’s not yet clear what data was stolen or whether the hack was carried out by a foreign government. (Andrew Martin / Bloomberg)
The US Department of Justice submitted a proposal to weaken Section 230 of the Communications Decency Act. The proposal would remove immunity for tech platforms’ hosting material related to terrorism, child sex abuse, or cyber-stalking. (Adi Robertson / The Verge)
The details of the TikTok deal weren’t spelled out in a comprehensive contract that is typically seen in high-stakes mergers. The rushed negotiations and lack of clear terms led to big disagreements about who would control the 80 percent stake in TikTok Global. (Stephen Nellis, David Shepardson and Echo Wang / Reuters)
TikTok is showing Americans what it’s like to live in a world where flourishing online spaces are threatened by political fights between states and corporations. It’s a reality some people outside the United States have had to contend with for some time. (John Herrman / The New York Times)
TikTok is asking a federal judge to temporarily block Trump’s ban, set to take effect on September 27th. In a new legal filing, the company said Trump’s threats have already had a devastating impact on the company’s ability to land advertisers, hire new talent, and hold on to creators. (Issie Lapowsky / Protocol)
The court said the Trump administration must either delay the TikTok ban or file legal papers defending the decision by Friday. The ban would block Apple and Google from allowing new downloads of TikTok starting Sunday. (David Shepardson / Reuters)
Oracle’s proposed deal with TikTok would see the company “completely rebuild” the TikTok app, with Oracle rewriting millions of lines of code. Today, Oracle has little experience in building and maintaining software for consumers. (Alex Heath / The Information)
Chinese state media is blasting Oracle’s proposed deal with TikTok. “What the United States has done to TikTok is almost the same as a gangster forcing an unreasonable and unfair business deal on a legitimate company,” wrote one outlet. (Karen Leigh / Bloomberg)
Epic Games is teaming up with Spotify and Match Group to pressure Apple to make changes to its App Store rules. The newly formed Coalition for App Fairness said most app stores collect excessive commissions from developers and stifle competition by giving unfair advantages to their own products and services. (Sarah E. Needleman / The Wall Street Journal)
Apple has rejected over 150,000 apps in 2020 for violating the company’s privacy guidelines. The company reviews over 100,000 apps and app updates every week. (William Gallagher / Apple Insider)
Twitter rolled out new security protocols in the wake of the July 15th hack that targeted verified accounts. The measures are meant to ensure the service doesn’t melt down on Election Day. (Nicholas Thompson and Brian Barrett / Wired)
QAnon was a massive problem on Reddit before it migrated across all parts of the internet, and into mainstream American culture. Now, the conspiracy theory has almost been eradicated from the site. But Reddit can’t explain why. (Kaitlyn Tiffany / The Atlantic)
54 percent of Americans say social media platforms shouldn’t allow political ads. (Pew Research Center)
Mark Kelly’s campaign launched the first Snapchat AR lens for a Senate race. The retired NASA astronaut is running in Arizona. (Makena Kelly / The Verge)
⭐ The TikTok deal could give Walmart a way to catch up to Amazon in e-commerce. Right now, the company says it will “provide our ecommerce, fulfillment, payments and other omnichannel services to TikTok Global.” That might mean helping TikTok incorporate shopping features into its app. Here’s more from Jason Del Rey at Recode:
That means TikTok users could buy merchandise created or promoted by their favorite TikTok artists without leaving the app, and TikTok could potentially use Walmart’s existing warehouse and “fulfillment” services to deliver the merchandise. At the moment, when an influencer or a consumer brand advertises a product on TikTok, users almost always have to click through to an external shopping website to make a purchase.
Sure, Walmart could provide incentives for TikTok or its most popular creators to link to Walmart.com in those instances, but a deeper integration into the app could be the longer-term vision. Last month, for example, TikTok experimented for the first time with allowing a popular creator to sell goods through a pop-up page within the app.
Amazon is launching a new environmental program called Climate Pledge Friendly that will label products that meet one of 19 certifications for sustainability. The goal is to help climate-conscious consumers make better shopping decisions. (Nick Statt / The Verge)
Twitter is rolling out a test to let people record and send audio-based direct messages. The move comes after the company rolled out audio tweets for iOS in June. (Chris Welch / The Verge)
Twitter is expanding the rollout of its prompt aimed at getting users to read content before they retweet it. Imagine that! In June, the company introduced the feature as a test on Android. Soon it will be available to all users. (Taylor Hatmaker / TechCrunch)
Instagram is rolling out changes to its TikTok competitor Reels after users critiqued its earlier design. Now, creators will be able to make longer videos, extend the timer, and edit and delete clips more easily. (Sarah Perez / TechCrunch)
Magic Leap founder Rony Abovitz had big aspirations for his augmented reality startup. But current and former employees say he became increasingly disconnected from the company’s reality. (Joshua Brustein and Ian King / Bloomberg)
Apple isn’t the only company charging developers exorbitant fees. Here’s a guide to platform fees on app stores, social platforms, and membership services. (Julia Alexander / The Verge)
Google said it will work on a hybrid work-from-home model, since most employees don’t want to come into the office every day. “I see the future as being more flexible,” said CEO Sundar Pichai. (Jennifer Elias / CNBC)
Spectrum Labs raised $10 million in a Series A. The company builds algorithms to moderate, track, and flag harassment and hate speech, with the goal of stopping them altogether. (Ingrid Lunden / TechCrunch)
Spotify is full of SEO spammers who’ve chosen random names that users are likely to search for. “Artists” like Pro Sound Effects Library, On Hold Music, and Yoga are a few of the offenders. The number of signups I’ve been getting to this newsletter suggests to me that Spotify is waking up to just how many content moderation issues it faces. (Peter Slattery / OneZero)
Those good tweets
therapist: how have you been coping with everything
me: with sarcasm mostly
therapist: has that been working
me: yeah it’s been super great
— nash™ (@itsnashflynn) September 7, 2020
I’m not going to go to therapy I am going to forge a powerful sword
— going to hedonism II with hunter biden (@broom_error) September 17, 2020
i’ve got one foot in the darkness and the other one in a hello kitty roller skate
— matthew gray gubler (@GUBLERNATION) November 18, 2014