I read Elon Musk’s “Twitter Files” so you don’t have to – here’s what they really show

The threat model for a social network is complex. Your security team must deal with conventional hacking attacks, as hostile actors look for technical flaws in your apps and servers that they can use to extract valuable private data, inject malicious code, or simply wreak havoc for fun.

They also have to deal with people using the site’s own capabilities in destructive ways, from simple-minded spambots to nation-states engaging in “coordinated inauthentic behavior”. They have to protect users from account takeovers due to password theft, and they have to do it all while navigating the minefield that is content moderation.

And then the site is bought on a whim by a capricious billionaire, and suddenly the threat is coming from inside the house.

What are the “Twitter Files”?

Elon Musk has pushed the “Twitter Files”, a series of Twitter threads from friendly journalists who have used material provided by the company to rehash its role in earlier culture-war battles.

Usually, big news stories that claim to be [something] “files” are based on enormous leaks, and put an organization’s inner workings under a previously impossible level of scrutiny. It is less typical for a huge leak to have been ordered by the company’s CEO and executed by his subordinates, who openly work with the journalists reporting on the story. But little about Elon Musk’s Twitter is typical.

What about the files themselves? After a week and a half, there have been four releases, from three authors: Matt Taibbi, Bari Weiss, and Michael Shellenberger, all broadly part of a wave of “postliberal” newsletter writers on Substack. It is unclear how they were selected to receive the documents.

One requirement, Taibbi has said, was that everything they published be shared on Twitter itself, but beyond that, “we’ve been encouraged to look not only at historical Twitter, but also the current iteration. I was simply told I could write whatever I wanted, including anything about the current company and its new boss, Elon Musk.” At the same time, the reporting was done inside Twitter’s offices, with assistance from Twitter employees.


And all three have focused on the areas one might guess, given their previous statements about the social network. Taibbi’s first thread covered Twitter’s attempt to respond to the New York Post story about Hunter Biden’s laptop; his other thread, as well as Shellenberger’s, looked at the events surrounding the suspension of Donald Trump and the January 6 attack on the US Capitol. Weiss, meanwhile, reported on what she described as “Twitter’s secret blacklists”.

Freedom of speech vs “freedom of reach”

So most of the material so far has focused on what are really two extremely high-profile individual moderation decisions: one deemed a mistake in retrospect (hiding stories about Biden’s laptop), and the other just as divisive as anyone would have guessed beforehand (Trump’s ban).

The documents shared by Taibbi and Shellenberger largely support this reading. Stripped of the two authors’ conspiratorial framing, the excerpts of internal emails and chat messages that they have released appear to show employees coping with the enormous burden placed on them with a rough mixture of panic and determination.

In the days following the publication of the Post’s story about Hunter Biden, that was not enough. Clearly wary of the prospect of a repeat of 2016’s WikiLeaks dump of hacked Democratic Party documents, the leaders cited by Taibbi moved quickly to enforce a policy against sharing hacked material. But the material was not hacked, and while the chain of custody of Biden’s laptop remains unclear, it rapidly became obvious that the policy was being misapplied. Twitter’s top executives were too slow to lift the ban once that became clear and, in Taibbi’s words, chose to “err on the side of … continuing to err”.


Just two months later, the same group was convened to discuss Donald Trump. The president had used social media to instigate a protest in Washington DC that turned violent when his supporters decided to storm the US Capitol building. In the days after the election, Twitter had aggressively applied its “newsworthiness” policy, slapping a label on posts that would have been deleted were it not for the writer’s prominence, but by January 7 it was clear that approach was no longer tenable in the case of Trump.

A series of Slack posts shared by Shellenberger show the team, led by former trust and safety chief Yoel Roth, desperately trying to invent policy on the fly (all employees except Roth are anonymized in the posts Shellenberger shared). Trump had been given special treatment, lingering on the site for months after a typical user would have seen their account deleted: at what point does this approach cease to be viable? The answer was quite clear on January 6. But if you give someone special treatment without admitting it, that only makes it all the more difficult to take it away.

Weiss’s part of the saga is different. Instead of focusing on the narrow world of American electoral politics, her thread takes a more systemic look at Twitter’s moderation practices. Working with Ella Irwin, a trust and safety executive at Twitter, Weiss published screenshots of the moderation pages for some of the site’s most notorious users.

Jay Bhattacharya, a Covid skeptic, was placed on a “trends blacklist”; Dan Bongino, a right-wing media personality, on a “search blacklist”; Charlie Kirk, whose decision to attend a protest wearing a nappy caused such embarrassment that it effectively destroyed the Republican youth movement he founded, was set to “do not amplify”.


The tags are various examples of what Twitter calls “visibility filtering”, a form of moderation intended to affect “freedom of reach” without affecting “freedom of speech”. Filtered users can post what they like, but their participation in the site’s algorithmic amplification is limited. Some will not appear in search results or trends; others will not be recommended for users to follow. The most aggressive form of visibility filtering, which did not apply to any of the prominent accounts Weiss highlighted, means that new posts are not even shown to followers, and are only visible to people who navigate directly to the poster’s profile.
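Weiss’s screenshots suggest these filters behave like per-account flags that are checked at each surface of the product. As a loose illustration only – the flag names and logic here are hypothetical, not Twitter’s actual code – the tiers she describes could be modelled like this:

```python
# Hypothetical sketch of tiered "visibility filtering". All names
# (ModerationFlags, visible_on, the flag fields) are illustrative
# assumptions, not Twitter's real schema.
from dataclasses import dataclass

@dataclass
class ModerationFlags:
    trends_blacklist: bool = False   # excluded from trending topics
    search_blacklist: bool = False   # excluded from search results
    do_not_amplify: bool = False     # never algorithmically recommended
    profile_only: bool = False       # posts hidden even from followers' timelines

def visible_on(surface: str, flags: ModerationFlags) -> bool:
    """Decide whether an account's posts appear on a given surface.
    The account can always post ("freedom of speech"); only its
    distribution ("freedom of reach") is curtailed."""
    if flags.profile_only and surface != "profile":
        return False
    if surface == "trends":
        return not flags.trends_blacklist
    if surface == "search":
        return not flags.search_blacklist
    if surface == "recommendations":
        return not flags.do_not_amplify
    return True  # home timeline, profile page, direct links

# An account on the trends blacklist still reaches its own followers:
flags = ModerationFlags(trends_blacklist=True)
print(visible_on("trends", flags))  # False
print(visible_on("home", flags))    # True
```

The point of the tiered design, in this reading, is that each flag removes the account from one amplification surface at a time, with only the most aggressive tier touching what followers see.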

Not all the users Weiss surveyed were on the losing end of moderation. One, LibsofTikTok, was given special treatment, with a notice warning moderators not to take action against the account without consulting the site’s senior leadership. Even so, it had received two strikes for abuse, and had been placed on the trending blacklist.

What did we learn?

I think it’s important to distinguish between the Twitter files and the “Twitter Files”. The latter, a big, hyped, coordinated publication, has so far failed to achieve its ostensible goals. The thrust of the whole exercise is that Twitter is a hotbed of left-wing bias, explicitly aligned with the US Democratic Party, taking unjustified action to censor speech for politically motivated ends.

The posts themselves show little of the sort. Some, like Weiss’s, don’t even try: individual examples of right-wing users on the receiving end of light-touch moderation say little about systemic bias. Did left-leaning users also get visibility filters? Weiss doesn’t say. Did right-wing users get more of them? Weiss doesn’t say.

Others show almost the opposite. There were plenty of simple reasons to remove Donald Trump from the social network in January 2021, but the posts seem to reveal Twitter staff methodically working through their actual rulebook, trying to understand how to respond to unprecedented events in a way which doesn’t just throw precedent out the window.

As with so much in American politics, the files fall flat if you view the American right as an outlier. If you have rules against misinformation about elections and only one party engages in a systematic campaign of misinformation about elections, it is not an unreasonable outcome for that party to be the focus of moderation efforts.

But the lowercase files, the documents themselves, are nevertheless an interesting historical object. They show that, during periods of global crisis, the people making the decisions at Twitter were acutely aware of, and uncomfortable with, the power they held. Even as a set of selected examples, they show that the effort to create and apply a consistent rulebook was driven as much by a desire to avoid criticism as by a belief that it was important to protect users. And they give us an insight into the kind of discussions that were probably taking place at Facebook and YouTube at the same time.

And they show that we should never trust Elon Musk.

Insider threat

Musk has promoted the series as an exercise in “transparency,” and if you’re Weiss, Taibbi, or Shellenberger, that’s what it is. But that’s the kind of transparency companies get when their database is hacked and sold on the darknet. In this case, the database cost $44 billion, and came with control of the website to boot.

Marcus Hutchins, the ethical hacker who stopped the WannaCry ransomware outbreak, posted on Mastodon about the documents. “As a security professional, not much scares me,” he said. “I’ve seen my personal data stolen multiple times, seen nation-state hackers spray zero-days across the internet, and I’m a shameless TikTok user.

“But now you have someone sitting on top of the personal data of billions of users, someone with a long track record of vindictive harassment, someone with an ear to the far right, and someone who has just shown us his willingness to weaponize internal corporate data to score political points. It scares me a lot.”

The Shellenberger posts named only one person: Twitter’s former head of trust and safety, Yoel Roth. When Musk bought the company, Roth was initially forthcoming: one of the few employees willing to speak up for his boss publicly, and a much-needed source of internal expertise after the immediate firing of Vijaya Gadde, the longtime head of Twitter’s legal, policy and trust efforts.

But the relationship clearly soured. On November 10, Roth quit, reappearing a week later to write a New York Times op-ed arguing that “although he criticizes the capriciousness of platform policies, [Musk] perpetuates the same lack of legitimacy through his impulsive changes and tweet-length statements about Twitter’s rules”.

In doing so, he appears to have become a bête noire for his former boss, and then for the wider right-wing media ecosystem that now takes its cues from Musk. The day before Shellenberger shared his instalment of the Twitter files, Musk posted an out-of-context excerpt of Roth’s doctoral dissertation, which looked at whether services like Grindr caused harm by forcing teenagers to pretend they were adults in order to access dating sites.

To an audience of hundreds of millions, Musk accused Roth of arguing in favor of “children being able to access adult Internet services”, and indirectly accused him of personally deciding to make Twitter a safe place for pedophiles.

The accusation is nonsense, but in an atmosphere of right-wing panic over “groomers” in the media, it is life-changing. On Monday, Roth and his partner were forced to flee their home after a sharp increase in credible threats against him. For the crime of insufficient loyalty, he must spend the rest of his life looking over his shoulder. What will happen to the next person who annoys Elon?
