Can Social Media Combat Misinformation and Still Grow?
ANALYSIS RealClearPolitics
As social media platforms increasingly emphasize content moderation, “healthy conversations,” and combating misinformation, the question of how such efforts will affect growth has been largely overlooked. If these changes significantly reduce or restrain growth, do they represent a conflict with platforms’ fiduciary duties to their shareholders?
The timeline below shows the estimated total daily tweets and tweeting users over the past decade. Twitter’s growth peaked in mid-2013, declined through late 2015, then entered a period of stagnation. It was the COVID-19 pandemic that rejuvenated the platform, producing a near-vertical surge in both posts and users – but then the platform gave up all of its pandemic gains in a single day, Oct. 21, 2020, and kept declining until Dec. 17, two months later, by which point it was back to where it stood a decade earlier in terms of size.
Each of these dates marks a key moment in Twitter’s journey toward “healthy conversations” and combating misinformation.
In March 2018, Twitter cofounder Jack Dorsey unveiled a company initiative to encourage “more healthy debate, conversations, and critical thinking” on its platform. Two months later, the company announced early results from efforts to combat users who “distort the conversation.” Using AI-based filtering, the company tested silently reducing the visibility of selected tweets, which it claimed had resulted in a 4%-8% drop in abuse reports. In July, the company followed with a preliminary set of metrics for assessing the “health” of its platform, one of which was the degree to which it functioned as an “echo chamber.”
Throughout 2018, Twitter emphasized its efforts to algorithmically reshape its platform to reduce abuse and make it more welcoming to a broader range of users. The company described its continued stagnation as the positive outcome of removing abusive and spam-like accounts.
Despite these efforts, little seems to have changed at the platform level. Daily tweets and tweeting users did not measurably increase, despite the reduction in abusive content. The percentage of tweets from verified users remained unchanged, while replies (seen below in blue) and mentions of other users (in gray) continued on their previous trajectories.
The only major change was that retweeting (seen above in orange), which had been growing linearly, leveled off in July 2018 and has remained flat ever since, at just over half of all tweets. That is the same month the company announced its emphasis on reducing echo chambers, and it occurred amid a broader societal reckoning over the dangers of retweets. At the same time, if the company’s goal was to reduce echo chambers, one would expect a reduction, rather than a stabilization, of retweets, especially of verified users.
The leveling off of retweets could alternatively reflect efforts to combat bot accounts. According to the company, during the 2016 presidential campaign just 50,000 Russian bots accounted for more than 1% of all election-related tweets on the entire platform, retweeting Donald Trump alone more than 500,000 times. With retweeting a common method of bot-based amplification, the company’s culling of prolific bot accounts might explain the leveling off.
Lending support to this idea is the graph below, which traces the average (blue) and median (orange) account age – the length of time since the account was created, not the age of the user behind it – of all Twitter accounts that posted each day. Both rose linearly from 2012 through July 2018, suggesting an aging community of early adopters with few new entrants. Yet both median and average account age leveled off in July 2018, at exactly the same time as retweets, suggesting a connection.
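To make the metric concrete, here is a minimal sketch in Python – not the author’s actual pipeline – of how a daily average and median account age could be computed from tweet records. The record layout and field names ("tweeted_at", "account_created_at") are invented for illustration; a real pipeline would also deduplicate by account, so that an account tweeting many times in one day is counted once.

```python
from collections import defaultdict
from datetime import date
from statistics import mean, median

# Hypothetical tweet records: each pairs the day a tweet was posted with the
# day the posting account was created. Field names are invented for this sketch.
tweets = [
    {"tweeted_at": date(2018, 7, 1), "account_created_at": date(2009, 3, 15)},
    {"tweeted_at": date(2018, 7, 1), "account_created_at": date(2018, 6, 20)},
    {"tweeted_at": date(2018, 7, 2), "account_created_at": date(2012, 1, 5)},
]

ages_by_day = defaultdict(list)
for t in tweets:
    # Account age = days elapsed since account creation, as of the tweet date.
    age_days = (t["tweeted_at"] - t["account_created_at"]).days
    ages_by_day[t["tweeted_at"]].append(age_days)

for day in sorted(ages_by_day):
    ages = ages_by_day[day]
    print(day, f"avg={mean(ages):.0f}d", f"median={median(ages):.0f}d")
```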
A flat median account age implies heavy churn: new users join, tweet for a period, then leave, in a steady and closely matched inflow and outflow of users. The persistent gap between average and median account age over this same period suggests that Twitter bifurcated in 2018 into two communities: early adopters who have remained on the platform, and a constant stream of new users who leave at a relatively steady rate. This further supports the conclusion that the development was related to anti-bot efforts, since bot accounts tend to retweet heavily: if Twitter were more aggressively deleting them, we should see a steady stream of new accounts registered to replace them, with a net-neutral effect on total tweet volume.
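This churn dynamic is easy to illustrate. The toy simulation below – with invented parameters, not fitted to Twitter data – models a retained cohort of early adopters alongside a steady stream of new accounts that tweet for 30 days and then go silent.

```python
from statistics import mean, median

# Invented parameters, chosen only to make the dynamic visible.
EARLY_ADOPTERS = 5_000   # accounts created on day 0 that tweet every day
NEW_PER_DAY = 500        # steady influx of new accounts each day
LIFESPAN = 30            # each new account tweets daily for 30 days, then leaves

for day in (100, 500, 1_000):
    # Early adopters are all `day` days old by now.
    ages = [day] * EARLY_ADOPTERS
    # Churning accounts active today were created within the last LIFESPAN
    # days, so their ages span 0..LIFESPAN-1 in equal numbers.
    for age in range(LIFESPAN):
        ages.extend([age] * NEW_PER_DAY)
    print(f"day {day:5d}: median={median(ages):6.1f}d  avg={mean(ages):6.1f}d")
```

No matter how much simulated time passes, the median stays pinned near the churners’ short tenure (19.5 days here) – the flat-median signature the paragraph describes – while the average is pulled well above it by the aging early cohort. In this toy model the average keeps climbing; a leveling average, as in the actual graph, would additionally require some attrition or dilution among the older accounts.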
In contrast, one of Twitter’s signature efforts to reduce misinformation had an existential impact on the platform’s growth. On Oct. 20, 2020, Twitter announced that it was adding “friction” to retweeting in the hope of slowing the spread of election-related falsehoods. The impact was immediate: retweeting fell by 20%, and the company was forced to roll back the change just two months later, on Dec. 16 – but the damage was done, and the platform gave up all of its pandemic growth.
Given the ad-supported nature of today’s social media platforms (89% of Twitter’s revenue comes from ads), growth must be weighed against the demands of advertisers. Though efforts to combat abuse and falsehoods appear to have a negligible or even negative impact on growth, they might make advertisers more comfortable and thus be crucial for monetization. After all, in 2020 more than 1,000 advertisers boycotted Facebook in protest of abusive content on its platform. However, that boycott occurred against the backdrop of an economic collapse in which most companies were already significantly reducing their ad buys; as the economy began to recover, most of those companies quietly returned to Facebook, suggesting that advertisers are willing to look past abusive content on social platforms in order to reach their customers.
In the end, it appears that Twitter’s “healthy conversation” efforts have had no major impact on the platform’s growth, despite repeated assertions that they would greatly improve it by opening the platform to new communities. Its signature election-misinformation effort backfired spectacularly, returning the platform to where it stood a decade ago. This raises serious questions about the degree to which misinformation and moderation efforts affect platform growth and, in turn, how compatible they are with the fiduciary duties platforms owe their shareholders as publicly traded companies. Answering these questions will require far more transparency into the social media platforms’ inner workings.
RealClear Media Fellow Kalev Leetaru is a senior fellow at the George Washington University Center for Cyber & Homeland Security. His past roles include fellow in residence at Georgetown University’s Edmund A. Walsh School of Foreign Service and member of the World Economic Forum’s Global Agenda Council on the Future of Government.