Whilst social media platforms offer a wealth of information, communication possibilities, and entertainment, inaccurate and deceptive content is a persistent problem on online networks. Posting and sharing misleading or false information, known as misinformation, is easy to do and difficult to reverse or control. With misinformation spreading at the click of a button, tech companies are struggling to regulate it on social media, grappling with public responsibility, definitions of free speech, and the identification of such content.
In recent years, misinformation, disinformation, and fake news have mostly involved the topics of politics, health, immigration, and climate change. As of the beginning of 2022, news consumers in Latin America, Asia, North America, and Europe had seen the most false or misleading information surrounding COVID-19. Overall, social media has become a source of COVID-19 news for many Americans: in 2021, almost a fifth of U.S. adults reported getting a lot of their news about COVID-19 vaccines from social media.
Meta’s Instagram, popular with influencers and known for lifestyle content, has become a common platform through which misinformation spreads. Between July and August 2020, during the early months of COVID-19 and the subsequent lockdowns, anti-vaccination accounts on Instagram gained over 620,000 followers. Although the social media giants Facebook and YouTube also continued to see growth in anti-vaccination accounts, Instagram saw the biggest shift during the measured period. Additionally, coronavirus posts made up 57.7 percent of recommended misinformation on Instagram, and a further 21.2 percent of such posts were about vaccines.
Social media platform usage also differs between users who are vaccinated or intend to be and those who do not intend to be vaccinated. As of August 2021, U.S. social media users who did not intend to be vaccinated were slightly more likely to use Facebook, and much more likely to use YouTube, than those who were vaccinated or intended to be.
In the weeks following Russia’s invasion of Ukraine on February 24, 2022, social media companies started banning content from Russian state-controlled media outlets in the United States. According to a survey conducted in the U.S. in March 2022, half of all respondents strongly supported social media firms banning such content. Support for the ban varied with voting history: 60 percent of respondents who voted for President Biden in the 2020 election strongly supported it, compared with 41 percent of those who voted for Donald Trump in 2020.
Conversely, in Russia, 48 percent of respondents aged 18 to 24 years said they did not support the banning of Facebook and Instagram in the country at all, after Meta was designated an extremist organization there. Russian support for the decision was highest amongst those aged 40 to 54 years.
Social media companies have two main ways of combatting misinformation: fact-checkers and algorithms, each with its own pros and cons. Algorithms are a useful tool for fact-checkers because of the sheer amount of content they can cover, but they are limited in how well they can assess that content. A 2021 survey conducted in the United States found that 38 percent of respondents thought it was a good idea to let social media companies use algorithms to find false information. Additionally, a quarter of respondents said that the AI used by social media companies was better than humans at finding false information.
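To make that trade-off concrete, here is a deliberately minimal sketch in Python of rule-based screening. Everything in it is hypothetical: the flagged phrases, the flag_post function, and the sample posts are illustrative only, not any platform's actual system. It shows how a simple rule scales effortlessly across millions of posts yet cannot tell a false claim apart from a post debunking that same claim.

```python
# Hypothetical sketch of algorithmic screening, NOT any platform's real system.
# It illustrates the trade-off discussed above: keyword rules are cheap enough
# to run over enormous volumes of posts, but they cannot judge context.

# Illustrative list of claims to flag (assumption for this sketch).
FLAGGED_PHRASES = [
    "vaccines contain microchips",
    "5g causes covid",
]

def flag_post(text: str) -> bool:
    """Return True if the post contains any flagged phrase (case-insensitive)."""
    lowered = text.lower()
    return any(phrase in lowered for phrase in FLAGGED_PHRASES)

posts = [
    "Breaking: vaccines contain microchips!",                   # false claim: flagged
    "Fact check: 'vaccines contain microchips' is a myth.",     # debunking: also flagged
    "There are tiny chips in the jab, pass it on!",             # paraphrase: missed
    "Got my second dose today, feeling fine.",                  # benign: not flagged
]

for post in posts:
    print(flag_post(post), "-", post)
```

The debunking post is flagged just like the false claim it corrects, while a paraphrase of the claim slips through entirely; distinguishing these cases is exactly the contextual judgment that human fact-checkers provide and that simple automated screening lacks.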
The topic of misinformation also raises debates around who is responsible for controlling the spread of false content. Does the task of managing misinformation fall to social media users, companies, or government agencies? According to a 2021 U.S. survey, 51 percent of respondents said that social media companies should play a major role in setting standards for how these AI programs should be used.
Although misinformation on social media can seem like a problem without a clear solution, it is widely acknowledged by experts in the tech industry as a highly concerning issue, and the tackling of misleading and false content online is actively receiving attention from professionals and policymakers. In Europe, the Digital Services Act, aimed at creating safer online environments, intends to hold large platforms accountable for harmful content and to require risk assessments, independent audits, and overall transparency regarding the usage of algorithms. In the UK, the proposed Online Safety Bill calls for online platforms to give more consideration to their users and aims to target trolling, harmful content, and internet fraud, among other pressing online issues.
Fortunately, a global survey from 2021 found that 61 percent of tech experts believed that digital spaces, and the ways we use them, will change in the coming years in ways that serve the public well.