Elon Musk gets Twitter content rules wrong

By Laura Wirth
May 10, 2022

Unfortunately, Twitter and other platforms often apply their policies inconsistently, so it’s easy to find examples supporting one conspiracy theory or another. A review by New York University’s Center for Business and Human Rights found no reliable evidence to support the allegation of anti-conservative bias by social media companies, even calling the allegation itself a form of misinformation.

A more direct assessment of political bias by Twitter is difficult due to the complex interactions between people and algorithms. People, of course, have political biases. For example, our experiments with political social bots found that Republican users are more likely to mistake conservative bots for humans, while Democratic users are more likely to mistake conservative human users for bots.

To remove human bias from the equation in our experiments, we deployed a group of benign social bots on Twitter. Each of these bots started by following one news source, with some bots following a liberal source and some a conservative one. After this initial follow, all bots were left alone to “drift” through the information ecosystem for a few months. They could gain followers. They all acted according to an identical algorithmic behavior. This included following or unfollowing random accounts, tweeting meaningless content, and retweeting or copying random posts into their feed.

But this behavior was politically neutral, without any understanding of the content seen or published. We tracked the bots to probe for political biases emerging from how Twitter works or how users interact.
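As a rough illustration of this kind of neutral “drifter” behavior, the sketch below implements a bot that picks random, content-blind actions and logs what it follows and sees. The `client` object and its methods are hypothetical stand-ins for whatever API wrapper such an experiment would use; the action mix, pacing, and logging fields are placeholders, not our actual implementation.

```python
import random
import time

def nonsense_text():
    # Placeholder for meaningless, content-neutral text.
    words = ["blue", "maybe", "seven", "cloud", "later", "okay"]
    return " ".join(random.sample(words, 3))

def drifter_step(client, home_timeline):
    """Perform one politically neutral action, chosen at random."""
    action = random.choice(["follow", "unfollow", "tweet", "retweet"])
    if action == "follow" and home_timeline:
        # Follow the author of a random post the bot was shown.
        client.follow(random.choice(home_timeline).author_id)
    elif action == "unfollow":
        friends = client.list_friends()
        if friends:
            client.unfollow(random.choice(friends))
    elif action == "tweet":
        client.post_status(nonsense_text())
    elif action == "retweet" and home_timeline:
        # Reshare a random post with no understanding of its content.
        client.retweet(random.choice(home_timeline).id)

def run_drifter(client, initial_source, days=150, actions_per_day=10):
    """Follow one news source, then drift; log exposure for later analysis."""
    client.follow(initial_source)
    log = []
    for day in range(days):
        home_timeline = client.fetch_home_timeline()
        for _ in range(actions_per_day):
            drifter_step(client, home_timeline)
            time.sleep(60)  # placeholder pacing between actions
        log.append({
            "day": day,
            "friends": client.list_friends(),
            "followers": client.list_followers(),
            "timeline_authors": [p.author_id for p in home_timeline],
        })
    return log
```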

Surprisingly, our research provided evidence that Twitter has a conservative rather than a liberal bias. On average, the accounts were drawn toward the conservative side. Liberal accounts were exposed to moderate content, which shifted their experience toward the political center, while the interactions of right-leaning accounts were skewed toward conservative content. Accounts that followed conservative news sources also received more politically aligned followers, becoming embedded in denser echo chambers and gaining influence within these partisan communities.
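To make the notion of “drift” concrete: assuming each account can be assigned a political alignment score in [-1, 1] (liberal to conservative), one can average those scores over what a bot follows and what it is exposed to each day. The `alignment_of` function below is hypothetical, and this is a simplified summary, not the analysis we actually ran.

```python
from statistics import mean

def drift_summary(daily_log, alignment_of):
    """Daily average political alignment of a drifter's friends and exposure.

    `daily_log` is the kind of tracking log sketched above; `alignment_of`
    maps an account id to a score in [-1, 1] (-1 liberal, +1 conservative).
    Both are assumptions for illustration.
    """
    summary = []
    for entry in daily_log:
        friends = [alignment_of(a) for a in entry["friends"]]
        exposure = [alignment_of(a) for a in entry["timeline_authors"]]
        summary.append({
            "day": entry["day"],
            "friend_alignment": mean(friends) if friends else 0.0,
            "exposure_alignment": mean(exposure) if exposure else 0.0,
        })
    return summary
```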

These differences in experiences and actions could only be attributed to interactions with other users and to information surfaced by the social media platform. But we couldn’t directly examine possible bias in Twitter’s news feed algorithm because the actual ranking of posts in the “home timeline” is not available to outside researchers.

Twitter researchers were, however, able to audit the effects of their ranking algorithm on political content, revealing that the political right enjoys higher amplification than the political left. Their experiment showed that in six of the seven countries studied, conservative politicians enjoy greater algorithmic amplification than liberals. They also found that algorithmic amplification favors right-wing news sources in the United States.
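Twitter’s audit compared how far political content reached in algorithmically ranked home timelines versus a holdout group that kept a reverse-chronological feed. The function below is a simplified illustration of such an amplification ratio, not Twitter’s exact metric; the numbers in the example are hypothetical.

```python
def amplification_ratio(reach_algorithmic, size_algorithmic,
                        reach_chronological, size_chronological):
    """Simplified amplification ratio for a set of tweets (e.g., one party's politicians).

    Compares the fraction of users reached in the algorithmically ranked group
    with the fraction reached in the reverse-chronological holdout group.
    Illustrative only, not Twitter's published definition.
    """
    frac_algo = reach_algorithmic / size_algorithmic
    frac_chrono = reach_chronological / size_chronological
    return frac_algo / frac_chrono  # > 1 means the ranking algorithm amplifies this content

# Hypothetical example: content reaching 12% of ranked timelines
# but only 8% of chronological timelines has a ratio of 1.5.
print(amplification_ratio(120_000, 1_000_000, 80_000, 1_000_000))
```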

Our research and Twitter’s own research show that Musk’s apparent concern about a bias on Twitter against conservatives is baseless.

Moderators or censors?

The other allegation Musk appears to be making is that excessive moderation stifles free speech on Twitter. The concept of a free market of ideas is rooted in John Milton’s age-old reasoning that truth prevails in a free and open exchange of ideas. This view is often cited as the basis of arguments against moderation: accurate, relevant, and timely information should emerge spontaneously from user interactions.

Unfortunately, several aspects of modern social media hinder the free market of ideas. Limited attention and confirmation bias increase vulnerability to misinformation. Engagement-based ranking can amplify noise and manipulation, and the structure of information networks can distort perceptions and be “manipulated” to favor one group.
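The ranking problem is easy to see in a toy example. Below, posts are ordered by a simple engagement score instead of recency, so a burst of coordinated likes and retweets, however it was produced, pushes a post ahead of newer, organic content. The weights and numbers are arbitrary placeholders, not any platform’s real formula.

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    age_hours: float
    likes: int
    retweets: int

def engagement_score(post, like_w=1.0, retweet_w=2.0):
    # Arbitrary placeholder weights: engagement counts, discounted by age.
    return (like_w * post.likes + retweet_w * post.retweets) / (1.0 + post.age_hours)

posts = [
    Post("careful reporting", age_hours=1.0, likes=15, retweets=3),
    Post("inflammatory claim boosted by a bot network", age_hours=6.0, likes=900, retweets=400),
]

chronological = sorted(posts, key=lambda p: p.age_hours)            # newest first
engagement_ranked = sorted(posts, key=engagement_score, reverse=True)

# Under engagement ranking, the artificially boosted post comes out on top
# even though it is older; under a chronological feed it would not.
print([p.text for p in chronological])
print([p.text for p in engagement_ranked])
```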

As a result, social media users have in recent years been the victims of manipulation by astroturfing, trolling and misinformation. Abuse is facilitated by social bots and coordinated networks that create the appearance of human mobs.

We and other researchers have observed these inauthentic accounts amplifying misinformation, influencing elections, committing financial fraud, infiltrating vulnerable communities, and disrupting communication. Musk has tweeted that he wants to defeat spambots and authenticate humans, but these are not easy or necessarily effective solutions.

Inauthentic accounts are used for malicious purposes beyond spam and are difficult to detect, especially when exploited by people in conjunction with software algorithms. And removing anonymity can hurt vulnerable groups. In recent years, Twitter has adopted policies and systems to moderate abuse by aggressively suspending accounts and networks displaying inauthentic coordinated behavior. A weakening of these moderation policies could make abuse rampant again.

Manipulating Twitter

Despite Twitter’s recent progress, integrity remains a challenge on the platform. Our lab is discovering new types of sophisticated manipulation, which we will present at the AAAI International Conference on Web and Social Media in June. Malicious users exploit so-called “follow trains” – groups of people who follow each other on Twitter – to rapidly grow their followings and create large, dense, hyperpartisan echo chambers that amplify toxic content from unreliable and conspiratorial sources.
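A crude structural signature of a follow train is a group of accounts whose followership graph is both dense and highly reciprocal. The sketch below, which assumes the follow edges among a suspect group have already been collected, illustrates that idea; it is not the detection method from our study, and the thresholds are placeholders.

```python
import networkx as nx

def looks_like_follow_train(follow_edges, density_threshold=0.5,
                            reciprocity_threshold=0.8):
    """Flag a group of accounts whose mutual-follow structure is suspiciously dense.

    `follow_edges` is an iterable of (follower, followed) pairs among the
    suspect accounts. Thresholds are illustrative placeholders.
    """
    g = nx.DiGraph()
    g.add_edges_from(follow_edges)
    if g.number_of_nodes() < 2:
        return False
    density = nx.density(g)          # fraction of possible follow links that exist
    reciprocity = nx.reciprocity(g)  # fraction of follow links that are mutual
    return density >= density_threshold and reciprocity >= reciprocity_threshold

# Hypothetical example: five accounts that all follow one another.
accounts = ["a", "b", "c", "d", "e"]
edges = [(u, v) for u in accounts for v in accounts if u != v]
print(looks_like_follow_train(edges))  # True: fully connected and fully reciprocal
```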

Another effective malicious technique is to post and then strategically delete content that violates the platform’s terms once it has served its purpose. Even Twitter’s generous limit of 2,400 tweets per day can be circumvented through deletions: we’ve identified numerous accounts that flood the network with tens of thousands of tweets per day.
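One way to surface this deletion-based flooding, sketched below, is to count how many tweets per account actually appear in a streamed sample each day and compare that with the platform’s posting limit; accounts that post far beyond the limit must be deleting as they go. The stream format and the `factor` threshold are assumptions for illustration, not our study’s exact method.

```python
from collections import Counter
from datetime import datetime

DAILY_TWEET_LIMIT = 2400  # Twitter's stated per-day posting limit

def flag_flooders(streamed_tweets, factor=2):
    """Flag accounts whose observed daily volume far exceeds the posting limit.

    `streamed_tweets` is an iterable of (user_id, timestamp) pairs collected
    from a sample stream; the data format and `factor` are illustrative.
    """
    per_account_day = Counter()
    for user_id, ts in streamed_tweets:
        day = ts.date() if isinstance(ts, datetime) else ts
        per_account_day[(user_id, day)] += 1

    return sorted(
        {user for (user, _day), n in per_account_day.items()
         if n > factor * DAILY_TWEET_LIMIT}
    )
```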

We have also found coordinated networks that engage in repeated likes and unlikes of content that is eventually deleted, which can manipulate ranking algorithms. These techniques allow malicious users to inflate the popularity of content while evading detection.
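If like and unlike events can be observed, or inferred from repeated snapshots of like counts, repeated toggling by the same accounts on the same posts is itself a signal. A minimal sketch, assuming a hypothetical event log of (account, post, action) tuples, is below; the threshold is a placeholder.

```python
from collections import Counter

def repeated_toggles(events, min_cycles=3):
    """Count like/unlike cycles per (account, post) pair and flag repeat offenders.

    `events` is an assumed log of (account_id, post_id, action) tuples where
    action is "like" or "unlike"; `min_cycles` is an illustrative threshold.
    """
    unlikes = Counter()
    for account, post, action in events:
        if action == "unlike":
            unlikes[(account, post)] += 1
    # An account that unlikes the same post several times must have re-liked
    # it in between, i.e., it is cycling engagement to game ranking signals.
    return {pair: n for pair, n in unlikes.items() if n >= min_cycles}
```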

Musk’s plans for Twitter are unlikely to do anything about these manipulative behaviors.

Content moderation and freedom of expression

Musk’s likely acquisition of Twitter raises fears that the social media platform could decrease moderation of its content. This body of research shows that stronger, not weaker, moderation of the information ecosystem is needed to combat harmful misinformation.

It also shows that weaker moderation policies would ironically harm free speech: the voices of real users would be drowned out by malicious users who manipulate Twitter through inauthentic accounts, bots and echo chambers.

Filippo Menczer is a professor of informatics and computer science at Indiana University.

This article is republished from The Conversation under a Creative Commons license. Read the original article.
