These organizations use networks of fake accounts, bots, and seemingly legitimate websites to manipulate public opinion and influence major global events, such as political elections.
Where Are They and Who Controls Them?
The main web farms have been identified in Russia, where they operate under the direct control of state agencies such as the FSB (Federal Security Service) and organizations like the Internet Research Agency, based in St. Petersburg. Other countries, such as Iran and China, have also been linked to similar disinformation activities, often through affiliated networks or parallel groups.
How Do They Operate?
These structures exploit anonymity and the ease of creating accounts on social media to spread false or misleading news. Their techniques have grown increasingly sophisticated thanks to advances in artificial intelligence, which allow them to generate fake articles, images, and videos quickly and credibly. In addition to social media, they create websites that mimic reliable news sources, increasing the apparent plausibility of fake news.
Main Targets
The main objectives of web farms are:
Political elections (mainly in the United States, but also in Europe and Israel)
Geopolitical issues such as the conflict in Ukraine or perceptions of NATO
Influential public figures (politicians, journalists, entrepreneurs) and their followers
The general public, exploiting biases and the tendency to share unverified content
The ultimate aim is to manipulate public debate, destabilize democratic systems, and favor the political or strategic interests of the countries controlling these web farms.
Preferred Platforms
Web farms use a combination of digital channels to maximize the spread of fake news, creating a “cascade effect” that makes it difficult to track and remove false information:
Telegram: Campaigns often start here due to low moderation and the possibility of creating anonymous channels and groups.
X (formerly Twitter): Viral spread through retweets and easy creation of fake accounts. Fake news is 70% more likely to be reshared than real news.
Facebook: Wide distribution through dedicated pages and groups.
YouTube: Viral videos crafted to spread fake news and conspiracy theories, especially in medical, scientific, and political fields.
TikTok: Emerging platform for reaching young audiences, leveraging the virality of short videos and difficulties in moderation.
AI-generated websites: Thousands of unreliable websites posing as legitimate sources.
Other platforms: Instagram, Truth Social, online forums, and Chinese platforms like Weibo and Tencent, used for targeted campaigns or to bypass restrictions.
Who Funds Web Farms?
Funding for web farms comes from a mix of state resources (especially in authoritarian countries like Russia), private companies linked to governments, and monetization mechanisms provided by global digital platforms.
Key players:
Internet Research Agency (IRA): Founded and funded by Yevgeny Prigozhin, an oligarch close to Putin and former head of the Wagner militia, who died in a plane “accident” on August 23, 2023. During the 2016 US elections the IRA operated on a monthly budget of at least $1.25 million (likely similar for 2024), managed through the Concord Management and Consulting holding.
Concord Management and Consulting: Key company in managing and funding the IRA and other Russian troll farms, acting as an intermediary between government funds and disinformation operations.
State agencies and intelligence services: Strategies and campaigns are often approved and supervised by high-level officials from the Russian presidential administration and secret services.
Marketing companies and private operators: Some campaigns are entrusted to marketing agencies or apparently independent troll farms, often in Africa and Asia, but coordinated by governments or power groups.
Big Tech and programmatic advertising: Western digital platforms like Facebook and Google, through programmatic advertising, indirectly fund disinformation sites and web farms by allowing the monetization of fake news.
Russia vs Iran: Two Disinformation Models
Russian and Iranian disinformation campaigns share the goal of undermining trust in Western democratic institutions but differ in strategies, techniques, targets, and prevailing narratives.
How to Combat Fake News and Disinformation
To limit the damage caused by disinformation and fake news, an integrated approach involving citizens, institutions, businesses, digital platforms, and verification bodies is necessary. The most effective strategies include:
Education and Media Literacy
Critical education: Promote educational programs in schools and society to develop critical thinking, the ability to recognize reliable sources, and awareness of online manipulation mechanisms.
Awareness campaigns: Share information on how to recognize fake news and adopt behaviors that do not amplify disinformation.
Source Verification and Fact-Checking
Use of verification tools: Rely on independent fact-checking sites and organizations to check the truthfulness of news before sharing.
Collaboration between platforms and fact-checkers: Strengthen partnerships between social networks and fact-checkers to quickly identify and flag false content.
Regulation and Platform Responsibility
Anti-disinformation regulations: Enact laws that raise the costs for those spreading fake news, with sanctions and clear responsibilities for platforms hosting false content.
Codes of conduct: Apply and update codes of good practice (such as the EU Code of Practice on Disinformation) requiring transparency, rapid removal of false content, and tools to report disinformation.
Advanced Technologies and Artificial Intelligence
AI for monitoring: Use artificial intelligence to detect suspicious content, deepfakes, and fake account networks in real time, flagging risks for human experts to verify (a simplified detection sketch follows this list).
Automated verification tools: Integrate APIs and control systems into corporate and editorial processes to filter and validate information before publication (an example API query also follows this list).
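To make the “AI for monitoring” point more concrete: one basic signal used to spot coordinated fake-account networks is many distinct accounts pushing near-identical text within a short time window. The Python sketch below is a deliberately simplified illustration of that single signal, not a production detector; the Post structure, the normalization step, and the window and threshold values are assumptions chosen for the example.

```python
"""
Simplified sketch of one coordination signal: many accounts posting
near-identical text within a short time window. Real systems combine
many more signals (account age, posting cadence, network graphs).
"""
from collections import defaultdict
from dataclasses import dataclass


@dataclass
class Post:
    account: str
    text: str
    timestamp: int  # Unix seconds


def normalize(text: str) -> str:
    """Collapse case and whitespace so lightly edited copies still match."""
    return " ".join(text.lower().split())


def flag_coordinated_clusters(posts: list[Post],
                              window_seconds: int = 600,
                              min_accounts: int = 5) -> list[set[str]]:
    """Flag texts pushed by many distinct accounts within `window_seconds`."""
    by_text: dict[str, list[Post]] = defaultdict(list)
    for post in posts:
        by_text[normalize(post.text)].append(post)

    clusters = []
    for same_text in by_text.values():
        same_text.sort(key=lambda p: p.timestamp)
        start = 0
        # Sliding window over timestamps to find bursts of distinct accounts.
        for end in range(len(same_text)):
            while same_text[end].timestamp - same_text[start].timestamp > window_seconds:
                start += 1
            accounts = {p.account for p in same_text[start:end + 1]}
            if len(accounts) >= min_accounts:
                clusters.append(accounts)
                break  # one flag per message is enough for human review
    return clusters
```

Flagged clusters would then go to the human reviewers mentioned above, rather than triggering any automatic removal.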
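For the “Automated verification tools” item, one concrete integration is to query an existing fact-checking index before a claim is published. The sketch below queries Google’s Fact Check Tools API (the claims:search endpoint); it is a minimal example, the environment-variable name and output handling are illustrative, and the field names should be verified against the current API documentation.

```python
"""
Minimal sketch: querying Google's Fact Check Tools API before publishing a claim.
Assumes an API key is available; verify endpoint and fields against current docs.
"""
import os

import requests

FACTCHECK_ENDPOINT = "https://factchecktools.googleapis.com/v1alpha1/claims:search"


def check_claim(claim_text: str, language: str = "en") -> list[dict]:
    """Return existing fact-checks that match the claim text."""
    params = {
        "query": claim_text,
        "languageCode": language,
        "key": os.environ["GOOGLE_FACTCHECK_API_KEY"],  # assumption: key set in the environment
    }
    response = requests.get(FACTCHECK_ENDPOINT, params=params, timeout=10)
    response.raise_for_status()

    results = []
    for claim in response.json().get("claims", []):
        for review in claim.get("claimReview", []):
            results.append({
                "claim": claim.get("text"),
                "publisher": review.get("publisher", {}).get("name"),
                "rating": review.get("textualRating"),
                "url": review.get("url"),
            })
    return results


if __name__ == "__main__":
    # Example: surface prior fact-checks for a draft claim before it is published.
    for match in check_claim("5G towers spread COVID-19"):
        print(f'{match["publisher"]}: "{match["rating"]}" -> {match["url"]}')
```

A match only means the claim has already been reviewed by a fact-checker; the absence of a match does not make a claim true, so editorial judgment still applies.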
User Responsibility
Reporting tools: Provide users with simple tools to flag deceptive content and easily access reputable sources.
Don’t share without verification: Encourage responsible behaviors, such as avoiding impulsive sharing of unverified news and consulting multiple sources.
International Collaboration and Transparency
Task forces and observatories: Create international task forces and observatories (such as the European Digital Media Observatory, EDMO, in the EU) to monitor these phenomena, share data and strategies, and constantly update countermeasures.
Research data access: Ensure researchers have access to platform data to study and effectively counter disinformation.
Conclusion
Fighting disinformation requires a coordinated response combining education, technology, regulation, collaboration between public and private actors, and individual responsibility.
Only in this way can the impact of fake news be reduced and the health of public debate and democratic institutions be protected.