Vivers Olsek is Managing Director of Logically Fact, an Ireland-based global fact-checking organisation and signatory to the IFCN Code of Principles. From 2019 to 2022, he was Director of the International Fact-Checking Network.
Misinformation is a major threat in today’s digital age, especially during this global election year. While social media platforms serve as important conduits of information, they have also become breeding grounds for false and harmful content.
As fact checkers, we are tasked with sifting through this clutter, identifying falsehoods, and bringing quality information to the public. But the sheer volume and complexity of multimodal content – video, images, text – make this task increasingly difficult, and our current models of engagement with platforms don’t make it any easier.
The rise of generative AI has further compounded these challenges, enabling the creation of false content on an unprecedented scale. AI-generated videos and images can be indistinguishable from the real thing, making it even harder for fact checkers to identify and debunk falsehoods. Advances in this technology are outpacing traditional fact-checking methods.
This is happening as some platforms scale back their commitment to honesty and authenticity and others struggle to get started. Critics of fact-checking have tried to discredit it, and even some impartial observers have said that our efforts have failed. Some of them have a point.
Despite platforms promising to fight disinformation and support fact-checking, their efforts fall short in several key areas.
1. Lack of focus and arbitrary criteria: Unpredictable, unreliable moderation queues and a lack of technology solutions for fact-checkers mean that platform-fact-checker collaboration is often misdirected; instead of addressing truly harmful misinformation, attention goes to less significant content. Some of these collaborations have gotten off to a useful start, but they could be significantly improved through more timely interventions focused on potential harm and virality. Such prioritization could also lead to partnerships that can be scaled up or down based on current and emerging risks in a given information environment.
2. Lack of transparency: Social media companies frequently do not release important data about the volume and spread of misinformation on their platforms and limit access to content needed for investigative and intervention purposes, with some platforms completely blocking access to all content and data. This lack of access and transparency means that fact checkers cannot see the full extent of the problem and cannot effectively target the most dangerous content.
This problem creates an urgent need for standardized metrics and benchmarks across platforms, which is essential to promote consistent and reliable assessment of misinformation. This approach should align with local regulations, encourage platforms to adopt transparent practices, and foster a more effective response to the global misinformation challenge.
3. Insufficient support for fact-checkers: Platforms and the fact-checking community must work together to build the right tools and applicable technologies, including sufficient resources to keep fact checkers up to date on the evolving dynamics of online content and the tactics of those spreading misinformation. Without access to advanced technology and data, fact checkers will be at a significant disadvantage in detecting and analyzing the scale of AI-generated multimodal content.
4. Disappearing monitoring tools: A number of monitoring tools that once played a key role in fact checkers’ discovery process have been eliminated or limited. These tools were essential for tracking misinformation trends and identifying emerging falsehoods. With these tools gone, fact checkers have fewer resources to effectively monitor and counter the spread of misinformation.
Fact checkers are the vanguard of the fight against misinformation. Our expertise, honed through years of scrutinizing and verifying information, is essential to distinguish fact from fiction. As generative AI evolves, fact checkers must maintain their expertise and understanding of local context. AI can help speed up the process, but it can’t replace the nuanced understanding and critical thinking that fact checkers bring.
Current platform capabilities are primarily focused on on-platform analysis and mitigation. As a result, platforms are often taken by surprise when new narratives, tactics, techniques, and procedures — known in the cybersecurity world as TTPs — emerge and content spreads seemingly out of nowhere. Having a cross-platform perspective and collaborative infrastructure across major and emerging platforms would allow for earlier detection of misinformation trends and a more coordinated response.
From my experience as director of the International Fact-Checking Network, I have witnessed the challenges and triumphs of our community. Fact checkers are on the front lines of this battle and must leverage every tool they have to best contribute to trust and safety efforts. We must push our platform partners and the newly formed trust and safety teams at AI companies to take more meaningful action, and insist on more robust policies and increased transparency about the sheer volume of issues. To effectively counter misinformation, we need technologies that enable rapid, accurate analysis and verification of content and near real-time responses to misinformation. This capability is essential to stay ahead of the constantly evolving strategies used by purveyors of misinformation.
Our goal is to identify and correct the falsehoods that are most likely to cause harm and spread rapidly, and to foster a more informed and resilient information ecosystem. This requires a collaborative effort to share insights, strategies and innovations that strengthen our capacity and increase our impact. The upcoming Global Facts conference is an important opportunity to further this dialogue and provides a platform for us to work together and strengthen our resolve.
The progress made over the past few years has created early signs of success and many opportunities to iterate and learn. Don’t get me wrong: we are losing the war against misinformation and disinformation, but we’ve also won important battles. Learning from these fights and scaling collaboration across platforms and fact-checkers is our opportunity to turn the tide.
By working together, we can bring about positive change on social media platforms, especially those that have yet to benefit from a fact-checking community, ensuring that our efforts are easily adopted and targeted at content with the greatest potential for harm and spread. Together, we can create a more informed and resilient society, able to withstand the onslaught of misinformation and confront adversity more proactively.