Growing up, as I developed my political consciousness, there was one thing I heard repeated ad nauseam: you cannot trust politicians. Across party lines and beliefs, it has been a widely acknowledged truth that those vying for power manipulate information, make false promises, smear opponents, and engage in corrupt information practices. As the election approaches, various sources continue to call on us to be wary of rumors, gossip, and unsubstantiated information spreading unchecked and swaying voters towards certain decisions.
Given that this is the accepted state of affairs, we have to wonder why the introduction of deepfakes into the ongoing election cycle is causing so much anxiety. Some answers are obvious. Although we may be accustomed to encountering falsified information, we had an innate trust in our ability to sift out the truth. We had faith that we could see through manipulation, and we had access to alternative sources of information that could verify and corroborate uncertain claims. We also trusted media and regulatory institutions to check, contain, and confirm, to varying degrees, the veracity of the information that reached us. We were media savvy and could tell when things had been changed, edited, or revised. Lies and fakes have always been part of the information ecosystem, but we had confidence in the tools, strategies, and collective experience available to us to investigate and assess the truth of these messages.
The biggest change brought about by deepfakes is not in the nature of the information, but in our ability to trust our own judgments about it. Faced with deepfakes, we are not simply being fooled. That would have been easier to deal with. If we were merely being fooled, we would find information, data, science, and interventions to provide reliable evidence and verify the claims. There would be technological solutions and apps to detect potential fakes and reveal their true nature. Community intervention, where people give context, challenge information, verify it, and fact-check it – which is what we are already doing – would even out the situation. The current state of algorithmic detectors would speed up the fact-checking process.
Conversations about deepfakes often focus on managing the creation, distribution, and reception of this information. But real attention must be directed to the anomalous state to which we have become naturalized over time: a state in which we cannot trust our own judgments about whether something is true or not. Even with all the tests and answers, the question remains: “Can I trust my analysis of this information?” Deepfakes are clearly advanced technological wizardry that allows the unreal to be passed off as real. But what sets them apart from the long history of information falsification is that we have lost confidence in what we believe to be true.
With the emergence of social media as the default platform for consuming information over the past few decades, two things have happened that challenge our confidence that we can believe what we see. The first is context collapse. We trust information not only because of its content, but also because of the context in which we receive it. Trust is a dialogue; it is a relationship. What a friend tells you is more reliable than what a stranger shares with you. A certified expert on a subject may be more trustworthy than someone merely expressing an opinion. But the era of expertise has collapsed with the flattening of social media interfaces. We consume everything through the same interface, paying little attention to the source. Even when we know the source of the information, we do not know whether the information was analyzed by that source or simply cherry-picked and relayed to us by digital engagement algorithms. When the context of information is disrupted, we do not know how to trust it, and our belief in our ability to tell the fake from the real is challenged.
The second thing that has become normalized on digital platforms is information overload. We are so saturated with information that we no longer seek it out of our own volition. Information is given to us, and if we look for sources to support it, those too are provided. We do not control the source or context of our media consumption. We have entrusted this to algorithms that curate, manipulate, shape, and distribute information based on preset logics of profit and engagement. Information overload also means that we consume information quickly, at a rate that makes thoughtful engagement difficult. Things pass by in the blink of an eye as we scroll, and we work primarily on intuition and fragmented impressions, trying to gather the meaning of information from headlines and from a flow already defined by vested algorithms.
So if India’s electoral process is worried about deepfakes, it needs to realize that deepfakes can be controlled only as long as there is clarity about what is real and what is fake. No amount of regulation will stop the circulation of deepfakes as long as we continue to accept context collapse and information overload as the default mode of social media and digital engagement. For we now live in an era where judgment has been suspended, and anything that can deceive us is potentially fake.
The author is Professor of Global Media at the Chinese University of Hong Kong and a Faculty Associate at the Berkman Klein Center for Internet and Society at Harvard University.