Published: 11 January 2026, 12:51
The growing use of deepfake and cheapfake content is fueling concern over electoral integrity in Bangladesh, as analysts and authorities warn that manipulated videos, images, and audio are increasingly being used to mislead voters and damage political opponents.
During Bangladesh’s 12th national parliamentary election on January 7, 2024, a fake video circulated on polling day showing an independent candidate from Gaibandha-1 appearing to announce his withdrawal from the race. The video was later identified as a deepfake, but not before it caused confusion among voters.
The incident mirrors a global trend. A 2024 report by Germany-based Konrad Adenauer Foundation found that deepfake-related election interference has occurred in multiple countries, including the United States, India, Indonesia, Turkey, Slovakia, Argentina, Poland, Taiwan, and France.
In the United States, an AI-generated robocall circulated ahead of the January 2024 New Hampshire Democratic primary appeared to feature then-President Joe Biden's voice discouraging voters from participating. Authorities later confirmed the audio was fabricated.
Fact-checkers and digital rights analysts say video has emerged as the most effective tool for spreading political disinformation, particularly as elections approach. Monitoring data show that nearly two-thirds of verified misinformation cases in recent months involved video content, outpacing text and images.
Advances in artificial intelligence have made it easier to produce highly realistic fake videos, audio clips, and images—known as deepfakes—while cheaper software tools are used to create “cheapfakes,” which often involve misleading edits, fabricated captions, or imitation news graphics.
Analysts identified at least 10 commonly used tactics, including attaching misleading captions to genuine footage, selectively editing real statements to change their meaning, attributing fabricated quotes to public figures, and presenting old images or videos as current events. Coordinated campaigns often spread the same false narratives simultaneously across multiple accounts and platforms.
Monitoring organizations reported more than 1,400 instances of misinformation in the three months leading up to the election, with political content accounting for the majority. Most cases involved outright falsehoods rather than misinterpretations or partial truths.
At least a dozen senior political figures from different parties have been targeted by manipulated or fabricated content, according to fact-checkers.
Analysts say disinformation campaigns are driven by both ideological motives and financial incentives, often operating through networks of fake social media accounts known as “bot armies.” While identifying misleading accounts is relatively easy, tracing the individuals or organizations behind them requires formal investigation.
Law enforcement sources say misinformation has been spread both in support of and against contesting parties. They also allege that politically motivated groups not participating in the election have played a significant role in online disinformation campaigns.
Government agencies acknowledge the growing threat. The National Cyber Security Agency has formed a special task force to counter fake information and online rumors through the election period, coordinating with media bodies, regulators, and law enforcement agencies.
The Criminal Investigation Department’s Cyber Police Center says it is monitoring social media platforms around the clock and using forensic analysis to identify and counter disinformation.
Election officials have repeatedly warned that deepfakes and cyber manipulation pose serious risks to electoral credibility. However, experts say response measures remain slower than the pace at which false content spreads.
Disinformation often spreads faster than corrections, leading many voters to accept false narratives as fact. Analysts warn that the problem has reached a point where even political leaders and media outlets have, at times, relied on unverified social media content later proven to be false or AI-generated.
With artificial intelligence tools becoming more accessible and sophisticated, experts say the challenge of protecting elections from digital manipulation is likely to intensify unless stronger safeguards, digital literacy, and rapid-response mechanisms are put in place.