Slow is better than wrong
Erin Kernohan-Berning
3/11/2026 · 4 min read
On February 28, the US and Israel attacked Iran, kicking off a major war in the Middle East. Since then, a torrent of misattributed videos of missile attacks, fake images, and AI-generated slop has flooded the internet, particularly on social media. So, now is a good time to review our information literacy skills to help us determine what is real and what is fake.
Disinformation campaigns and wartime propaganda are nothing new; only the tools have evolved. While AI-generated content certainly adds a terrifying efficiency to those efforts, concentrating on the question “is this AI?” isn’t especially useful. This is good news, because it means that having an eye for AI isn’t necessary for discerning whether something depicts a real event or has been concocted to deceive.
Misleading information can be broken down into three categories. Misinformation is incorrect information that is shared without malicious intent – a misconception that winds up spreading. Malinformation is real information that is spread without context or with an incorrect context – a good example is the compilation of flood videos that made the rounds on social media last November, attributed to Hurricane Melissa in Jamaica but actually showing unrelated storms in other countries. Disinformation is incorrect information that is spread with the intent to deceive – whether by people with financial or political interests, or simply for clout.
Journalist Justin Ling, whose work can be seen in the Toronto Star, Wired, and Foreign Policy Magazine, had some good advice on Bluesky when the first missiles were launched. He said, “it’s better to get the news slowly than to get it wrong.” Social media is extremely reactive – algorithms reward immediate content and hot takes, with very little (if any) fact checking. When you have the privilege of safety, avoiding minute-by-minute updates in favour of scheduling your news intake is a good strategy. Viewing the news once or twice daily gives time for details and analysis to emerge from what is inevitably a chaotic situation.
When you do see a video online that you are unsure of, look at the source. Jeremy Carrasco, a freelance video and audio engineer turned AI video debunker, suggests first going to the account that posted the video. Does it look like a reliable source? Many accounts that post AI-generated videos are engagement farms, meaning they post a lot of similar-looking content because it gets them clicks. That content may be AI-generated or misattributed, meant to ride on the coattails of the day’s biggest news. Disinformation has a financial incentive – more engagement means more advertising revenue for both the creator and the social media platform.
With AI content in particular, there are no truly reliable “AI detectors” to run a video or image through to find out whether it is indeed AI. Some tools, like Nano Banana Pro or Sora, do add a watermark or digital ID, but those identify only content made with those same tools. Beyond that, social media platforms rely on users disclosing their AI use, which someone intentionally trying to fool you isn’t going to bother doing. Running images through a reverse image search has long been recommended by organizations like MediaSmarts and GetCyberSafe; however, NewsGuard has recently reported that the AI-generated summaries appearing at the top of those searches have started to reinforce the disinformation in those images.
One of the best ways to verify that a video or image is real is to see whether it has been reported on by an established news outlet or fact checker. I usually check sources like CTV, CBC, or Reuters (among others). AFP Fact Check is another excellent source. Whether or not you agree with the editorial decisions of a particular outlet or how they frame their reporting, journalists have a long and practiced history of checking the veracity of video and images. These professionals take a great deal of time to geolocate content, verify the identity of who took the image or video, and confirm the event by analyzing footage from multiple angles. This doesn’t mean that they don’t make mistakes, but because they work to a set of standards and practices, those mistakes are at least corrected and disclosed when they occur.
We are all responsible for developing our own discernment when it comes to online fakery. The most important thing we can do is not share what we aren’t sure is real. If you see something online that you don’t have time to verify, just don’t share it. We can have differing opinions about the impact of something that we all understand to be real. But if a video is fake, it’s fake. If an image is fake, it’s fake. And fake is no foundation to build a shared reality on.
Learn more
BBC Verify: US-Israel war with Iran sees AI fakes and disinformation spread online (2026). Edited by Rob Corp, BBC. Last accessed 2026/03/11.
How Riddance does research (2026). Jeremy Carrasco and Mason Broxham, Riddance.ai. Last accessed 2026/03/11.
Iran-Israel news: How AI images are flooding social media (2026). Elianna Lev, CTV News. Last accessed 2026/03/11.
Journalism ethics and standards. Wikipedia. Last accessed 2026/03/11.
National Cyber Threat Assessment 2025-2026 (2024). Government of Canada. Last accessed 2026/03/11.
Real attack or video game? Misinformation and war in the AI age (2026). Lucy Carter and Michael Workman, ABC News. Last accessed 2026/03/11.
Video compilation of unrelated floods falsely linked to Hurricane Melissa in Jamaica (2025). Oluseyi Awojulugbe, AFP Fact Check. Last accessed 2026/03/11.
Correction log
Nothing here yet.