Is this fake?
Erin Kernohan-Berning
5/28/2025 · 4 min read
I was scrolling on Instagram when I saw a reel in which a polar bear was walking through the suburbs. The shaky video showed the bear walking between two parked cars, over a curb, and onto a well-manicured lawn. The video then cut to a man holding an open Jumanji board game, sheepishly closing the lid in a mea culpa for summoning such a large beast to his neighbourhood. The video was clearly a joke, and all in good fun for fans of the Jumanji movie franchise. It was also entirely fake, generated by an AI video generator.
The comments on the video largely focused on the joke itself – cracking wise about the foolishness of playing magical board games that will only invite trouble during already troubling times. However, a few comments earnestly pointed out the dangers of polar bears and how the non-existent camera operator was in peril, clearly taken in by the AI fakery. Even fewer comments keyed in on the video being AI-generated, pointing out giveaways like poorly rendered curbs and vehicles that were out of perspective.
Besides video creation, creators can use AI-powered facial filters that reconstruct their own faces. A few years ago, a glamour filter on TikTok that adorned the user’s face with makeup and contouring sparked controversy for its potential to reinforce unattainable beauty standards. Today, there are very convincing filters for all sorts of purposes. One creator, likely in her 20s, dispenses humorous quips about life while using a granny filter that quite convincingly makes her appear to be in her 80s. Another recounts episodes from her childhood using a filter that resculpts her face to make her look like a six-year-old. In both cases the filters are used for comedic purposes; the audience is aware of the fakery and in on the joke.
AI video tools are also used to spread disinformation and scams. Social media platforms are riddled with deepfake videos that are disseminated through ad networks, featuring AI-generated images and video of well-known public figures. Most recently, a faked video of Prime Minister Mark Carney extolling the virtues of cryptocurrency has been making the rounds. Other fake ads have purloined the likenesses of Justin Trudeau, Pierre Poilievre, and Jagmeet Singh – demonstrating that chicanery is truly non-partisan. Often these ads link to cryptocurrency scam websites where, for the promise of risk-free riches, unwitting victims hand over their cash and payment details.
Generative AI has ushered in an era of extremely easy and accessible fakery. Past hoaxes like grainy doctored photos of the Loch Ness monster, UFOs that were just hubcaps suspended from fishing line, and even the Photoshopped images of the early aughts took skill and time to be at all effective. Now, with nothing more than an idea and a prompt, anyone can create something that isn’t real but looks real. With a handful of keystrokes, someone’s likeness can be used without their consent to create a video of “them” perpetrating a scam, spreading disinformation, or worse.
Legislation is slow to catch up with technology. Carney pledged during the election to criminalize the creation of non-consensual pornographic images, which has been a growing problem. Scam ads using deepfakes are against most sites’ terms of service; however, because they are delivered through vast and nebulous ad networks, they often reach thousands of users before being taken down – if they are taken down at all. Actress Jamie Lee Curtis had to publicly tag Meta CEO Mark Zuckerberg’s Instagram account asking to have deepfakes of her removed after efforts through “proper channels” were unsuccessful.
Unfortunately, when viewing online content, asking “Could this be fake?” is a reasonable first response. Generative AI has made fake images and videos more convincing and harder to spot. The most frightening danger of deepfakes, as with most AI-generated fakery, is that they can undermine a shared understanding of reality. When you start from a position where everything could be fake, understanding what is real becomes much harder. To combat this, we will all need to develop our own discernment when viewing content online. The good news is that this is a skill we can all build with practice.
Rather than scouring images and videos for AI tells like muddled text and extra fingers, a more technologically resilient practice is to use our information evaluation skills. Does the image or video come from a reliable source? Can you trace the video back to its original human creator? What is the apparent purpose of the video? Is it trying to sell you something or prompt you to provide sensitive information? Does it elicit a strong emotional reaction? These initial questions should trigger your inner skeptic and help you discern whether something is real or fake.
Learn more
This article is real - but AI-generated deepfakes look damn close and are scamming people. 2024. Nick Logan. (CBC News) Last accessed 2025/05/28.
Meta Removes Fake AI Ads With Jamie Lee Curtis After She Appealed to Mark Zuckerberg to Take Down 'Bulls--' Videos. 2025. Todd Spangler. (Variety) Last accessed 2025/05/28.
Meta Blocked News in Canada. Ads for Scams Are Taking Its Place. 2025. Thomas Seal. (Financial Post) Last accessed 2025/05/28.
This Canadian pharmacist is key figure behind world's most notorious deepfake porn site. 2025. Eric Szeto, Jordan Pearson and Ivan Angelovski. (CBC News) Last accessed 2025/05/28.
Danish MP calls for extradition of Canadian behind notorious AI porn site. 2025. Eric Szeto and Jordan Pearson. (CBC News) Last accessed 2025/05/28.
Fake election news ads are luring people into investment schemes. We got some taken down. 2025. Christian Paas-Lang, Nora Young and Ivan Angelovski. (CBC News) Last accessed 2025/05/28.
Correction log
Nothing here yet.