How Canadians are losing money to artificial intelligence at an alarming rate
“This is your grandson. I was just in a car crash, and I’m being held in jail. I need $5,000 to make bail.”
After uploading less than 30 seconds of me talking, I was able to create a half-decent clone of my voice that could say this exact quotation. While it might not fool everyone, I know that it would be enough to convince my grandma.
I used the website Speechify to do this, but there are many other options to choose from that are just as easy to use, and most importantly: free.
And chances are, this technology is only going to get better and better.
Like any hot new tech, it’s no surprise that people have found new and elaborate ways to commit crimes with it. As the annual Fraud Prevention Month has come to an end, it’s time to look at whether Canadians are prepared for the changing landscape of financial scams powered by artificial intelligence.
AI-powered scams
Earlier this year, Hong Kong police reported that a finance employee was tricked out of $35 million CAD of his firm’s money. The culprits? His bosses — or so he thought. As it turned out, the employee was not on a video conference call with senior executives as he believed, but rather pre-recorded deepfakes of them.
Deepfakes are “media manipulations that are based on advanced artificial intelligence (AI), where images, voices, videos, or text are digitally altered or fully generated by AI,” according to the Canadian government, which also calls them “a real threat to a Canadian future.” It has become incredibly easy to look and sound like somebody — anybody — else.
But this phenomenon isn’t confined to high-level corporate fraud. It might be happening to your grandparents.
In late February, Nanaimo RCMP were flooded with reports from residents who had received fraudulent phone calls claiming that their family members were being held in jail and needed money to be released. AI had been used to clone the voices of the victims’ loved ones.
“While the scenario varied somewhat, it involved their grandson being arrested after a motor vehicle accident involving a pregnant woman. To be released from jail, a large sum of money had to be delivered immediately,” reads the RCMP release. Some victims lost $3,000 to $8,000.
Two individuals were arrested at the Vancouver International Airport for scamming over $20,000 from Saanich residents using similar tactics. It is unclear whether they are connected to the cases from Nanaimo.
“There could be 10 to 20 people in any given location calling people all day long,” said Gary O’Brien, a Nanaimo RCMP spokesperson.
This type of scam, commonly known as the ‘grandparent scam,’ isn’t anything new, but advancing technology allows it to be executed more convincingly than ever before.
“Fraud has rapidly evolved over the last 20 years and has become much more sophisticated,” said Josephine Palumbo, a deputy commissioner at Canada’s Competition Bureau.
“We’ve transitioned from telemarketing scams to increasingly convincing AI-generated fraud,” added Palumbo. “Fraudsters are incorporating artificial intelligence into old schemes to make them more sophisticated and convincing.”
A report from Statista and Sumsub shows that Canada experienced a 477 per cent increase in deepfake-related fraud cases from 2022 to 2023. While that is a staggering number, it pales in comparison to how hard some other countries have been hit. The United States, for example, saw a 3,000 per cent increase in the same time frame.
This means Canadians are losing money to fraud — a lot of it. Reported losses reached $567 million for Canadians in 2023. However, the Canadian Anti-Fraud Centre (CAFC) says that only 5–10 per cent of Canadian fraud victims report it.
AI technology isn’t just being used to scare older people out of their money with cloned voices. Investment opportunity scams are becoming increasingly detrimental to Canadians as well. Promises of making a fortune off a hot new cryptocurrency — no matter how tempting — are something to stay away from.
Sammy Wu, a manager of investigations at the British Columbia Securities Commission (BCSC), explained that as AI scams advance, they are becoming harder for the BCSC to detect. Poorly designed websites riddled with grammatical errors are no longer the hallmark of a scam.
“[Initially] we [saw] a lot of spelling errors and we [saw] a lot of mistakes,” said Wu. “Now, they’re very polished, and I think a lot of these have AI components.”
As these bogus investment opportunities become more believable, it’s easier now than ever to fall for something that seems too good to be true. To make matters even more complicated, fraudsters are also using deepfaked testimonials of celebrities to promote what they’re peddling.
A convincingly accurate Elon Musk could tell you about a new opportunity guaranteed to make you money, and before you know it your initial investment is gone. What’s easier than stealing money from someone? Having them give it to you themselves.
As the barrier to entry for this technology disappears, Wu says entities like the BCSC are struggling to keep up. While AI-generated websites can be shut down, his description of the problem evokes a game of Whack-a-Mole.
“One shuts down, 10 pop up, 20 pop up. You can never catch up,” said Wu. “The amount of losses is tremendous.”
The unfortunate truth is that if you do fall victim to a scam, chances are, that money is gone for good. In the last three years, the CAFC has only been able to help recover $6.7 million, or about one per cent of what was lost in 2023 alone.
Instead, experts say the best thing to do is to educate yourself to make sure that it doesn’t happen again.
The legal risks of artificial intelligence
While AI is used by some criminals to scam people out of money, it is also finding a place on the other side of the law. But as this technology advances at an accelerated rate, the rules and regulations have been left playing catch-up.
The British Columbia Law Institute released a “consultation paper on artificial intelligence and civil liability” last year. It states that “Artificial intelligence has brought about a very new context, one in which self-directing machines acting autonomously may cause harm to humans, to their property, or other interests protected by law.” As the ethics of using AI in a setting with real legal implications continue to become increasingly complicated, the rules and regulations become even more important.
A B.C. lawyer recently apologized for using the popular AI chatbot ChatGPT to help with her research for a case. The AI suggested two cases for her to cite, but they turned out to be pure fiction.
“These models are known to hallucinate, to give misleading answers, [and] just completely make up answers,” said Payam Mousavi, an applied research scientist at the Alberta Machine Intelligence Institute.
While the term “hallucination” has become commonplace when referring to incorrect information generated by AI, it attributes human traits to what is, in reality, just a collection of algorithms.
“When you chat with a chatbot and it sounds exactly like a human, it seems to understand,” said Mousavi. “If you expect them to be exactly like humans and then they fail in catastrophic and sometimes funny ways, you realize, ‘Okay, I’m just dealing with a machine.’”
But when a large company employs an AI chatbot, these “funny” mistakes can mean real trouble — or even a lawsuit. Air Canada has already felt the effects of this.
In late 2022, the airline’s chatbot told a customer that he could receive a bereavement discount for his flights. After booking, he was told that he had to have applied for the discount prior to his flight, which directly contradicted what he was told by the AI.
While Air Canada only had to pay the customer $812, this verdict shows that trusting an AI to speak for you can leave you on the hook for the information it provides.
In a world where ‘fake news’ and disinformation run rampant, public distrust perpetuated by inaccurate AI could be a real problem.
“Once you start distrusting [AI], that might actually transfer to distrusting people too,” said Mousavi. “I don’t know if I’m interacting with a machine or a human being. So basically my default mode, I feel like it’s slowly changing to assume all of it is garbage unless proven otherwise.”
While the Canadian government has encouraged employees to use AI tools such as ChatGPT in federal institutions, it also warns of the problems that can come from it.
The government’s guide on AI suggests “generating a summary of client information” as one way to use the technology. It makes you wonder how much of your personal information a government employee may be inputting into one of these constantly evolving and learning models. As we’re still learning about the capabilities of this new technology, the consequences of using it are unclear.
Protecting yourself
If you have been scammed or believe there has been an attempt to scam you, reporting it to the Canadian Anti-Fraud Centre, RCMP, or a provincial organization is an important step in helping stop it from happening to others.
But how do you prevent this from happening in the first place in a world where you can no longer trust your eyes or ears? Gone are the days when robotic voices and hands with too many fingers were tell-tale signs of AI tomfoolery.
Just like any scam, it comes down to looking at the bigger picture, and ultimately trusting your gut.
Cryptocurrency investment scams that promise high rates of return work by making you feel like you’re missing out, pushing you to invest quickly without thinking about any red flags. Similarly, something like the grandparent scam preys on rash decision-making in a time of emergency.
“They create a sense of urgency,” said Wu. “Those are all classic signs.”
University students, especially international students, looking to make a little extra cash are prime targets for another type of financial scam: becoming a money mule. Money mules are people used to launder money for fraudsters, often without even knowing it.
“Someone approaches you and says, ‘Hey, we have a part time job here for you. What you can do is just open an account, make some money. I’ll just use your account to flow some money in and out,’” said Wu. “We do find a lot of students, they’re just not aware and they become victimized indirectly.”
As AI advances, it’s up to you to keep yourself and the people in your life educated and diligent on the ways that it can be used against you. If you’re not looking out for yourself, no one will.
Whether you’ve already integrated AI into your everyday life or refuse to interact with it at all, it’s here to stay.
I’ve now started to think about the content I have readily available to anyone on social media. Any Instagram videos I uploaded for fun could be fuel to create a convincing AI replica of myself. Our digital footprint is an ever-evolving database of personal information, ready to be used nefariously by anyone with a Wi-Fi signal.