Deepfakes: Deception in the New Age

You’ve likely heard about Artificial Intelligence (AI) and how it’s transforming industries by improving efficiency, reducing errors and enabling smarter operations. But while AI can bring many benefits, it also poses new risks, especially when used maliciously.
One such threat is the rise of “deepfakes” – highly convincing fake audio, images or videos created using AI for the purpose of deception. As your trusted banking partner, we want to help you understand what deepfakes are, how they’re being used by cybercriminals, and what you can do to protect yourself.
What are Deepfakes?
AI can create synthetic images, audio or videos that look real. A deepfake is this kind of synthetic content created for the purpose of deceiving others. The name “deepfake” combines “deep learning” (the AI technique behind it) and “fake”.
Deepfake Technologies:
- Face Swapping – Replacing a person’s face with another in a video or photo.
- Voice Cloning – Using AI to mimic someone’s voice.
- Lip Syncing – Making a person’s lips appear to match a different audio track.
What makes deepfakes especially concerning is how easy it has become for fraudsters to impersonate people you know and trust, including colleagues, family members or even financial representatives.
The most damaging deepfakes depict someone you know, or someone close to you knows, doing or saying things they never did. For example, a scammer might clone a voice that sounds like a company executive or a friend to request sensitive information or authorize a transaction, and the result can look and sound convincingly real.
Dangers of Deepfakes:
- Identity Theft – Deepfakes can allow criminals to take over accounts under someone else’s name or likeness.
- Theft of Sensitive Information – Deepfakes can be used to impersonate someone and steal their account information, login credentials or credit card details.
- Reputational Harm – Misusing your image or voice in fabricated, inappropriate or misleading content.
- Political Manipulation – Fabricated content used to sway public opinion.
- Harassment or Blackmail – Misusing a person’s image in explicit content to damage their reputation.
How to spot Deepfakes:
While deepfakes are becoming more convincing, there are still red flags to watch for. Focus on the context and ask yourself whether the image, audio or video makes sense. Here are some tips:
- Trust your instincts. Does something about the interaction feel fake? Is there unusual urgency to the discussion, or is the other person behaving strangely?
- Does the conversation make sense? Is the other person asking for confidential information they should already have access to? For example, are they asking for login credentials or sensitive account or customer information?
- Verify through trusted channels. If you receive a suspicious message or call, contact the person through a verified phone number or another trusted method – not the number or link they provide.
At Fieldpoint Private, we’re continually updating our security measures to stay ahead of emerging threats like deepfakes. We also rely on our relationship with you to help protect your accounts. If anything ever seems suspicious, please don’t hesitate to reach out to us directly. Thank you for continuing to bank securely with us.