Sécurité Helvétique News | Amyris
network vulnerability

How to Spot AI Audio Deepfakes at Election Time

webmaster | Last updated: 2024/04/28 at 1:07 PM


Contents

  • What makes audio deepfakes so hard to spot?
  • How to spot audio deepfakes.
  • Living in a world of AI audio deepfakes.
  • Introducing McAfee+

We’ve said it several times in our blogs — it’s tough knowing what’s real and what’s fake out there. And that’s absolutely the case with AI audio deepfakes online. 

Bad actors of all stripes have found out just how easy, inexpensive, and downright uncanny AI audio deepfakes can be. With only a few minutes of original audio, seconds even, they can cook up phony audio that sounds like the genuine article — and wreak all kinds of havoc with it. 

A few high-profile cases in point, each politically motivated in an election year where the world will see more than 60 national elections: 

  • In January, thousands of U.S. voters in New Hampshire received an AI robocall that impersonated President Joe Biden, urging them not to vote in the primary.  
  • In the UK, more than 100 deepfake social media ads impersonated Prime Minister Rishi Sunak on the Meta platform last December.[i]
  • Similarly, the 2023 parliamentary elections in Slovakia spawned deepfake audio clips that featured false proposals for rigging votes and raising the price of beer.[ii]

Yet deepfakes have targeted more than election candidates. Other public figures have found themselves attacked as well. One example comes from Baltimore County in Maryland, where a high school principal has allegedly fallen victim to a deepfake attack.  

The incident involves an offensive audio clip resembling the principal’s voice that was posted on social media, where it spread rapidly. The school’s union has since stated that the clip was an AI deepfake, and an investigation is ongoing.[iii] In the wake of the attack, at least one expert in the field of AI deepfakes said the clip is likely a deepfake, citing “distinct signs of digital splicing; this may be the result of several individual clips being synthesized separately and then combined.”[iv]

And right there is the issue. It takes expert analysis to clinically detect if an audio clip is an AI deepfake. 

What makes audio deepfakes so hard to spot?  

Audio deepfakes give off far fewer clues than the relatively easier-to-spot video deepfakes out there. Currently, video deepfakes typically give off several clues, like poorly rendered hands and fingers, off-kilter lighting and reflections, a deadness to the eyes, and poor lip-syncing. Audio deepfakes suffer none of those issues, which indeed makes them tough to spot.

The implications of AI audio deepfakes online present themselves rather quickly. In a time where general awareness of AI audio deepfakes lags behind the availability and low cost of deepfake tools, people are more prone to believe an audio clip is real. Until “at home” AI detection tools become available to everyday people, skepticism is called for.  

Just as “seeing isn’t always believing” on the internet, “hearing isn’t always believing” applies on the internet as well.

How to spot audio deepfakes. 

The people behind these attacks have an aim in mind. Whether it’s to spread disinformation, ruin a person’s reputation, or conduct some manner of scam, audio deepfakes look to do harm. In fact, that intent to harm is one of the signs of an audio deepfake, among several others. 

Listen to what’s actually being said. In many cases, bad actors create AI audio deepfakes designed to build strife, deepen divisions, or push outrageous lies. It’s an age-old tactic. By playing on people’s emotions, they ensure that people will spread the message in the heat of the moment. Is a political candidate asking you not to vote? Is a well-known public figure “caught” uttering malicious speech? Is Taylor Swift offering you free cookware? While not an outright sign of an AI audio deepfake, it’s certainly a sign that you should verify the source before drawing any quick conclusions, and certainly before sharing the clip.

Think of the person speaking. If you’ve heard them speak before, does this sound like them? Specifically, does their pattern of speech ring true, or does it pause in places it typically doesn’t … or speed up and slow down in unusual ways? AI audio deepfakes might not always capture these nuances.

Listen to their language. What kind of words are they saying? Are they using vocabulary and turns of phrase they usually don’t? An AI can duplicate a person’s voice, yet it can’t duplicate their style. A bad actor still must write the “script” for the deepfake, and the phrasing they use might not sound like the target. 

Keep an ear out for edits. Some deepfakes stitch audio together. AI audio tools tend to work better with shorter clips, rather than feeding them one long script. Once again, this can introduce pauses that sound off in some way and ultimately affect the way the target of the deepfake sounds. 

Is the person breathing? Another marker of a possible fake is when the speaker doesn’t appear to breathe. AI tools don’t always account for this natural part of speech. It’s subtle, yet when you know to listen for it, you’ll notice it when a person doesn’t pause for breath. 
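As a rough illustration of that last point, the absence of breath-like pauses is something you can even check programmatically. The sketch below is a toy heuristic, not a real deepfake detector: it assumes a mono audio signal already loaded as a NumPy array and simply measures the longest stretch of audio that contains no low-energy frame where a breath pause could fall.

```python
import numpy as np

def longest_run_without_pause(signal, sr, frame_ms=25, silence_db=-35):
    """Return the longest stretch of continuous sound (in seconds)
    with no low-energy frame that could be a breath pause.
    A crude heuristic sketch only, not a deepfake detector."""
    frame_len = int(sr * frame_ms / 1000)
    n_frames = len(signal) // frame_len
    frames = signal[:n_frames * frame_len].reshape(n_frames, frame_len)
    # Per-frame RMS energy, converted to dB relative to the loudest frame.
    rms = np.sqrt(np.mean(frames.astype(np.float64) ** 2, axis=1))
    ref = rms.max() if rms.max() > 0 else 1.0
    db = 20 * np.log10(np.maximum(rms / ref, 1e-10))
    loud = db > silence_db  # True where the frame sounds like speech
    longest = run = 0
    for is_loud in loud:
        run = run + 1 if is_loud else 0
        longest = max(longest, run)
    return longest * frame_ms / 1000.0

# Synthetic demo: two 2 s bursts of "speech" (noise) with a 0.3 s pause between.
sr = 16000
rng = np.random.default_rng(0)
speech = rng.normal(0, 0.3, sr * 2)
pause = np.zeros(int(sr * 0.3))
clip = np.concatenate([speech, pause, speech])
print(round(longest_run_without_pause(clip, sr), 2))  # prints 2.0
```

An unnaturally long run with no pause at all would be one small hint worth noting, though real-world clips with background noise would need a far more careful analysis.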

Living in a world of AI audio deepfakes. 

It’s upon us. Without alarmism, we should all take note that not everything we see, and now hear, on the internet is true. The advent of easy, inexpensive AI tools has made that a simple fact. 

The challenge this presents is that it’s largely up to us as individuals to sniff out a fake. Yet again, it comes down to our personal sense of internet street smarts. That includes a basic understanding of AI deepfake technology, what it’s capable of, and how fraudsters and bad actors put it to use. Plus, a healthy dose of level-headed skepticism, both now in this election year and moving forward.

[i] https://www.theguardian.com/technology/2024/jan/12/deepfake-video-adverts-sunak-facebook-alarm-ai-risk-election

[ii] https://www.bloomberg.com/news/articles/2023-09-29/trolls-in-slovakian-election-tap-ai-deepfakes-to-spread-disinfo

[iii] https://www.baltimoresun.com/2024/01/17/pikesville-principal-alleged-recording/

[iv] https://www.scientificamerican.com/article/ai-audio-deepfakes-are-quickly-outpacing-detection/

Introducing McAfee+

Identity theft protection and privacy for your digital life






© 2023 Sécurité Helvétique NEWS by Amyris Sarl. All rights reserved.