Malicious ML Models on Hugging Face Leverage Broken Pickle Format to Evade Detection

Cybersecurity researchers have uncovered two malicious machine learning (ML) models on Hugging Face that leveraged an unusual technique of "broken" pickle files to evade detection.
"The pickle files extracted from the mentioned PyTorch archives revealed the malicious Python content at the beginning of the file," ReversingLabs researcher Karlo Zanki said in a report shared with The Hacker News.
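
The reason a "broken" pickle can still do damage is that deserialization is opcode-driven: the loader executes instructions one by one as it reads the stream, including callable invocations requested via `__reduce__`, so a payload serialized at the very start runs before the loader ever reaches a corrupted or truncated section. The benign sketch below illustrates that general behaviour; the `Payload` class and the truncation step are hypothetical stand-ins for illustration, not the actual code found in the flagged models.

```python
# Minimal, benign sketch of why a broken pickle still executes code:
# the unpickler processes opcodes sequentially, so a payload placed
# early in the stream runs before the loader hits the broken portion.
import pickle


class Payload:
    # __reduce__ tells pickle to call the returned callable with the given
    # arguments at load time; real malware would use something like
    # os.system here instead of a harmless print.
    def __reduce__(self):
        return (print, ("payload executed during unpickling",))


# Serialize the payload, then "break" the stream by dropping the trailing
# STOP opcode so the result is no longer a valid pickle.
blob = pickle.dumps(Payload(), protocol=2)[:-1]

try:
    pickle.loads(blob)
except Exception as exc:
    # By the time this error surfaces, the payload above has already run.
    print(f"deserialization still failed: {exc!r}")
```

Running the sketch prints the payload's message first and only then reports a deserialization error. In other words, a stream that fails to load cleanly is not necessarily inert: everything serialized before the breakage has already been executed, which is what makes the format attractive for slipping past scanners that assume an unparseable pickle is harmless.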

