FGSM Brute Force Attack

This project demonstrates a Fast Gradient Sign Method (FGSM) brute force attack on image classification models hosted on Hugging Face 🤖. It generates adversarial examples by perturbing input images, attempting to force the model into misclassifying them.

It supports targeted attacks by specifying a desired misclassification label and allows testing across different epsilon values.
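
At its core, FGSM perturbs each pixel by at most epsilon in the direction given by the sign of the loss gradient; for a targeted attack, the step is taken against that gradient so the loss with respect to the desired label decreases. Below is a minimal sketch of that single step in PyTorch, assuming a classifier that maps an image tensor directly to logits (Hugging Face transformers models typically wrap logits in an output object, so adapt accordingly):

import torch
import torch.nn.functional as F

def fgsm_targeted_step(model, x, target_idx, epsilon=0.05):
    # x: image tensor of shape (1, C, H, W) with values in [0, 1]
    x = x.clone().detach().requires_grad_(True)
    logits = model(x)                       # assumed: model(x) returns raw logits
    loss = F.cross_entropy(logits, torch.tensor([target_idx]))
    loss.backward()
    # Targeted FGSM: step *against* the gradient sign to push the
    # prediction toward the desired (wrong) class.
    x_adv = x - epsilon * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()

The brute-force aspect then amounts to repeating this step, or sweeping it across epsilon values, until the model's prediction flips to the target class.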

Installation

pip install -r requirements.txt

Usage

The script takes the following arguments (an example invocation follows the list):

  --model: The name of the Hugging Face model.
  --image: The path to the input image.
  --epsilon: The epsilon value for the attack (default: 0.05).
  --size: The size of the input image for the model.
  --target: The target class name.
  --d: The directory to save adversarial images.
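
For example, a targeted run might look like the following. The entry-point script name attack.py, the model identifier, and the class label below are illustrative assumptions, not values taken from this repository; substitute the actual script and model you want to attack.

python attack.py --model google/vit-base-patch16-224 --image input.jpg --epsilon 0.05 --size 224 --target "goldfish" --d adversarial_outputs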
