
This is my B.Tech thesis project, which attacks neural network models so that they classify images incorrectly. It aims to expose the vulnerability and unreliability of various image-processing models.


AdversarialAttacks

B.Tech Thesis Project - Adversarial Attacks on Neural Networks

Our team, working under the mentorship of Dr. Saumya Bhadauria and Dr. Yash Daultani, developed adversarial attacks on existing deep learning models by modifying input images so that they are classified incorrectly.
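The README does not name the exact attack used in the thesis, so the following is only an illustrative sketch of one common approach, the Fast Gradient Sign Method (FGSM), written in PyTorch. The model, the input ranges, and the epsilon value are all assumptions, not details taken from this project.

```python
import torch.nn.functional as F

def fgsm_attack(model, images, labels, epsilon=0.03):
    """Return adversarial copies of `images` using the Fast Gradient Sign Method.

    Assumptions (illustrative only): `model` is a differentiable classifier,
    `images` is an (N, C, H, W) tensor with values in [0, 1], `labels` holds the
    true class indices, and `epsilon` is the perturbation budget.
    """
    images = images.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(images), labels)
    loss.backward()
    # Step in the direction that increases the loss, then clip back to a valid image.
    perturbed = images + epsilon * images.grad.sign()
    return perturbed.clamp(0.0, 1.0).detach()
```

The single gradient-sign step is what makes FGSM cheap: one forward and one backward pass per batch produce perturbations that are small in magnitude but often enough to flip the predicted class.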

We also suggested defenses against these adversarial attacks in order to make the models robust, so that they can classify perturbed images correctly.
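Again as a hedged illustration, since the README does not name the specific defense: one standard option is adversarial training, where each batch is augmented with perturbed copies so the model learns to classify them correctly. The sketch below reuses the `fgsm_attack` helper above; the 50/50 mixing ratio, epsilon, and optimizer are assumptions.

```python
def adversarial_training_step(model, optimizer, images, labels, epsilon=0.03):
    """One training step on a 50/50 mix of clean and FGSM-perturbed images.

    The mixing ratio and epsilon are illustrative; the thesis may use a
    different defense or different hyperparameters.
    """
    model.eval()   # freeze batch-norm/dropout statistics while crafting the attack
    adv_images = fgsm_attack(model, images, labels, epsilon)
    model.train()

    optimizer.zero_grad()  # clear gradients accumulated while generating the attack
    loss = 0.5 * F.cross_entropy(model(images), labels) \
         + 0.5 * F.cross_entropy(model(adv_images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```

Training on both clean and perturbed inputs trades a little clean accuracy for noticeably better accuracy on adversarially perturbed images.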

Here are some screenshots of our model: Screenshot (164), Screenshot (165), Screenshot (166), Screenshot (167).
