
The Gap Between Research And Robust AI

Table of Contents

  1. What is this repo about?
  2. Blog post
  3. Notebook
  4. Let's connect!

What is this repo about?

Would you say Deep Learning models have become so good that robust AI systems are no longer a dream, but a reality?

Do you think you can safely use the latest models published by researchers for any real-world problem, like self-driving cars or face recognition at airports?

Are you convinced that machines are already better than humans at processing and understanding images?

I was too, until I realized it is possible to deceive a state-of-the-art model, like DeepMind's Perceiver, with a few lines of code.

In this repo, I show you how.
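Curious how few lines it actually takes? Below is a minimal sketch of one classic technique for crafting adversarial examples, the Fast Gradient Sign Method (FGSM). It is an illustration, not the exact code from the notebook: it attacks a stand-in torchvision classifier, whereas the notebook targets DeepMind's Perceiver, and the epsilon value here is an assumption you would tune.

import torch
import torchvision.models as models

# Stand-in model: any differentiable image classifier works the same way.
model = models.resnet18(pretrained=True).eval()
loss_fn = torch.nn.CrossEntropyLoss()

def fgsm_attack(image, label, epsilon=0.01):
    # image: tensor of shape [1, 3, H, W] with pixel values in [0, 1]
    # label: tensor of shape [1] holding the true class index
    image = image.clone().detach().requires_grad_(True)
    loss = loss_fn(model(image), label)
    loss.backward()
    # Nudge every pixel in the direction that increases the loss,
    # then clamp back to the valid pixel range.
    return (image + epsilon * image.grad.sign()).clamp(0, 1).detach()

With a small epsilon the perturbation is barely visible to a human, yet it is often enough to flip the model's prediction.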

Blog post

📝 The Gap Between Research And Robust AI

Notebook

You can run this tutorial on your local computer or directly on Colab.

To run locally

Clone the repo, create and activate a virtual environment for Python >= 3.8, and start Jupyter:

$ git clone https://github.com/Paulescu/fooling_deepmind.git
$ cd fooling_deepmind
$ python3 -m venv venv
$ source venv/bin/activate
(venv) $ pip install jupyter
(venv) $ jupyter notebook

Then open the notebook from the Jupyter file browser.

To run on Google Colab

Click on the "Open in Colab" badge.

Let's connect!

If you want to learn more about real-world ML topics and become a better data scientist:

👉 Subscribe to the datamachines newsletter.

👉🏽 Follow me on Twitter and/or LinkedIn.
