
Laboratory of AI Security Research (LAiSR)

👋 **Welcome to the Laboratory of AI Security Research (LAiSR)**

🎤 Who are we?

Located at Miami University and led by Dr. Samer Khamaiseh, the LAiSR research group is passionate about exploring and addressing the evolving challenges within the realm of AI security. Our AI research laboratory is at the forefront of cutting-edge research to fortify AI models against adversarial attacks, enhance their robustness, and ensure their reliability in real-world scenarios.

❓ Why AI security?

  • Emerging Threats: As AI systems become more pervasive, they introduce new vulnerabilities such as adversarial attacks and privacy breaches. Our research group plays a pivotal role in uncovering these vulnerabilities and devising robust defenses.
  • Safeguarding Critical Systems: AI is increasingly integrated into critical infrastructure, healthcare, finance, and defense. Ensuring the security of these systems is non-negotiable. Rigorous research helps prevent catastrophic failures and protects lives and livelihoods.
  • Ethical Implications: AI decisions impact individuals and societies. Bias, fairness, and transparency are ethical concerns. Research informs guidelines and policies that promote responsible AI deployment, minimizing harm and maximizing benefits.

🔎 Research Focus

  • Adversarial Attacks: Our goal is to explore new adversarial attacks against AI models. The LAiSR group has introduced three novel adversarial attacks: Target-X, Fool-X, and T2I-Nightmare.
  • Adversarial Training: Exploring defense methods against adversarial attacks. Recently, LAiSR introduced the "VA: Various Attacks Framework for Robust Adversarial Training" and "ADT++: Advanced Adversarial Distributional Training with Class Robustness" adversarial training methods, which improve the clean accuracy, robust accuracy, and robustness generalization of AI models beyond the baseline defense methods.
  • GEN-AI: Investigate state-of-the-art (SOTA) methods to protect user images from being misused by diffusion models. For example, the LAiSR group proposed ImagePatriot (under review), which prevents diffusion models from maliciously editing images.
  • GEN-AI Robustness: Explore pre- and post-generation filters that protect diffusion models from generating Not-Safe-for-Work (NSFW) content. T2I-Vanguard, a post-generation filter for safe text-to-image (T2I) diffusion model generation, prevents T2I attacks from compromising the generation process to produce NSFW images.

🚀 Research Projects

Below, we list some of the published and ongoing research projects at the LAiSR lab.

AI Robustness Testing Kit (AiR-TK) is an AI testing framework built on PyTorch that enables the AI security community to evaluate AI models against adversarial attacks easily and comprehensively. AiR-TK supports adversarial training, the de facto technique for improving the robustness of AI models against adversarial attacks. Having easy access to state-of-the-art adversarial attacks and baseline adversarial training methods in one place will help the AI security community replicate, reuse, and improve upcoming attack and defense methods.
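
As background, the snippet below is a minimal sketch in plain PyTorch of the kind of clean-versus-robust evaluation AiR-TK packages; the toy model, data, and `fgsm` helper are illustrative stand-ins, not AiR-TK's actual API.

```python
# Minimal sketch of a robustness evaluation: craft an FGSM perturbation,
# then compare clean vs. robust accuracy. Everything here is a stand-in.
import torch
import torch.nn as nn

def fgsm(model, x, y, eps=8 / 255):
    """One-step FGSM: perturb x along the sign of the loss gradient."""
    x = x.clone().detach().requires_grad_(True)
    nn.functional.cross_entropy(model(x), y).backward()
    return (x + eps * x.grad.sign()).clamp(0, 1).detach()

model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))  # toy classifier
x, y = torch.rand(16, 3, 32, 32), torch.randint(0, 10, (16,))    # toy CIFAR-like batch

model.eval()
x_adv = fgsm(model, x, y)
clean_acc = (model(x).argmax(1) == y).float().mean().item()
robust_acc = (model(x_adv).argmax(1) == y).float().mean().item()
print(f"clean: {clean_acc:.2%}  robust: {robust_acc:.2%}")
```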

By adding a little magic to a few pixels, ImagePatriot protects your images from being manipulated by diffusion models.
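
ImagePatriot's method is under review and not yet public; purely as an illustration of the general image-immunization idea, the sketch below adds a small, bounded perturbation that pushes an image's latent representation away from the original, using a stand-in encoder.

```python
# Hedged sketch of image immunization: find a near-invisible perturbation that
# maximizes latent drift, so downstream editing models see a "different" image.
# The encoder is a stand-in; this is not ImagePatriot's actual algorithm.
import torch
import torch.nn as nn

encoder = nn.Sequential(nn.Conv2d(3, 4, 3, stride=2, padding=1), nn.Flatten())

def immunize(x, steps=40, eps=4 / 255, lr=1 / 255):
    z_ref = encoder(x).detach()                       # latent of the clean image
    delta = torch.empty_like(x).uniform_(-eps, eps).requires_grad_(True)
    for _ in range(steps):
        loss = -(encoder(x + delta) - z_ref).pow(2).mean()  # minimize = max drift
        loss.backward()
        with torch.no_grad():
            delta -= lr * delta.grad.sign()           # step against the loss
            delta.clamp_(-eps, eps)                   # keep it imperceptible
        delta.grad.zero_()
    return (x + delta).clamp(0, 1).detach()

protected = immunize(torch.rand(1, 3, 64, 64))        # stand-in user image
```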

JPA, NightShade, and MMA are recent attacks against text-to-image diffusion models that generate Not-Safe-for-Work (NSFW) images despite pre/post-generation filters. T2I-Vanguard is an ongoing project that aims to shield T2I models from being compromised by such attacks.
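
T2I-Vanguard's design is not yet published; the sketch below only shows the general shape of a post-generation filter, which scores each generated image with a safety classifier before release. The classifier and threshold are stand-ins.

```python
# General shape of a post-generation filter: score every generated image and
# release only those below an NSFW threshold. All components are stand-ins.
import torch
import torch.nn as nn

safety_head = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 1))  # toy scorer

@torch.no_grad()
def release_or_block(images, threshold=0.5):
    """Return the images whose NSFW score falls below the threshold."""
    scores = torch.sigmoid(safety_head(images)).squeeze(1)
    return images[scores < threshold], scores

generated = torch.rand(4, 3, 64, 64)     # stand-in diffusion outputs
safe, scores = release_or_block(generated)
print(f"released {safe.shape[0]} of {generated.shape[0]} images")
```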

Fool-X is an algorithm that generates effective adversarial examples with the least perturbation needed to fool state-of-the-art image classification neural networks. More details are available on the project site. (Under review at IEEE BigData 2024.)
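
Fool-X's algorithm is under review, so the sketch below is only a generic illustration of the "smallest perturbation that flips the label" idea: a binary search over the FGSM step size.

```python
# Generic minimal-perturbation illustration (not Fool-X itself): binary-search
# the smallest FGSM budget eps that changes the model's predictions.
import torch
import torch.nn as nn

def min_eps_fgsm(model, x, y, lo=0.0, hi=0.5, iters=12):
    x_req = x.clone().detach().requires_grad_(True)
    nn.functional.cross_entropy(model(x_req), y).backward()
    direction = x_req.grad.sign()                 # fixed FGSM direction
    with torch.no_grad():
        for _ in range(iters):
            mid = (lo + hi) / 2
            x_adv = (x + mid * direction).clamp(0, 1)
            if (model(x_adv).argmax(1) != y).all():
                hi = mid                          # still flips: shrink budget
            else:
                lo = mid                          # too weak: grow budget
    return hi

model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
eps = min_eps_fgsm(model, torch.rand(2, 3, 32, 32), torch.randint(0, 10, (2,)))
print(f"smallest flipping eps ≈ {eps:.4f}")
```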

Target-X is a novel and fast method for constructing targeted adversarial images on large-scale datasets that can fool state-of-the-art image classification neural networks. The project is led by Dr. Samer Khamaiseh with ongoing contributions from Deirdre Jost and Steven Chiacchira; more info is available in the project repo.
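
Target-X's full algorithm lives in the project repo; the sketch below only illustrates what "targeted" means: iteratively stepping the input toward a chosen target class by descending the cross-entropy loss against that target label.

```python
# Minimal targeted-attack illustration (not Target-X itself): bounded PGD-style
# steps that pull the input toward a chosen target class.
import torch
import torch.nn as nn

def targeted_attack(model, x, target, eps=8 / 255, steps=10, lr=2 / 255):
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        nn.functional.cross_entropy(model(x + delta), target).backward()
        with torch.no_grad():
            delta -= lr * delta.grad.sign()   # descend: move *toward* target
            delta.clamp_(-eps, eps)           # stay within the budget
        delta.grad.zero_()
    return (x + delta).clamp(0, 1).detach()

model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
x_adv = targeted_attack(model, torch.rand(4, 3, 32, 32), torch.full((4,), 7))
print(model(x_adv).argmax(1))                 # ideally all 7s on a trained model
```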

ADT++ provides a fast adversarial training method that increases the generalization robustness of AI models against more adaptive adversarial attacks such as Target-X and AutoAttack (AA).
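
ADT++ itself is a distributional method; as background, this is a sketch of the plain adversarial training loop such methods build on, with a toy model, data, and inner FGSM attack as stand-ins.

```python
# Plain adversarial training (background for ADT++, not ADT++ itself):
# craft adversarial examples on the fly and train on them.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
opt = torch.optim.SGD(model.parameters(), lr=0.1)

def fgsm(x, y, eps=8 / 255):
    x = x.clone().detach().requires_grad_(True)
    nn.functional.cross_entropy(model(x), y).backward()
    return (x + eps * x.grad.sign()).clamp(0, 1).detach()

for _ in range(3):                                # stand-in training steps
    x, y = torch.rand(16, 3, 32, 32), torch.randint(0, 10, (16,))
    x_adv = fgsm(x, y)                            # inner maximization (attack)
    loss = nn.functional.cross_entropy(model(x_adv), y)
    opt.zero_grad(); loss.backward(); opt.step()  # outer minimization (train)
```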

By exploring class robustness, VA can be used to increase the robustness of AI models against a variety of adversarial attacks, specifically gradient-based attacks.
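
VA's exact attack schedule isn't reproduced here; the sketch below only illustrates the core idea of training against a variety of gradient-based attacks by sampling a different attack per batch.

```python
# Variety-of-attacks illustration (not the VA framework itself): sample a
# different gradient-based attack (FGSM vs. multi-step PGD) for each batch.
import random
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
opt = torch.optim.SGD(model.parameters(), lr=0.1)

def gradient_attack(x, y, eps=8 / 255, steps=1):
    """FGSM when steps == 1; a PGD-style iterative attack when steps > 1."""
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        nn.functional.cross_entropy(model(x + delta), y).backward()
        with torch.no_grad():
            delta += (eps / steps) * delta.grad.sign()
            delta.clamp_(-eps, eps)
        delta.grad.zero_()
    return (x + delta).clamp(0, 1).detach()

for _ in range(3):
    x, y = torch.rand(16, 3, 32, 32), torch.randint(0, 10, (16,))
    x_adv = gradient_attack(x, y, steps=random.choice([1, 5, 10]))
    loss = nn.functional.cross_entropy(model(x_adv), y)
    opt.zero_grad(); loss.backward(); opt.step()
```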

📸 Snapshots: Some fun pics from our research projects

(Snapshots for Target-X, Fool-X, and ImagePatriot are coming soon.)

👥 Our Team

📫 Reach us

This GitHub account serves as a hub for our ongoing projects, publications, and collaborations. We welcome your engagement and encourage you to explore the exciting frontiers of AI security with us! Contact us here.
