gcmurphy

Highlights

  • Pro

Organizations

@securego

Repositories
  • osv Public

    Rust implementation of the OpenSSF OSV specification (a record-parsing sketch follows the repository list)

    Rust 13 stars 7 forks Apache License 2.0 Updated Mar 10, 2025
  • codegate Public

    Forked from stacklok/codegate

    CodeGate: Security, Workspaces and Muxing for AI Applications, coding assistants, and agentic frameworks.

    Python Apache License 2.0 Updated Mar 6, 2025
  • Agentic LLM Vulnerability Scanner / AI red teaming kit 🧪

    Python Apache License 2.0 Updated Mar 5, 2025
  • garak Public

    Forked from NVIDIA/garak

    the LLM vulnerability scanner

    Python Apache License 2.0 Updated Mar 3, 2025
  • llm-guard Public

    Forked from protectai/llm-guard

    The Security Toolkit for LLM Interactions

    Python MIT License Updated Mar 3, 2025
  • Adversarial Robustness Toolbox (ART) - Python Library for Machine Learning Security - Evasion, Poisoning, Extraction, Inference - Red and Blue Teams

    Python MIT License Updated Mar 3, 2025
  • nvtrust Public

    Forked from NVIDIA/nvtrust

    Ancillary open source software to support confidential computing on NVIDIA GPUs

    Python Apache License 2.0 Updated Feb 28, 2025
  • CLI tool for creating GitHub issues from Snyk project issues

    JavaScript 1 star Updated Feb 19, 2025
  • promptmap Public

    Forked from utkusen/promptmap

    a prompt injection scanner for custom LLM applications

    Python GNU General Public License v3.0 Updated Feb 16, 2025
  • Set of tools to assess and improve LLM security.

    Python Other Updated Feb 14, 2025
  • This repository provides implementation to formalize and benchmark Prompt Injection attacks and defenses

    Python MIT License Updated Jan 22, 2025
  • hello Public

    Updated Nov 6, 2024
  • The jailbreak-evaluation is an easy-to-use Python package for language model jailbreak evaluation.

    Python Apache License 2.0 Updated Nov 4, 2024
  • hackGPT Public

    Forked from NoDataFound/hackGPT

    I leverage OpenAI and ChatGPT to do hackerish things

    Jupyter Notebook Updated Oct 25, 2024
  • ps-fuzz Public

    Forked from prompt-security/ps-fuzz

    Make your GenAI Apps Safe & Secure 🚀 Test & harden your system prompt

    Python MIT License Updated Oct 16, 2024
  • HouYi Public

    Forked from LLMSecurity/HouYi

    The automated prompt injection framework for LLM-integrated applications.

    Python Apache License 2.0 Updated Sep 12, 2024
  • rebuff Public

    Forked from protectai/rebuff

    LLM Prompt Injection Detector

    TypeScript Apache License 2.0 Updated Aug 7, 2024
  • Universal and Transferable Attacks on Aligned Language Models

    Python MIT License Updated Aug 2, 2024
    Whistleblower is an offensive security tool for testing against system prompt leakage and capability discovery of an AI application exposed through an API. Built for AI engineers, security researchers …

    Python Updated Jul 28, 2024
  • source for llmsec.net

    Updated Jul 24, 2024
  • offsecml Public

    Forked from 5stars217/offsecml

    source code for the offsecml framework

    Updated Jun 6, 2024
  • Dropbox LLM Security research code and results

    Python Apache License 2.0 Updated May 21, 2024
  • PromptInject is a framework that assembles prompts in a modular fashion to provide a quantitative analysis of the robustness of LLMs to adversarial prompt attacks. 🏆 Best Paper Awards @ NeurIPS ML …

    Python MIT License Updated Feb 26, 2024
  • cvss Public

    Rust library for working with CVSS (a vector-string parsing sketch follows the repository list)

    Rust 1 star Apache License 2.0 Updated Feb 14, 2024
  • LLMFuzzer Public

    Forked from mnns/LLMFuzzer

    🧠 LLMFuzzer - Fuzzing Framework for Large Language Models 🧠 LLMFuzzer is the first open-source fuzzing framework specifically designed for Large Language Models (LLMs), especially for their integra…

    Python MIT License Updated Feb 12, 2024
  • vigil-llm Public

    Forked from deadbits/vigil-llm

    ⚡ Vigil ⚡ Detect prompt injections, jailbreaks, and other potentially risky Large Language Model (LLM) inputs

    Python Apache License 2.0 Updated Jan 31, 2024
  • mitmproxy Public

    Forked from mitmproxy/mitmproxy

    An interactive TLS-capable intercepting HTTP proxy for penetration testers and software developers. (An addon sketch follows the repository list.)

    Python 1 star MIT License Updated Jan 25, 2024
  • plexiglass Public

    Forked from safellama/plexiglass

    A toolkit for detecting and protecting against vulnerabilities in Large Language Models (LLMs).

    Python Apache License 2.0 Updated Dec 25, 2023
  • BeaverTails is a collection of datasets designed to facilitate research on safety alignment in large language models (LLMs).

    Makefile Apache License 2.0 Updated Oct 27, 2023
  • counterfit Public

    Forked from Azure/counterfit

    a CLI that provides a generic automation layer for assessing the security of ML models

    Python MIT License Updated Oct 4, 2023
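For the osv crate listed above: an OSV advisory is a JSON document whose top-level fields (id, modified, summary, affected, ...) are defined by the OpenSSF OSV schema. The Python sketch below parses a minimal record for illustration only; it does not reproduce the Rust crate's actual types or API, which may differ.

```python
import json

# Minimal OSV record using field names from the published OSV schema
# (schema_version, id, modified, summary, affected[...]).
# The advisory ID below is a placeholder, not a real advisory.
raw = """
{
  "schema_version": "1.6.0",
  "id": "OSV-EXAMPLE-0001",
  "modified": "2025-01-01T00:00:00Z",
  "summary": "Example advisory for illustration",
  "affected": [
    {"package": {"ecosystem": "crates.io", "name": "example-crate"}, "versions": ["0.1.0"]}
  ]
}
"""

record = json.loads(raw)
# Pull the advisory ID and the affected package names out of the record.
packages = [entry["package"]["name"] for entry in record.get("affected", [])]
print(record["id"], packages)
```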
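For the cvss crate listed above: a CVSS v3.1 vector string encodes the base metrics as slash-separated key:value pairs. The Python sketch below parses one such string for illustration; it is not the Rust crate's API.

```python
# Parse a CVSS v3.1 vector string into its base metrics.
# Illustrative only; the cvss Rust crate's actual API is not shown here.
vector = "CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H"

prefix, *parts = vector.split("/")
assert prefix == "CVSS:3.1", "unexpected CVSS version prefix"

# e.g. {'AV': 'N', 'AC': 'L', 'PR': 'N', 'UI': 'N', 'S': 'U', 'C': 'H', 'I': 'H', 'A': 'H'}
metrics = dict(part.split(":", 1) for part in parts)
print(metrics["AV"], metrics["C"])
```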
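For the mitmproxy fork listed above: mitmproxy is scripted with small Python addons that implement event hooks such as request. A minimal sketch, run with `mitmdump -s addon.py`, might look like this.

```python
# addon.py - minimal mitmproxy addon: log every proxied request's method, host and path.
from mitmproxy import http


class RequestLogger:
    def request(self, flow: http.HTTPFlow) -> None:
        # Called by mitmproxy for each client request flowing through the proxy.
        print(f"{flow.request.method} {flow.request.pretty_host}{flow.request.path}")


addons = [RequestLogger()]
```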