HydroX AI
Popular repositories
- pii-masker (Public): PII Masker is an open-source tool for protecting sensitive data by automatically detecting and masking PII using advanced AI, powered by DeBERTa-v3. It provides high-precision detection, scalable p… (a usage sketch follows this list)
- Enhancing-Safety-in-Large-Language-Models (Public, Jupyter Notebook): Precision Knowledge Editing (PKE), a novel method to reduce toxicity in LLMs while preserving performance, with robust evaluations and hands-on demonstrations.
- react-native-logs (Public, TypeScript; forked from mowispace/react-native-logs): Performance-aware simple logger for React Native and Expo with namespaces, custom levels, and custom transports (colored console, file writing, etc.).
- go-openai (Public, Go; forked from sashabaranov/go-openai): OpenAI ChatGPT, GPT-3, GPT-4, DALL·E, Whisper API wrapper for Go.
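
Below is a minimal sketch of the kind of DeBERTa-v3-based PII masking that pii-masker describes, built on the Hugging Face token-classification pipeline. The model checkpoint, label handling, and the `mask_pii` helper are illustrative assumptions, not pii-masker's actual API; substitute the checkpoint shipped with the repository.

```python
# Minimal PII-masking sketch. Assumption: a DeBERTa-v3 checkpoint fine-tuned for
# PII token classification; "microsoft/deberta-v3-base" is only a placeholder.
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="microsoft/deberta-v3-base",  # placeholder, not a PII-tuned model
    aggregation_strategy="simple",
)

def mask_pii(text: str) -> str:
    """Replace each detected PII span with a [LABEL] placeholder (illustrative helper)."""
    # Process spans right-to-left so earlier character offsets stay valid after replacement.
    for ent in sorted(ner(text), key=lambda e: e["start"], reverse=True):
        text = text[:ent["start"]] + f"[{ent['entity_group']}]" + text[ent["end"]:]
    return text

print(mask_pii("Contact Jane Doe at jane.doe@example.com or +1-202-555-0175."))
```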
Repositories
- DrAttack (Public, forked from xirui-li/DrAttack): Official implementation of the paper "DrAttack: Prompt Decomposition and Reconstruction Makes Powerful LLM Jailbreakers".
- TextAttack (Public, forked from QData/TextAttack): TextAttack 🐙 is a Python framework for adversarial attacks, data augmentation, and model training in NLP. Docs: https://textattack.readthedocs.io/en/master/ (a usage sketch follows this list)
- TOXIGEN (Public, forked from microsoft/TOXIGEN): Code for generating the ToxiGen dataset, published at ACL 2022.
- DRA (Public, forked from LLM-DRA/DRA): [USENIX Security '24] Official repository of "Making Them Ask and Answer: Jailbreaking Large Language Models in Few Queries via Disguise and Reconstruction".
- ReNeLLM (Public, forked from NJUNLP/ReNeLLM): Official implementation of the NAACL 2024 paper "A Wolf in Sheep’s Clothing: Generalized Nested Jailbreak Prompts can Fool Large Language Models Easily".
- GPTFuzz (Public, forked from sherdencooper/GPTFuzz): Official repo for GPTFUZZER: Red Teaming Large Language Models with Auto-Generated Jailbreak Prompts.
- AutoDAN (Public, forked from SheltonLiu-N/AutoDAN): Official implementation of the ICLR 2024 paper "AutoDAN: Generating Stealthy Jailbreak Prompts on Aligned Large Language Models".
- nanoGCG (Public, forked from GraySwanAI/nanoGCG): A fast and lightweight implementation of the GCG algorithm in PyTorch.
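
For the TextAttack fork above, here is a short usage sketch based on its documented attack-recipe API. The TextFooler recipe, the IMDB classifier, and the sample count are example choices, not this organization's configuration.

```python
# Run the TextFooler attack recipe against a public sentiment classifier.
import transformers
from textattack import Attacker, AttackArgs
from textattack.attack_recipes import TextFoolerJin2019
from textattack.datasets import HuggingFaceDataset
from textattack.models.wrappers import HuggingFaceModelWrapper

name = "textattack/bert-base-uncased-imdb"  # example victim model
model = transformers.AutoModelForSequenceClassification.from_pretrained(name)
tokenizer = transformers.AutoTokenizer.from_pretrained(name)
wrapper = HuggingFaceModelWrapper(model, tokenizer)

attack = TextFoolerJin2019.build(wrapper)           # build the attack recipe
dataset = HuggingFaceDataset("imdb", split="test")  # examples to perturb
attacker = Attacker(attack, dataset, AttackArgs(num_examples=5))
attacker.attack_dataset()                           # prints per-example attack results
```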
People
This organization has no public members.