zhaoxu98/README.md


🍊 About Me

🍵 Skills

🍨 Others

GitHub Contribution Snake

Pinned

  1. usail-hkust/JailTrickBench

    Bag of Tricks: Benchmarking of Jailbreak Attacks on LLMs. Empirical tricks for LLM Jailbreaking. (NeurIPS 2024)

    Python · 131 stars · 9 forks

  2. ThuCCSLab/Awesome-LM-SSP

    A reading list for large model safety, security, and privacy (including Awesome LLM Security, Safety, etc.).

    1.3k stars · 85 forks

  3. usail-hkust/LLMTSCS

    Official code for the article "LLMLight: Large Language Models as Traffic Signal Control Agents".

    Python · 193 stars · 26 forks

  4. usail-hkust/Awesome-Urban-Foundation-Models

    An Awesome Collection of Urban Foundation Models (UFMs).

    160 stars · 14 forks

  5. usail-hkust/Jailjudge

    JAILJUDGE: A comprehensive evaluation benchmark that includes a wide range of risk scenarios with complex malicious prompts (e.g., synthetic, adversarial, in-the-wild, and multi-language scenarios…

    Python · 41 stars

  6. SheltonLiu-N/AutoDAN

    [ICLR 2024] The official implementation of our paper "AutoDAN: Generating Stealthy Jailbreak Prompts on Aligned Large Language Models".

    Python · 308 stars · 47 forks