diff --git a/docs/hub/_toctree.yml b/docs/hub/_toctree.yml
index 611d832f0..ab5e9b512 100644
--- a/docs/hub/_toctree.yml
+++ b/docs/hub/_toctree.yml
@@ -390,6 +390,8 @@
title: Secrets Scanning
- local: security-protectai
title: "Protect AI"
+ - local: security-jfrog
+ title: "JFrog"
- local: moderation
title: Moderation
- local: paper-pages
diff --git a/docs/hub/security-jfrog.md b/docs/hub/security-jfrog.md
new file mode 100644
index 000000000..49f7d118e
--- /dev/null
+++ b/docs/hub/security-jfrog.md
@@ -0,0 +1,27 @@
+# Third-party scanner: JFrog
+
+[JFrog](https://jfrog.com/)'s security scanner detects malicious behavior in machine learning models.
+
+*Example of a report for [danger.dat](https://huggingface.co/mcpotato/42-eicar-street/blob/main/danger.dat)*
+
+We have partnered with JFrog to provide scanning and make the Hub safer. Model files are scanned by the JFrog scanner, and the results are reported directly on the Hub.
+
+JFrog's scanner is built with the goal of reducing false positives: code contained within model weights is not always malicious. When code is detected in a file, JFrog's scanner parses and analyzes it to check for potentially malicious usage.
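+
+For instance, a perfectly benign checkpoint already references Python callables in its pickle stream (PyTorch files point at helpers such as `torch._utils._rebuild_tensor_v2`), so treating every such reference as malicious would drown users in false positives. Here is a small standard-library illustration of what those harmless references look like:
+
+```python
+import pickle
+import pickletools
+
+# Stand-in for a config object; real PyTorch checkpoints reference
+# helpers such as torch._utils._rebuild_tensor_v2 in the same way.
+class TinyConfig:
+    def __init__(self, hidden_size):
+        self.hidden_size = hidden_size
+
+data = pickle.dumps(TinyConfig(hidden_size=128), protocol=2)
+
+# The opcode stream references a Python callable (the class itself) even
+# though nothing here is malicious, so "code detected" alone is a weak signal.
+for opcode, arg, _ in pickletools.genops(data):
+    if opcode.name == "GLOBAL":
+        print(opcode.name, arg)  # GLOBAL __main__ TinyConfig
+```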
+
+Here is an example repository you can check out to see the feature in action: [mcpotato/42-eicar-street](https://huggingface.co/mcpotato/42-eicar-street).
+
+## Model security refresher
+
+To share models, we serialize the data structures we use to interact with them, which facilitates storage and transport. Some serialization formats are vulnerable to nasty exploits, such as arbitrary code execution (looking at you, pickle), which makes sharing models potentially dangerous.
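+
+To make this concrete, below is a minimal, intentionally harmless sketch of how a pickle can carry an exploit: the object's `__reduce__` method tells the unpickler to call an arbitrary function, so merely loading the file runs attacker-chosen code (here just a harmless `echo`).
+
+```python
+import os
+import pickle
+
+class Payload:
+    # __reduce__ lets an object dictate how it is rebuilt; an attacker
+    # can abuse it to make the unpickler call any function they like.
+    def __reduce__(self):
+        return (os.system, ("echo pwned",))
+
+malicious_bytes = pickle.dumps(Payload())
+
+# Simply deserializing the bytes executes the embedded command.
+pickle.loads(malicious_bytes)  # prints "pwned"
+```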
+
+As Hugging Face has become a popular platform for model sharing, we want to protect the community from this, which is why we have developed tools like [picklescan](https://github.com/mmaitre314/picklescan) and why we integrate third-party scanners.
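+
+Scanners of this kind typically inspect the pickle opcode stream statically, without ever deserializing it, and flag references to dangerous callables. The following is a simplified sketch of that idea with a deliberately tiny denylist; it is not picklescan's (or JFrog's) actual implementation:
+
+```python
+import pickle
+import pickletools
+
+# Deliberately tiny, illustrative denylist; real scanners ship much richer
+# rule sets and tune them to keep false positives down.
+SUSPICIOUS_GLOBALS = {
+    ("builtins", "eval"),
+    ("builtins", "exec"),
+    ("os", "system"),
+    ("posix", "system"),
+    ("nt", "system"),
+}
+
+def flag_suspicious(pickled_bytes):
+    """Statically flag dangerous callables referenced by a pickle."""
+    findings = []
+    recent_strings = []
+    for opcode, arg, _ in pickletools.genops(pickled_bytes):
+        if isinstance(arg, str):
+            recent_strings.append(arg)
+        if opcode.name == "GLOBAL":
+            # Older protocols encode the reference as "module name".
+            module, name = arg.split(" ", 1)
+        elif opcode.name == "STACK_GLOBAL":
+            # Newer protocols push module and name as strings first; this toy
+            # version assumes they appear literally right before the opcode.
+            module, name = recent_strings[-2], recent_strings[-1]
+        else:
+            continue
+        if (module, name) in SUSPICIOUS_GLOBALS:
+            findings.append(f"{module}.{name}")
+    return findings
+
+class EvilEval:
+    def __reduce__(self):
+        return (eval, ("print('pwned')",))
+
+# The file is never deserialized, yet the dangerous reference is spotted.
+print(flag_suspicious(pickle.dumps(EvilEval())))  # ['builtins.eval']
+```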
+
+Pickle is not the only exploitable format out there; [see for reference](https://github.com/Azure/counterfit/wiki/Abusing-ML-model-file-formats-to-create-malware-on-AI-systems:-A-proof-of-concept) how one can exploit Keras Lambda layers to achieve arbitrary code execution.
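+
+Roughly, that Keras vector looks like the sketch below. It assumes an older TensorFlow/Keras setup that still deserializes marshalled `Lambda` functions without a safety check (newer releases typically refuse to load such layers unless unsafe deserialization is explicitly enabled), and a harmless `print` stands in for the attacker's code:
+
+```python
+import tensorflow as tf
+
+# The Lambda layer's anonymous function is serialized as Python bytecode
+# inside the saved model file; anything could go where the print() is.
+evil = tf.keras.layers.Lambda(lambda x: (print("attacker code runs here"), x)[1])
+
+model = tf.keras.Sequential([tf.keras.Input(shape=(1,)), evil])
+model.save("innocent_looking_model.h5")
+
+# On the victim's machine, simply loading the file rebuilds the graph and
+# runs the embedded function along the way.
+loaded = tf.keras.models.load_model("innocent_looking_model.h5")
+```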
+
diff --git a/docs/hub/security.md b/docs/hub/security.md
index a4190a6eb..a84763a69 100644
--- a/docs/hub/security.md
+++ b/docs/hub/security.md
@@ -21,4 +21,5 @@ For any other security questions, please feel free to send us an email at securi
- [Pickle Scanning](./security-pickle)
- [Secrets Scanning](./security-secrets)
- [Third-party scanner: Protect AI](./security-protectai)
+- [Third-party scanner: JFrog](./security-jfrog)
- [Resource Groups](./security-resource-groups)