diff --git a/.cursor/rules/mintlify-api-automation.mdc b/.cursor/rules/mintlify-api-automation.mdc new file mode 100644 index 0000000..5e5e1b8 --- /dev/null +++ b/.cursor/rules/mintlify-api-automation.mdc @@ -0,0 +1,296 @@ +--- +description: +globs: +alwaysApply: false +--- +# OpenAPI Setup +Source: https://mintlify.com/docs/api-playground/openapi/setup + +Reference OpenAPI endpoints in your docs pages + +## Add an OpenAPI specification file + +To describe your endpoints with OpenAPI, make sure you have a valid OpenAPI +document in either JSON or YAML format that follows the +[OpenAPI specification](https://swagger.io/specification/). Your document must +follow OpenAPI specification 3.0+. + + +To validate your OpenAPI spec, use our [CLI](https://www.npmjs.com/package/mintlify) and run this command:
`mintlify openapi-check <path-to-openapi-file>`
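For reference, a minimal sketch of a document that satisfies the 3.0 specification needs only the `openapi`, `info`, and `paths` fields; the title, server URL, and `/plants/{id}` endpoint below are illustrative placeholders:

```yaml
openapi: 3.0.3
info:
  title: Plant Store API        # hypothetical API name
  version: 1.0.0
servers:
  - url: https://api.example.com/v1   # base URL used by the API Playground
paths:
  /plants/{id}:
    get:
      summary: Get a plant
      parameters:
        - name: id
          in: path
          required: true
          schema:
            type: integer
      responses:
        "200":
          description: The requested plant
```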
+ +## Auto-populate API pages + +The fastest way to get started with OpenAPI is to add an `openapi` field to a tab in the `docs.json`. This field can contain either the path to an OpenAPI document in your docs repo, or the URL of a hosted OpenAPI document. Mintlify will automatically generate a page for each OpenAPI operation and place them in the tab. + +**Example with Tabs:** + +```json {5} +"navigation": { + "tabs": [ + { + "tab": "API Reference", + "openapi": "https://petstore3.swagger.io/api/v3/openapi.json" + } + ] +} +``` + +![](https://mintlify.s3.us-west-1.amazonaws.com/mintlify/images/autogeneration-with-tabs.png) + +**Example with Groups:** + +```json {8-11} +"navigation": { + "tabs": [ + { + "tab": "API Reference", + "groups": [ + { + "group": "Endpoints", + "openapi": { + "source": "/path/to/openapi-1.json", + "directory": "api-reference" + } + } + ] + } + ] +} +``` + +When using this option, the metadata for the generated pages will have the following default values: + +* `title`: The `summary` field from the OpenAPI operation, if present. Otherwise a title generated from the HTTP method and endpoint. + +* `description`: The `description` field from the OpenAPI operation, if present. + +* `version`: The `version` value from the anchor or tab, if present. + +There are some scenarios in which the default behavior isn't sufficient. If you need more customizability, you can create an MDX page for your OpenAPI operation, and modify it just like any other MDX page. + +## Create MDX files for API pages + +If you want to customize the page metadata, add additional content, omit certain OpenAPI operations, or reorder OpenAPI pages in your navigation, you'll need an MDX page for each operation. Here is [an example MDX OpenAPI page](https://github.com/mindsdb/mindsdb/blob/main/docs/rest/databases/create-databases.mdx) from [MindsDB](https://docs.mindsdb.com/rest/databases/create-databases). 
+ +![](https://mintlify.s3.us-west-1.amazonaws.com/mintlify/images/mindsdb.png) + +### Manually specify files + +You can always create an MDX page manually, and reference the OpenAPI operation in the page's metadata using the `openapi` field. + + + +By using the OpenAPI reference, the name, description, parameters, responses, +and the API playground will be automatically generated from the OpenAPI document. + +If you have multiple OpenAPI files, include the path to the OpenAPI file to ensure Mintlify finds the correct OpenAPI document. This is not required if you have +only one OpenAPI file - it will automatically detect your OpenAPI file. + + + ```md Example + --- + title: "Get users" + openapi: "/path/to/openapi-1.json GET /users" + --- + ``` + + ```md Format + --- + title: "title of the page" + openapi: openapi-file-path method path + --- + ``` + + +
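Manually created pages are then listed in your navigation by file path, just like any other MDX page; a sketch assuming a hypothetical `api-reference/get-users.mdx` file:

```json
"navigation": {
  "tabs": [
    {
      "tab": "API Reference",
      "groups": [
        {
          "group": "Users",
          "pages": ["api-reference/get-users"]
        }
      ]
    }
  ]
}
```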
+ + + In most cases, the method and path must match the method and path specified + in the OpenAPI document exactly. If the endpoint doesn't exist in the OpenAPI + file, the page will be empty. + + For webhooks, replace the method (i.e. "POST") with "webhook" (case insensitive) + and the correct method will be generated. + + +### Autogenerate files + +For large OpenAPI documents, creating one MDX page for each OpenAPI operation can be a lot of work. To make it easier, we created a local OpenAPI page scraper. + +Our Mintlify [scraper](https://www.npmjs.com/package/@mintlify/scraping) +autogenerates MDX files for your OpenAPI endpoints. + +Each generated page will correspond to an OpenAPI operation under the "paths" section of the OpenAPI schema. +If your OpenAPI document is version 3.1+, the scraper will also generate pages for webhooks under the "webhooks" section of the OpenAPI schema. + +```bash +npx @mintlify/scraping@latest openapi-file +``` + +Add the `-o` flag to specify a folder to populate the files into. If a folder is +not specified, the files will populate in the working directory. + +```bash +npx @mintlify/scraping@latest openapi-file -o api-reference +``` + +Learn more about our scraping package [here](https://www.npmjs.com/package/@mintlify/scraping). + +The scraper will output an array of +[Navigation entries](/settings/global#structure) containing your OpenAPI MDX +files. You can either append these entries to your existing Navigation, or +reorder and add the files to your navigation manually. + + + If your OpenAPI document is invalid, the files will not autogenerate. 
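For example, if the scraper emitted pages for two hypothetical user endpoints, the entry you append to your navigation might look like:

```json
{
  "group": "Users",
  "pages": [
    "api-reference/get-users",
    "api-reference/create-user"
  ]
}
```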
+ + +## Create MDX files for OpenAPI schemas + +Mintlify also allows you to create individual pages for any OpenAPI schema +defined in an OpenAPI document's `components.schemas` field: + + + ```md Example + --- + openapi-schema: OrderItem + --- + ``` + + ```md Format + --- + openapi-schema: "schema-key" + --- + ``` + + + +# Writing OpenAPI +Source: https://mintlify.com/docs/api-playground/openapi/writing-openapi + +Use OpenAPI features to enhance your documentation + +## Describing your API + +There are many great tools online for learning about and constructing OpenAPI documents. Here are our favorites: + +* [Swagger's OpenAPI Guide](https://swagger.io/docs/specification/about/) for familiarizing yourself with the OpenAPI syntax +* [OpenAPI v3.1.0 Specification](https://github.com/OAI/OpenAPI-Specification/blob/main/versions/3.1.0.md) for all the details about the newest OpenAPI specification +* [Swagger & OpenAPI Validator](https://editor.swagger.io/) for debugging your OpenAPI document +* [Swagger's Editor](https://editor.swagger.io/) for seeing examples in action + + + Swagger's OpenAPI Guide is for OpenAPI v3.0, but nearly all of the information is applicable to v3.1. For more information on the differences between v3.0 and v3.1, check out [OpenAPI's blog post](https://www.openapis.org/blog/2021/02/16/migrating-from-openapi-3-0-to-3-1-0). + + +## Specifying the URL for your API + +In an OpenAPI document, different API endpoints are specified by their paths, like `/users/{id}`, or maybe simply `/`. To specify the base URL to which these paths should be appended, OpenAPI provides the `servers` field. This field is necessary to use some Mintlify features like the API Playground. Read how to configure the `servers` field in the [Swagger documentation](https://swagger.io/docs/specification/api-host-and-base-path/). + +The API Playground will use these server URLs to determine where to send requests. 
If multiple servers are specified, a dropdown will appear to allow toggling between servers. If no server is supplied, the API Playground will use simple mode, as there is no way to send a request. + +If different endpoints within your API exist at different URLs, you can [override the server field](https://swagger.io/docs/specification/api-host-and-base-path/#:~:text=%C2%A0%C2%A0%C2%A0%C2%A0%C2%A0%C2%A0%C2%A0%C2%A0%C2%A0%20%2D%20southeastasia-,Overriding%20Servers,-The%20global%20servers) for a given path or operation. + +## Specifying authentication + +Nearly all APIs require some method of authentication. OpenAPI provides the `securitySchemes` field for defining the methods of authentication used throughout your API, with simple configuration for the most common authentication types - [Basic](https://swagger.io/docs/specification/authentication/basic-authentication/), [Bearer](https://swagger.io/docs/specification/authentication/bearer-authentication/), and [API Keys](https://swagger.io/docs/specification/authentication/api-keys/). To apply these authentication methods to your endpoints, OpenAPI uses the `security` field. The syntax for defining and applying authentication is a bit unintuitive, so definitely check out [Swagger's documentation and examples](https://swagger.io/docs/specification/authentication/) on the topic. + +The API descriptions and API Playground will add authentication fields based on the security configurations in your OpenAPI document. + +If different endpoints within your API require different methods of authentication, you can [override the security field](https://swagger.io/docs/specification/authentication/#:~:text=you%20can%20apply%20them%20to%20the%20whole%20API%20or%20individual%20operations%20by%20adding%20the%20security%20section%20on%20the%20root%20level%20or%20operation%20level%2C%20respectively.) for a given operation. 
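As a sketch, a bearer-token scheme defined once and applied to every operation looks like this in an OpenAPI document (the scheme name `bearerAuth` is an arbitrary placeholder):

```yaml
components:
  securitySchemes:
    bearerAuth:          # arbitrary name, referenced below
      type: http
      scheme: bearer
security:
  - bearerAuth: []       # applies the scheme API-wide
```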
+ + +# Playground +Source: https://mintlify.com/docs/api-playground/overview + +GET /plants/{id} +Enable users to interact with your API + +The API playground is an interactive environment to make requests and preview an API endpoint. + + + Autogenerating API pages with OpenAPI will automatically generate API + playground. Read more about using OpenAPI [here](/api-playground/openapi). + + + +# Troubleshooting +Source: https://mintlify.com/docs/api-playground/troubleshooting + +Common issues with API References + +API pages are complicated. As a result, there are a lot of things that can go wrong. +Here's a list of common issues we've seen customers run into: + + + + In this scenario, it's likely that either Mintlify cannot find your OpenAPI document, + or your OpenAPI document is invalid. + + Running `mintlify dev` locally should reveal some of these issues. + + To verify your OpenAPI document will pass validation: + + 1. Visit [this validator](https://apitools.dev/swagger-parser/online/) + 2. Switch to the "Validate text" tab + 3. Paste in your OpenAPI document + 4. Click "Validate it!" + + If the text box that appears below has a green border, your document has passed validation. + This is the exact validation package Mintlify uses to validate OpenAPI documents, so if your document + passes validation here, there's a great chance the problem is elsewhere. + + Additionally, Mintlify does not support OpenAPI 2.0. If your document uses this version of the specification, + you could encounter this issue. You can convert your document at [editor.swagger.io](https://editor.swagger.io/) (under Edit > Convert to OpenAPI 3): + + + + + + + + This is usually caused by a misspelled `openapi` field in the page metadata. Make sure + the HTTP method and path match the HTTP method and path in the OpenAPI document exactly. 
+ + Here's an example of how things might go wrong: + + ```md get-user.mdx + --- + openapi: "GET /users/{id}/" + --- + ``` + + ```yaml openapi.yaml + paths: + "/users/{id}": + get: ... + ``` + + Notice that the path in the `openapi` field has a trailing slash, whereas the path in the OpenAPI + document does not. + + Another common issue is a misspelled filename. If you are specifying a particular OpenAPI document + in the `openapi` field, ensure the filename is correct. For example, if you have two OpenAPI + documents `openapi/v1.json` and `openapi/v2.json`, your metadata might look like this: + + ```md api-reference/v1/users/get-user.mdx + --- + openapi: "v1 GET /users/{id}" + --- + ``` + + + + If you have a custom domain configured, this could be an issue with your reverse proxy. By + default, requests made via the API Playground start with a `POST` request to the + `/api/request` path on the docs site. If your reverse proxy is configured to only allow `GET` + requests, then all of these requests will fail. To fix this, configure your reverse proxy to + allow `POST` requests to the `/api/request` path. + + Alternatively, if your reverse proxy prevents you from accepting `POST` requests, you can configure + Mintlify to send requests directly to your backend with the `api.playground.disableProxy` + setting in the `docs.json`, as described [here](/settings/global#api-configurations). This will + likely require you to configure CORS on your server, as these requests will now come directly + from your users' browsers. 
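Assuming the setting name given above, the corresponding `docs.json` fragment would look something like:

```json
"api": {
  "playground": {
    "disableProxy": true
  }
}
```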
+ + \ No newline at end of file diff --git a/cloud/assets/grouping.mdx b/cloud/assets/grouping.mdx new file mode 100644 index 0000000..a70c91d --- /dev/null +++ b/cloud/assets/grouping.mdx @@ -0,0 +1,63 @@ +--- +title: "Dynamic Asset Grouping" +description: "Create and manage filtered asset groups for targeted visibility" +sidebarTitle: "Dynamic Grouping" +--- + +Dynamic asset grouping allows you to save and organize filtered asset views, making it easier to focus on specific subsets of your infrastructure. By creating saved filter combinations, security teams can quickly access the assets that matter most to them without having to reapply complex filter conditions each time. + + +Dynamic groups source their data from parent asset discoveries and cannot be rescanned or refreshed independently. Any updates to the underlying assets will automatically reflect in the dynamic groups. + + +## Creating Dynamic Asset Groups + +1. Navigate to your discovered **Assets groups** +2. Apply filters to your assets using the filter option (e.g., by label, technology, port, status) +3. Once you've created the desired view, click **Save filters** +4. Enter a descriptive name +5. Click **Save** to create your dynamic group + + + ![Creating a dynamic asset group](/images/assets-dynamic-group-create.png) + + +Your saved dynamic groups will appear in the same [list as your assets](https://cloud.projectdiscovery.io/assets). From here you can: + +- **Access groups:** Click any group name to instantly view that filtered subset +- **Edit groups:** Hover over a group name and click the edit icon to modify the filters +- **Delete groups:** Remove groups that are no longer needed via the group settings menu +- **Share groups:** Generate shareable links for specific groups (Enterprise plan only) + +## Use Cases and Best Practices + +Dynamic asset groups offer versatile applications across your organization's security workflows. 
Create team-specific views (DevOps focusing on cloud technologies, Security teams monitoring vulnerabilities, Compliance teams tracking regulated systems), environment-specific groups (production, development, third-party integrations), and security-priority filters (critical infrastructure, public-facing systems, legacy technologies). For optimal results, maintain descriptive naming conventions, document each group's purpose, regularly review and update as infrastructure evolves, limit group quantity to maintain focus, and combine with custom labels for more powerful filtering. This approach streamlines asset management while providing targeted visibility where it matters most. + +## Limitations + +- Dynamic groups cannot be targeted for independent rescans +- The results in a dynamic group will always reflect the most recent state of the parent discovery +- Filter conditions apply only to discovered attributes - custom data cannot be used for filtering + + +When the parent asset discovery is updated or rescanned, all associated dynamic groups will automatically reflect the new data without any manual intervention required. + + +## Related Features + + + + Learn how to use labels to better organize and filter your assets + + + Create and apply custom labels to enhance your grouping strategy + + \ No newline at end of file diff --git a/cloud/integrations.mdx b/cloud/integrations.mdx index 7eaacbd..b88b22a 100644 --- a/cloud/integrations.mdx +++ b/cloud/integrations.mdx @@ -160,8 +160,7 @@ For detailed API documentation, refer to the [Linear API Documentation](https:// ## Cloud Asset Discovery -ProjectDiscovery leverages our open-source [Cloudlist](https://github.com/projectdiscovery/cloudlist) technology to provide comprehensive cloud asset discovery and management through a simple web interface. - +ProjectDiscovery supports integrations with all popular cloud providers to automatically sync externally facing hosts for vulnerability scanning. 
This comprehensive approach ensures all your cloud resources with external exposure are continuously monitored, complementing our external discovery capabilities. The result is complete visibility of your attack surface across cloud environments through a simple web interface. ### AWS (Amazon Web Services) ProjectDiscovery's AWS integration allows the platform to automatically discover and monitor cloud assets across your AWS accounts. By connecting AWS to ProjectDiscovery, security teams and DevOps engineers gain continuous visibility into EC2 instances, S3 buckets, DNS records, and other resources without manual inventory. This integration leverages ProjectDiscovery's open-source **Cloudlist** engine to enumerate assets via AWS APIs. In short, it helps ensure no cloud asset goes unnoticed, enabling proactive security monitoring and easier management of your attack surface. diff --git a/cloud/scanning/internal-scan.mdx b/cloud/scanning/internal-scan.mdx index 125a9a1..790f1c8 100644 --- a/cloud/scanning/internal-scan.mdx +++ b/cloud/scanning/internal-scan.mdx @@ -8,12 +8,12 @@ Internal network security is critical yet often overlooked. Once attackers gain ProjectDiscovery offers two distinct approaches for internal network vulnerability scanning, each designed to fit different organizational needs while maintaining our core focus on exploitability and accurate detection. - - Run Nuclei locally and upload results to PDCP. Ideal for teams with existing scanning workflows or specific network restrictions. - Use TunnelX for remote scan triggering through PDCP UI. Perfect for large networks and centralized security management. + + Run Nuclei locally and upload results to PDCP. Ideal for teams with existing scanning workflows or specific network restrictions. + @@ -43,55 +43,7 @@ The discovered ports can be used as input for vulnerability scanning to ensure t Naabu will soon be integrated directly into ProjectDiscovery's internal vulnerability scanning capabilities. 
Contact our [sales team](https://projectdiscovery.io/request-demo) to be notified when this feature becomes available. -## Method 1: Local Scanning & Upload - -This approach lets you run Nuclei locally and upload results to ProjectDiscovery Cloud Platform (ProjectDiscovery). - -### Set up your API Key - -To connect your existing Nuclei results to PDCP you will need to create a free API Key: - -1. Visit https://cloud.projectdiscovery.io -2. Open the setting menu from the top right and select "API Key" to create your API Key -PDCP API Key -3. Use the `nuclei -auth` command, and enter your API key when prompted. - -### Configure Team (Optional) - -If you want to upload the scan results to a team workspace instead of your personal workspace, you can configure the Team ID using either method: - -- **Obtain Team ID:** - - Navigate to [https://cloud.projectdiscovery.io/settings/team](https://cloud.projectdiscovery.io/settings/team) - - Copy the Team ID from the top right section - Obtain Team ID - -- **CLI Option:** - ```bash - nuclei -tid XXXXXX -cloud-upload - ``` - -- **ENV Variable:** - ```bash - export PDCP_TEAM_ID=XXXXX - ``` - -2. Run your scan with the upload flag: -```bash -# Single target -nuclei -u http://internal-target -cloud-upload - -# Multiple targets -nuclei -l internal-hosts.txt -cloud-upload - -# With specific templates -nuclei -u http://internal-target -t misconfiguration/ -cloud-upload -``` - - -This method is ideal when you want to maintain complete control over scan execution or integrate with existing automation scripts. - - -## Method 2: Cloud-Managed Scanning (Recommended) +## Method 1: Cloud-Managed Scanning (Recommended) [TunnelX](https://github.com/projectdiscovery/tunnelx) is our open-source tunneling solution, purpose-built by ProjectDiscovery to enable secure internal scanning. 
It establishes isolated SOCKS5 proxies that let you trigger scans directly from the ProjectDiscovery interface while ensuring your internal infrastructure remains protected and unexposed. @@ -176,3 +128,51 @@ TunnelX connections appear automatically in the dropdown once properly configure **Important:** Only input targets that are actually accessible from the machine where TunnelX is running. For example, if TunnelX is deployed on a server in your internal network (192.168.1.0/24), it can only scan hosts within that network or other networks it has routing access to. If you input external targets or internal hosts that the TunnelX machine cannot reach, the scan will fail. + +## Method 2: Local Scanning & Upload + +This approach lets you run Nuclei locally and upload results to ProjectDiscovery Cloud Platform (ProjectDiscovery). + +### Set up your API Key + +To connect your existing Nuclei results to PDCP you will need to create a free API Key: + +1. Visit https://cloud.projectdiscovery.io +2. Open the setting menu from the top right and select "API Key" to create your API Key +PDCP API Key +3. Use the `nuclei -auth` command, and enter your API key when prompted. + +### Configure Team (Optional) + +If you want to upload the scan results to a team workspace instead of your personal workspace, you can configure the Team ID using either method: + +- **Obtain Team ID:** + - Navigate to [https://cloud.projectdiscovery.io/settings/team](https://cloud.projectdiscovery.io/settings/team) + - Copy the Team ID from the top right section + Obtain Team ID + +- **CLI Option:** + ```bash + nuclei -tid XXXXXX -cloud-upload + ``` + +- **ENV Variable:** + ```bash + export PDCP_TEAM_ID=XXXXX + ``` + +2. 
Run your scan with the upload flag: +```bash +# Single target +nuclei -u http://internal-target -cloud-upload + +# Multiple targets +nuclei -l internal-hosts.txt -cloud-upload + +# With specific templates +nuclei -u http://internal-target -t misconfiguration/ -cloud-upload +``` + + +This method is ideal when you want to maintain complete control over scan execution or integrate with existing automation scripts. + diff --git a/home.mdx b/home.mdx index c11437e..b4aa2a6 100644 --- a/home.mdx +++ b/home.mdx @@ -10,10 +10,6 @@ searchable: false
-

- Security at Scale -

-

Tap into the Future of Security Workflows

diff --git a/images/assets-dynamic-group-create.png b/images/assets-dynamic-group-create.png new file mode 100644 index 0000000..43a9b4c Binary files /dev/null and b/images/assets-dynamic-group-create.png differ diff --git a/mint.json b/mint.json index d7582c9..44ed078 100644 --- a/mint.json +++ b/mint.json @@ -392,6 +392,7 @@ "cloud/assets/screenshots", "cloud/assets/labeling", "cloud/assets/custom-labeling", + "cloud/assets/grouping", "cloud/assets/subsidiary" ] },