Find interesting Amazon S3 Buckets by watching certificate transparency logs.
Python examples for AWS (Amazon Web Services) using the AWS SDK for Python (Boto3): how to manage EC2 instances, Lambda functions, S3 buckets, and more.
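A minimal sketch of this kind of Boto3 usage, listing the S3 buckets visible to the caller. The client parameter is an assumption added here so the helper can be exercised without AWS credentials; an actual call against AWS requires configured credentials.

```python
def list_bucket_names(s3_client=None):
    """Return the names of all S3 buckets visible to the given client.

    If no client is passed, a default Boto3 S3 client is created
    (the actual API call then requires AWS credentials).
    """
    if s3_client is None:
        import boto3  # assumed available: pip install boto3
        s3_client = boto3.client("s3")
    # list_buckets() returns a dict with a "Buckets" list of
    # {"Name": ..., "CreationDate": ...} entries.
    return [b["Name"] for b in s3_client.list_buckets()["Buckets"]]
```

Injecting the client also makes the helper easy to test with a stub object in place of a live AWS connection.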
Amazon S3 Find and Forget is a solution for handling data erasure requests against data lakes stored on Amazon S3, for example pursuant to the European General Data Protection Regulation (GDPR).
Deploy Generative AI models from Amazon SageMaker JumpStart using AWS CDK
Managed ELKK stack implemented with the AWS CDK
This project delivers AWS CDK Python code to provision serverless infrastructure in AWS Cloud to run Open Source RStudio Server and Shiny.
A boilerplate solution for processing image and PDF documents for regulated industries, with lineage and pipeline operations metadata services.
🏰 A Python script for AWS S3 bucket enumeration.
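Bucket enumeration tools of this kind typically start by generating name permutations around a target keyword. A purely illustrative sketch (the affix and separator lists are assumptions, not taken from the script above); checking whether a candidate bucket actually exists would require HTTP requests to S3 and is omitted here.

```python
from itertools import product

def candidate_bucket_names(keyword, affixes=("dev", "prod", "backup"),
                           seps=("-", ".", "")):
    """Generate plausible S3 bucket-name permutations for a keyword.

    Combines the keyword with common affixes and separators in both
    orders, e.g. "mycorp-dev" and "dev-mycorp".
    """
    names = {keyword}
    for affix, sep in product(affixes, seps):
        names.add(f"{keyword}{sep}{affix}")  # e.g. mycorp-dev
        names.add(f"{affix}{sep}{keyword}")  # e.g. dev-mycorp
    return sorted(names)
```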
ComfyS3 seamlessly integrates with Amazon S3 in ComfyUI. This open-source project provides custom nodes for effortless loading and saving of images, videos, and checkpoint models directly from S3 buckets within the ComfyUI graph interface.
Multi-cloud infrastructure inventory and management tool, supporting AWS, Google Cloud, Azure, Oracle Cloud, Rackspace Cloud, Hetzner Cloud, Alibaba Cloud, e24cloud.com, Linode, Cloudflare, GoDaddy and Backblaze B2.
ACK is an E(T)L tool specialized in API data ingestion, accessible through a command-line interface. The application lets you easily extract, stream, and load data (with minimal transformation) from an API source to the destination of your choice.
CSV Manager for AWS Security Hub exports SecurityHub findings to a CSV file and allows you to mass-update SecurityHub findings by modifying that CSV file.
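The export half of such a tool can be sketched with the standard-library `csv` module. The field names below (`Id`, `Title`, `Severity`) are illustrative assumptions; the real solution exports many more Security Hub finding attributes, and fetching findings would use the Security Hub `GetFindings` API.

```python
import csv

def findings_to_csv(findings, path):
    """Write Security Hub findings (a list of dicts) to a CSV file.

    Columns here are a hypothetical subset; extra keys on each
    finding dict are silently ignored via extrasaction="ignore".
    """
    fieldnames = ["Id", "Title", "Severity"]
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=fieldnames,
                                extrasaction="ignore")
        writer.writeheader()
        writer.writerows(findings)
```

The resulting CSV can then be edited in a spreadsheet and read back with `csv.DictReader` to drive mass updates.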
A video transcriber and translator using Amazon Transcribe, Amazon Translate, and Amazon Polly.
Explore the power of cloud services for your machine learning and artificial intelligence projects
S3Zilla: an S3 file transfer client with a GUI built using Tkinter.
An end-to-end ETL data pipeline that leverages PySpark parallel processing to process about 25 million rows of data from a SaaS application. It uses Apache Airflow as the orchestration tool alongside various data warehouse technologies, with Apache Superset connected to the DWH to generate BI dashboards for weekly reports.