Main idea:
This code defines a Python project "quick-doc-py" that automates generating documentation for a codebase by scanning source files, creating prompts describing their content, querying GPT-based AI models to produce explanations, and compiling those into readable documents. It supports ignoring certain files or folders, customizing the language, and testing various AI providers.
Easy example of usage in English:
Suppose you have a Python project in folder /my_project and want to create documentation describing what each Python file does with example usage. You run this tool like this in the command line:
python quick_doc_py/manage.py --root_dir /my_project --ignore "*__init__.py,*__pycache__" --lang en
The tool will:
- Recursively read all files in /my_project except those ignored (like __init__.py)
- Generate prompts asking GPT to explain each file's main idea and usage example
- Collect GPT's answers and compile a full documentation Markdown file at /my_project/documentation.md
This helps you quickly get meaningful, human-readable docs for your code without writing them manually.
The .gitignore file specifies untracked files that Git should ignore in the repository. In this case, it contains a single entry:
/__pycache__
This means Git will ignore the __pycache__ directory at the root of the repository.
- __pycache__ directories are automatically generated by Python to store bytecode-compiled versions of modules for faster imports.
- Ignoring this folder helps keep the repository clean by excluding generated files that do not need to be version-controlled.
Just include this .gitignore file in your repository root. Git will then skip tracking any files or folders inside the __pycache__ directory, avoiding clutter and unnecessary commits of cache files.
The LICENSE file contains the text of the MIT License used by the project. Below is a brief summary and explanation of this license:
The MIT License is a permissive open-source license that allows free use, modification, distribution, and private or commercial use of the software with very few restrictions.
- Permission Granted: Anyone can use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the software freely.
- Conditions: The copyright notice and permission notice must be included in all copies or substantial portions of the software.
- Disclaimer: The software is provided "as is", without warranty of any kind, either express or implied, including warranties of merchantability, fitness for a particular purpose, or noninfringement.
- Liability: The authors or copyright holders are not liable for any claims, damages, or other liabilities arising from the software's use.
If you include this software in your project, you can freely modify and redistribute it, but you must keep this license notice in the redistributed versions.
Documentation for the pyproject.toml file:
This configuration file uses the Poetry tool format for managing the Python project named quick-doc-py. Poetry is a dependency management and packaging tool.
- name: quick-doc-py. The name of the project.
- version: 1.1.9. The current version of the project.
- description: "This code can make documentation for your project", a brief explanation of the project's purpose.
- authors: a list containing "Dmytro <sinica911@gmail.com>", the author of the project.
- readme: points to the README.md file, which typically contains the project's detailed description.
- packages: includes the Python package named quick_doc_py, which contains the project code.
- license: the project is released under the MIT License.
- repository: a URL to the GitHub repository, https://github.com/Drag-GameStudio/Quick-Documentation.
Defines command-line interface scripts exposed via Poetry:
- gen-doc: calls the function main inside the module quick_doc_py.manage.
- providers-test: calls the function main inside the module quick_doc_py.providers_test.
This allows running commands like poetry run gen-doc.
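Reconstructed from the description above, the scripts section of pyproject.toml is presumably shaped like this (a sketch, not copied from the actual file):

```toml
[tool.poetry.scripts]
gen-doc = "quick_doc_py.manage:main"
providers-test = "quick_doc_py.providers_test:main"
```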
Specifies the required dependencies for the project:
- python = "^3.10": Python version must be 3.10 or later.
- colorama = "^0.4.6": for ANSI color formatting in terminals.
- requests = "^2.32.3": for making HTTP requests.
- g4f = "^0.4.0.4": likely a package related to GPT or AI functionality.

The [build-system] section declares poetry-core as the build backend to support packaging and installation.
In summary, this file fully configures the quick-doc-py Python project with metadata, dependencies, script commands, and build system details suitable for modern Python packaging and project management using Poetry.
Documentation for the file quick_doc_py\manage.py:
This module provides functionality to generate documentation for a Python project by reading source code files, creating prompts for a GPT-based language model, and generating human-readable documentation. It manages the process of reading a codebase, generating both main idea summaries and detailed file-level documentation, and saving the combined results to a Markdown file.
Responsible for orchestrating the reading of the project folder, generating prompts for documentation, invoking the GPT model to generate text content, and saving the results.
- Parameters:
  - root_dir (str): the root directory of the project to document.
  - ignore_files (list, optional): list of filenames or patterns to ignore when reading files.
  - language (str, optional): language in which the documentation should be written (default: English).
Initializes the DocWriter with the project directory, files to ignore, and target language.
Reads the files in the project directory (ignoring specified files). It utilizes the Reader class from read_folder to load source code contents for further processing.
Returns a dictionary of {file_path: content} representing all read files, which will be used as prompts for documentation generation.
Generates a main summary and example usage for the entire codebase.
- Creates a main prompt describing the code.
- Invokes GPT to get a human-readable summary.
- Returns the generated main documentation text.
Asynchronously generates answers for a list of prompts using GPT.
- Parameters:
  - prompts (list): a list of prompts to send concurrently.
Returns a list of generated answers after awaiting all async tasks.
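The concurrent pattern described can be sketched with asyncio (the ask coroutine below is a placeholder for the real GPT call, which is not shown here):

```python
import asyncio

async def get_answers(prompts: list[str]) -> list[str]:
    """Send all prompts concurrently and gather the answers in order."""
    async def ask(prompt: str) -> str:
        await asyncio.sleep(0)  # placeholder for the network round-trip
        return f"answer to: {prompt}"

    tasks = [asyncio.create_task(ask(p)) for p in prompts]
    return await asyncio.gather(*tasks)

answers = asyncio.run(get_answers(["p1", "p2"]))
```

asyncio.gather preserves the order of the tasks, so answers line up with the prompts that produced them.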
Generates detailed documentation for each file based on specific prompts.
- Uses the PromptGenerator to obtain per-file prompts.
- Calls GPT to generate a detailed docstring or explanation for each file.
- Shows a progress bar during the process.
- Returns concatenated detailed documentation for all files.
Combines the main documentation and deep file-level documentation into the complete documentation text.
- Returns the full combined documentation as a string.
Saves the documentation text to a file.
- Parameters:
  - doc (str): the documentation content to save.
  - file_name (str): target file path for saving the documentation.
Command-line interface entry point.
- Parses command-line arguments:
  - --root_dir: root directory of the project.
  - --ignore: comma-separated list of files to ignore.
  - --lang: language choice for documentation.
- Creates a DocWriter with the parameters.
- Reads the folder, generates the full documentation, and saves it as documentation.md inside the root directory.
Run the script from the command line with appropriate arguments:
python manage.py --root_dir path/to/project --ignore "*__init__.py,*__pycache__" --lang en

This will generate comprehensive documentation for the specified project directory, ignoring the specified files, and save it as documentation.md. (The --ignore value is quoted so the shell does not glob-expand the wildcards.)
Dependencies used:
- read_folder module for reading and filtering source files.
- gpt and prompt_handler modules for prompt creation and GPT completions.
- py_progress package for displaying progress bars.
- argparse and asyncio for CLI and asynchronous tasks.
Documentation for the file quick_doc_py\providers_test.py:
This module tests the availability and responsiveness of different AI model providers supported by the g4f library. It includes utilities for running tests with timeouts, displaying progress bars, and formatting console text with colors. The main functionality revolves around detecting which providers successfully respond to a test prompt.
A decorator factory for running a function with a specified timeout. It runs the decorated function in a separate thread and returns None if the function does not finish within timeout seconds.
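A minimal sketch of such a thread-based timeout decorator (names are illustrative; the library's implementation may differ in detail):

```python
import threading
from functools import wraps

def timeout_control(timeout: float):
    """Decorator factory: run the wrapped function in a daemon thread
    and return None if it does not finish within `timeout` seconds."""
    def decorator(func):
        @wraps(func)
        def wrapper(*args, **kwargs):
            result = [None]

            def target():
                result[0] = func(*args, **kwargs)

            thread = threading.Thread(target=target, daemon=True)
            thread.start()
            thread.join(timeout)  # wait at most `timeout` seconds
            if thread.is_alive():
                return None  # timed out; the worker thread is abandoned
            return result[0]
        return wrapper
    return decorator

@timeout_control(timeout=1.0)
def slow_call():
    import time
    time.sleep(5)
    return "done"

@timeout_control(timeout=1.0)
def fast_call():
    return "done"
```

Note that the thread is not killed on timeout; it is simply abandoned, which is why a daemon thread is used so it cannot keep the process alive.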
Initializes colorama for console text color support and provides a method get_text to colorize and style strings for colored terminal output.
Displays a progress bar in the console. The progress bar shows the percentage completed and a colored visual bar updated as testing progresses.
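Stripped of the colorama styling, the core of such a bar can be sketched as (illustrative; the library's exact rendering differs):

```python
import sys

def show_progress(done: int, total: int, width: int = 20) -> str:
    """Render a simple textual progress bar (colors omitted for brevity)."""
    fraction = done / total
    filled = int(width * fraction)
    bar = "#" * filled + "-" * (width - filled)
    return f"[{bar}] {int(fraction * 100)}%"

# Redraw the bar in place as work progresses, using a carriage return
for step in range(1, 6):
    sys.stdout.write("\r" + show_progress(step, 5))
sys.stdout.write("\n")
```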
Main class that manages provider testing.
- __init__(self, model_name: str): initializes the tester with a specific model name (e.g., 'gpt-4').
- get_providers(self): retrieves the list of available providers from g4f.Provider dynamically and initializes a progress bar.
- test_provider(self, provider_name: str) -> tuple[bool, str]: tests a single provider by invoking test_provider_timeout with a 30-second timeout. Returns a tuple indicating whether the provider is working and the received response (or None).
- test_provider_timeout(self, provider): decorated with timeout_control to enforce a 30-second timeout on testing the provider. Sends a simple "Hello" chat message to determine if the provider responds.
- test_providers(self): iterates over all providers, running tests with progress feedback. Collects and returns a dictionary of providers that responded successfully.
Helper function to create a ProviderTest instance, retrieve providers, and run tests returning a dictionary of working providers.
CLI entry point:
- Parses the --name_model command-line argument specifying the model to test.
- Runs provider tests and prints the results.
Run as a script from the command line:
python providers_test.py --name_model gpt-4

This command tests all providers for the gpt-4 model and displays the providers that respond correctly.
Documentation for the file quick_doc_py\gpt\gpt.py:
This module provides a class GPT that serves as a client interface for interacting with various GPT-based language models via different providers. It simplifies making chat completions requests and managing retries in case of errors.
GPT(provider=PollinationsAI, gpt_model="gpt-4o")

- provider: the provider from the g4f.Provider module to use for API requests. Defaults to PollinationsAI.
- gpt_model: the name of the GPT model to use. Defaults to "gpt-4o".
Sends a prompt to the model and retrieves the generated answer.
- Parameters:
  - prompt (list of dict): a list of messages structured as dictionaries with "role" and "content" keys, representing the conversation history.
- Returns: a string containing the model's reply.
- Behavior: attempts to get a completion response from the client. If an exception occurs (e.g., network issues), waits 5 seconds and retries recursively.
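This wait-and-retry behavior can be sketched in isolation (a simplified stand-in; the real method calls the provider client, and `send` here is a hypothetical placeholder for that call):

```python
import time

def get_answer_with_retry(send, prompt, delay: float = 5.0, max_tries: int = 3):
    """Call send(prompt); on failure, sleep `delay` seconds and try again.
    `send` is a stand-in for the provider's chat-completion call."""
    for attempt in range(max_tries):
        try:
            return send(prompt)
        except Exception:
            if attempt == max_tries - 1:
                raise  # give up after max_tries attempts
            time.sleep(delay)
```

A bounded loop is used here instead of the unbounded recursion the source describes, since unbounded retries can hang forever on a persistent failure.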
Sends a single user prompt to the model without providing prior conversation history.
- Parameters:
  - prompt (str): a string representing the user's message.
- Returns: a string containing the model's reply, or None if the request fails.
from quick_doc_py.gpt.gpt import GPT
client = GPT()
response = client.get_answer([{"role": "user", "content": "Hello, how are you?"}])
print(response)

This module abstracts provider and model details and handles basic retry logic for making chat completion requests with GPT models.
Documentation for the file quick_doc_py\gpt\prompt_handler.py:
This module contains the PromptGenerator class, which is responsible for generating different types of prompts used to request documentation for code files. It allows you to create prompts tailored to either individual files or an overall summary for a set of code snippets.
PromptGenerator takes in raw code data and prepares prompts in a specified language to instruct a language model to generate documentation or code explanations.
PromptGenerator(prompt_data, language="en")

- prompt_data (dict or similar): a collection of code content keyed by file paths or identifiers.
- language (str, default "en"): the language in which the documentation should be written.
Generates a list of prompts, one per code file, asking for documentation in the specified language.
- Iterates through each file in prompt_data.
- For each file, creates a prompt string asking to write documentation for that file's code.

Returns: list[str], a list of prompt strings, one for each file.
Creates a single prompt that asks for the main idea and a simple usage example for the entire collection of code passed in through prompt_data.
Returns: str, a prompt string requesting an overview and example usage in the specified language.
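A self-contained sketch of what this prompt construction might look like (the actual wording used by the library will differ; these strings are illustrative):

```python
def get_prompts(prompt_data: dict[str, str], language: str = "en") -> list[str]:
    """One documentation prompt per file (illustrative wording)."""
    return [
        f"Write documentation in '{language}' for the file {path}:\n{code}"
        for path, code in prompt_data.items()
    ]

def get_main_prompt(prompt_data: dict[str, str], language: str = "en") -> str:
    """Single overview prompt for the whole codebase (illustrative wording)."""
    joined = "\n\n".join(f"# {path}\n{code}" for path, code in prompt_data.items())
    return (
        f"Describe the main idea of this code and give a simple usage "
        f"example, in '{language}':\n\n{joined}"
    )
```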
Documentation for the file quick_doc_py\reader\read_folder.py:
This module provides functionality to read files recursively from a directory, optionally ignoring specified files or folders. It collects the contents of the files into a dictionary keyed by file paths relative to the root directory.
FilesReader: handles the reading and storage of file contents.
- Attributes
  - files (dict): a dictionary storing the content of files. Keys are file names (relative paths), values are file contents.
- Methods
  - read_file(file_path: str) -> str | None: reads the content of a single file located at file_path. Returns the file content as a string if successful, or None if reading fails.
  - add_file(file_path: str, file_name: str) -> None: reads a file and adds its content to the files dictionary under the key file_name (typically the relative path). Does nothing if the file cannot be read.
  - get_all_prompt() -> dict: returns the current files dictionary containing all read files.
Reader: manages recursively reading files from a root directory, respecting ignore rules for files and folders.
- Constructor
  - __init__(path: str, ignore_files: list[str] = [])
    - path: root directory path to start reading files from.
    - ignore_files: list of filenames or folder patterns to ignore. Wildcards are supported when the ignore string starts with *, matching exact file or folder names.
- Methods
  - read_folder() -> FilesReader: walks the directory tree starting from self.path, skips directories and files based on the ignore rules, adds non-ignored files to a FilesReader instance, and returns it.
  - get_local_path(file_path: str) -> str: computes the path of file_path relative to the root self.path, ending with a backslash (\).
  - check_ignore_files(local_file_path: str, file_name: str) -> bool: checks if a file should be included by comparing against ignore patterns. Supports ignoring exact matches or filenames with a leading wildcard (*). Returns True if the file should not be ignored (i.e., included), False otherwise.
  - check_ignore_folders(local_folder_path: str) -> bool: checks if the folder path (relative to root) should be ignored based on ignore patterns. Handles wildcards for folder names and full folder paths. Returns True if the folder should not be ignored, False otherwise.
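The leading-* wildcard rule can be sketched in isolation (should_include is a hypothetical helper for illustration; the library's matching may differ in detail):

```python
def should_include(file_name: str, ignore_patterns: list[str]) -> bool:
    """Return True if `file_name` is NOT matched by any ignore pattern.
    A pattern starting with '*' matches the remainder by exact name
    anywhere in the tree; otherwise the pattern must equal the name."""
    for pattern in ignore_patterns:
        if pattern.startswith("*"):
            if file_name == pattern[1:]:
                return False
        elif file_name == pattern:
            return False
    return True
```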
from quick_doc_py.reader.read_folder import Reader
reader = Reader(
path=r"C:\MyProject",
ignore_files=["*__init__.py", "*__pycache__"]
)
files_reader = reader.read_folder()
all_files_content = files_reader.get_all_prompt()
# all_files_content is a dict mapping relative file paths to file contents

This module is designed to help with collecting all source files (except ignored ones) and reading their contents into a structure useful for further processing, such as generating documentation or analysis.