A minimal, function-driven AI coding assistant built for speed and simplicity.
Lambda is a lightweight, command-line AI coding agent driven by Google's Gemini models. Unlike massive IDE extensions or bloated web setups, Lambda lives right in your terminal. It uses a ReAct (Reasoning and Acting) loop to autonomously navigate your codebase, read and write files, run shell commands, and orchestrate complex coding tasks from a single prompt.
With a beautiful UI powered by Rich, Lambda makes pair programming with AI feel fast, natural, and highly contextual.
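The ReAct loop described above can be sketched roughly as follows. This is a minimal illustration with a stubbed model and a stubbed tool, not Lambda's actual implementation; `call_model` and the message shapes are assumptions for the sake of the example:

```python
# Illustrative ReAct-style loop: on each step the model either requests
# a tool call (act) or returns a final answer. The tool result is fed
# back into the conversation history (observe) before the next step.

def run_command(command: str) -> str:
    return f"ran: {command}"  # stand-in for real shell execution

TOOLS = {"run_command": run_command}

def call_model(history):
    # Stubbed model: ask for one tool call, then declare completion.
    if not any(turn["role"] == "tool" for turn in history):
        return {"tool": "run_command", "args": {"command": "ls"}}
    return {"answer": "done"}

def react_loop(prompt: str, max_steps: int = 5) -> str:
    history = [{"role": "user", "content": prompt}]
    for _ in range(max_steps):
        reply = call_model(history)
        if "answer" in reply:            # model decided it is finished
            return reply["answer"]
        tool = TOOLS[reply["tool"]]      # act: run the requested tool
        result = tool(**reply["args"])
        history.append({"role": "tool", "content": result})  # observe
    return "step limit reached"
```

In the real agent, `call_model` would be a Gemini function-calling request and the step limit guards against runaway loops.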
- Autonomous Tool Execution: Powered by Gemini's function calling, Lambda can `read_file`, `write_file`, `search_repo`, and `run_command` directly on your host machine to get things done.
- Parallel Sub-Agents: Delegate independent tasks (like extensive code analysis or small edits) to parallel background threads using `dispatch_subagent`.
- Agentic Scratchpad: Lambda uses a hidden local scratchpad (`.scratchpad/`) to draft implementation plans, think through complex logic, and maintain context across long execution chains.
- Stunning CLI Experience: Built with Rich, featuring distinct conversational bubbles, syntax highlighting, active token monitoring, and beautiful live spinners.
- Hot-Swappable Models: Instantly switch between different Gemini models mid-conversation using the `/models` slash command.
- Zero-Friction Configuration: A global configuration file (`~/.config/lambda-agent/config.env`) means you can run `lambda` in any directory on your machine instantly.
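Dispatching sub-agents to parallel background threads, as the feature list describes, can be sketched with the standard library. This is a hypothetical outline; Lambda's real `dispatch_subagent` wraps a lightweight background Gemini session rather than the stub shown here:

```python
from concurrent.futures import ThreadPoolExecutor

def subagent(task: str) -> str:
    # Stand-in for an isolated background Gemini session working on one task.
    return f"result for: {task}"

def dispatch_subagents(tasks):
    # Run independent tasks in parallel threads and collect results
    # in submission order.
    with ThreadPoolExecutor() as pool:
        return list(pool.map(subagent, tasks))
```

Because the tasks are independent (e.g., analyzing two unrelated modules), no coordination between threads is needed beyond collecting their results.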
Requires Python 3.10+. Install Lambda directly from PyPI:
```shell
pip install lambda-agent
```

For local development, clone the repository and run `pip install -e .` instead.
Spin up the agent from any directory simply by running:

```shell
lambda
```

On your first run, Lambda will securely prompt you for your Gemini API Key and model preference. This is saved to `~/.config/lambda-agent/config.env`.
Note: You can override global settings by placing a `.env` file in your specific project directory.
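The override behavior amounts to a simple precedence rule: load the global config first, then let project-local values win. A minimal sketch of that merge (illustrative only; the key names and parsing are assumptions, not Lambda's source):

```python
from pathlib import Path

def load_env_file(path: Path) -> dict:
    # Parse simple KEY=VALUE lines, skipping blanks and comments.
    env = {}
    if path.is_file():
        for line in path.read_text().splitlines():
            line = line.strip()
            if line and not line.startswith("#") and "=" in line:
                key, _, value = line.partition("=")
                env[key.strip()] = value.strip()
    return env

def effective_config(global_path: Path, project_dir: Path) -> dict:
    # Project-local .env entries override the global config file.
    config = load_env_file(global_path)
    config.update(load_env_file(project_dir / ".env"))
    return config
```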
During your interactive session, you can use the following commands:
- `/models` — Display a menu to hot-swap your active AI model (e.g., from Gemini Flash to Pro).
- `/config` — Quickly update your API key mid-session.
- `/help` — List all available slash commands.
- `exit` or `quit` — End the session and review your total token usage.
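A REPL like this typically routes input through a small command dispatcher: slash commands go to handlers, exit words end the session, and everything else goes to the agent. A hedged sketch (the handler names are illustrative, not Lambda's internals):

```python
def show_models() -> str:
    return "model menu"  # would render the hot-swap menu

def show_help() -> str:
    return "available commands: /models /config /help"

COMMANDS = {"/models": show_models, "/help": show_help}

def handle_input(line: str) -> str:
    # Route slash commands to handlers; plain text goes to the agent.
    line = line.strip()
    if line in ("exit", "quit"):
        return "session ended"
    handler = COMMANDS.get(line)
    if handler:
        return handler()
    return f"agent: {line}"
```

A dict-based dispatch keeps new slash commands a one-line addition.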
Lambda acts autonomously using an extensible set of Python tools:
- `search_repo(query, path)`: Deep file inspection ignoring `.git`, `.venv`, and binary caches.
- `run_command(command)`: Real shell execution (with 30s timeout guards).
- `dispatch_subagent(task)`: Parallelize isolated tasks via lightweight background Gemini sessions.
- `ask_user(question)`: Ability to explicitly pause and ask the human for clarification.
- `read_file`, `write_file`: Direct file manipulations.
- Scratchpad API: `read_scratchpad`, `write_scratchpad`, `append_scratchpad` for planning.
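The 30-second timeout guard on shell execution could look roughly like this, using the standard `subprocess` module (an illustrative sketch, not Lambda's actual source):

```python
import subprocess

def run_command(command: str, timeout: int = 30) -> str:
    # Execute a shell command, capturing stdout and stderr, and abort
    # if it exceeds the timeout so the agent can't hang on a stuck process.
    try:
        result = subprocess.run(
            command, shell=True, capture_output=True, text=True, timeout=timeout
        )
        return result.stdout + result.stderr
    except subprocess.TimeoutExpired:
        return f"error: command timed out after {timeout}s"
```

Returning the error as a string (rather than raising) lets the model observe the failure and adjust its next step.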
Contributions make the open-source community an amazing place to learn and build!
- Fork the Project
- Create your Feature Branch (`git checkout -b feature/AmazingFeature`)
- Commit your Changes (`git commit -m 'Add some AmazingFeature'`)
- Push to the Branch (`git push origin feature/AmazingFeature`)
- Open a Pull Request
Distributed under the Apache 2.0 License. See LICENSE for more information.
- Engine powered by Google GenAI SDK.
- Lambda icon by shohanur.rahman13 from Flaticon.

