A comprehensive sustainability platform for AI development with environmental tracking, multi-user support, and cloud deployment capabilities.
NeuroGreen is an environmental monitoring platform that tracks carbon emissions, energy consumption, and water usage for AI workloads. It provides real-time monitoring, interactive visualizations, multi-user collaboration, and cloud deployment capabilities.
- ✅ Environmental Tracking - Carbon emissions, energy consumption, water usage
- ✅ Interactive Visualizations - Real-time graphs and charts
- ✅ AI-Powered Recommendations - Intelligent analysis and optimization suggestions
- ✅ Multi-User Platform - User authentication, organizations, team collaboration
- ✅ Cloud Deployment Ready - Docker, Heroku, AWS, Google Cloud, Azure
- ✅ Real-time Monitoring - Live tracking during AI workloads
- ✅ Regional Analysis - Environmental impact by geographic region
- ✅ Export & Notifications - CSV, PDF, Excel exports with email/Slack integration
- Python 3.8+
- Docker and Docker Compose (for multi-user platform)
- PostgreSQL (for multi-user platform)
- Redis (for multi-user platform)
- Clone the repository
  ```bash
  git clone <repository-url>
  cd GreenAI
  ```
- Create a virtual environment
  ```bash
  python3 -m venv venv
  source venv/bin/activate  # On Windows: venv\Scripts\activate
  ```
- Install dependencies
  ```bash
  pip install -r requirements.txt
  ```
- Configure environment
  ```bash
  cp env.example .env
  # Edit .env with your API keys and configuration
  ```

Run the app:

```bash
streamlit run app.py
```

Access at: http://localhost:8501
```bash
# Start with Docker Compose
docker-compose up -d

# Or run locally (requires PostgreSQL and Redis)
streamlit run app.py
```

Access at: http://localhost:8501
- Carbon Emissions - Real-time CO₂ monitoring with regional carbon intensity factors
- Energy Consumption - Hardware-specific power models and utilization-based calculations
- Water Usage - Regional water intensity factors and cloud-provider-specific data
- Interactive Visualizations - Real-time charts and graphs with tabbed interface
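The tracking metrics above are related by simple factor models: energy follows from power draw and utilization, and regional intensity factors convert energy into CO₂ and water figures. A minimal illustrative sketch of those relationships (all numeric factors below are made-up examples, not NeuroGreen's actual data):

```python
# Illustrative sketch of the metric relationships described above.
# Power draw and regional factors are hypothetical example values.

def estimate_footprint(power_watts: float, utilization: float,
                       hours: float, carbon_intensity: float,
                       water_intensity: float) -> dict:
    """Estimate energy (kWh), CO2 (kg), and water (L) for a workload."""
    energy_kwh = power_watts * utilization * hours / 1000.0
    return {
        "energy_kwh": energy_kwh,
        "co2_kg": energy_kwh * carbon_intensity,  # kg CO2 per kWh of grid mix
        "water_l": energy_kwh * water_intensity,  # litres per kWh in the region
    }

# Example: a 300 W GPU at 90% utilization for 2 hours on a grid with
# 0.4 kg CO2/kWh and 1.8 L/kWh (illustrative factors).
metrics = estimate_footprint(300, 0.9, 2, 0.4, 1.8)
print(metrics)
```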
- Intelligent Analysis - Behavior pattern recognition and efficiency optimization
- LLM Chat Interface - Natural language queries about environmental optimization
- Smart Recommendations - Prioritized suggestions with impact estimation
- User Authentication - Email/password and OAuth (Google, GitHub)
- Team Collaboration - Organization management with role-based access control
- Project Management - Shared workspaces and project history
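The role-based access control mentioned above can be sketched as a role-to-permission lookup; the role and permission names here are hypothetical illustrations, not NeuroGreen's actual schema:

```python
# Hypothetical sketch of role-based access control for organizations.
# Role and permission names are illustrative only.

ROLE_PERMISSIONS = {
    "owner":  {"read", "write", "manage_members", "delete_org"},
    "admin":  {"read", "write", "manage_members"},
    "member": {"read", "write"},
    "viewer": {"read"},
}

def can(role: str, permission: str) -> bool:
    """Return True if the given role grants the given permission."""
    return permission in ROLE_PERMISSIONS.get(role, set())

print(can("member", "write"))           # True
print(can("viewer", "manage_members"))  # False
```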
```
┌─────────────────┐    ┌──────────────────┐    ┌─────────────────┐
│  AI Workloads   │───▶│ Enhanced Monitor │───▶│   Multi-User    │
│  (PyTorch/TF)   │    │ (Carbon+Energy+  │    │    Platform     │
│                 │    │  Water Tracking) │    │  (Auth+Teams)   │
└─────────────────┘    └──────────────────┘    └─────────────────┘
                                │                        │
                                ▼                        ▼
                       ┌──────────────────┐    ┌─────────────────┐
                       │ Regional Factors │    │   Interactive   │
                       │ (Water Intensity │    │ Visualizations  │
                       │   Carbon Grid)   │    │  (Tabs+Charts)  │
                       └──────────────────┘    └─────────────────┘
```
```
GreenAI/
├── app.py                 # Main Streamlit application
├── src/                   # Core source code
│   ├── monitoring/        # Carbon tracking modules
│   ├── api/               # API integrations
│   ├── recommendations/   # AI recommendation engine
│   ├── analytics/         # Analytics and comparison
│   └── cloud/             # Cloud provider integrations
├── config/                # Configuration files
│   └── settings.py        # Application settings
├── docs/                  # Documentation
├── requirements.txt       # Python dependencies
├── setup.py               # Package setup
├── Dockerfile             # Docker configuration
├── docker-compose.yml     # Docker Compose setup
├── init.sql               # Database schema
└── README.md              # This file
```
Create a `.env` file from `env.example`:

```bash
# API Keys
ELECTRICITY_MAP_API_KEY=your_key_here
WATT_TIME_API_KEY=your_key_here
OPENAI_API_KEY=your_key_here

# Database
DATABASE_URL=postgresql://user:password@host:port/database

# Cloud Providers
AWS_REGION=us-east-1
GOOGLE_CLOUD_PROJECT=your-project-id

# Notifications
SLACK_WEBHOOK_URL=your_webhook_url
EMAIL_NOTIFICATIONS=false
```

Deploy with Docker Compose:

```bash
docker-compose up -d
```

Or build the image manually:

```bash
docker build -t greenai .
```
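These settings can be read from the environment in application code; below is a minimal standard-library sketch (the variable names come from the `.env` example above, and the real app presumably loads them via `config/settings.py`):

```python
# Sketch of reading the .env-style configuration from the environment.
# Variable names match the env.example above; defaults are illustrative.
import os

def get_config() -> dict:
    """Collect configuration values, applying defaults when unset."""
    return {
        "database_url": os.getenv("DATABASE_URL", ""),
        "aws_region": os.getenv("AWS_REGION", "us-east-1"),
        "email_notifications":
            os.getenv("EMAIL_NOTIFICATIONS", "false").lower() == "true",
    }

cfg = get_config()
```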
```bash
docker run -p 8501:8501 greenai
```

Basic usage:

```python
from src.monitoring.carbon_tracker import CarbonTracker

tracker = CarbonTracker("My AI Project")
tracker.start_tracking("training", "pytorch")

# Your ML code here...

metrics = tracker.stop_tracking()
print(f"CO₂: {metrics.carbon_emissions:.6f} kg")
```

Advanced usage with region, cloud provider, and hardware specs:

```python
from src.monitoring.carbon_tracker import CarbonTracker

tracker = CarbonTracker(
    project_name="My Project",
    region="us-west-2",
    cloud_provider="aws"
)

hardware_specs = {
    'cpu_type': 'apple_m2',
    'gpu_type': 'rtx_4090',
    'cpu_utilization': 0.7,
    'gpu_utilization': 0.9
}

session_id = tracker.start_tracking(
    workload_type="training",
    framework="pytorch",
    hardware_specs=hardware_specs
)

# Your ML code here...

metrics = tracker.stop_tracking()
print(f"Energy: {metrics.energy_consumed:.6f} kWh")
print(f"Water: {metrics.water_usage:.2f} L")
print(f"CO₂: {metrics.carbon_emissions:.6f} kg")
```

Deploy to Heroku:

```bash
heroku create greenai-app
heroku addons:create heroku-postgresql:hobby-dev
heroku addons:create heroku-redis:hobby-dev
git push heroku main
```

Deploy to AWS ECS:

```bash
aws ecs create-cluster --cluster-name greenai-cluster
aws ecs create-service --cluster greenai-cluster --service-name greenai-service
```

Deploy to Google Cloud Run:

```bash
gcloud builds submit --tag gcr.io/your-project/greenai
gcloud run deploy greenai --image gcr.io/your-project/greenai
```

Testing:

```bash
# Run tests
pytest

# With coverage
pytest --cov=src tests/
```

Contributing:

- Fork the repository
- Create a feature branch (`git checkout -b feature/amazing-feature`)
- Commit your changes (`git commit -m 'Add amazing feature'`)
- Push to the branch (`git push origin feature/amazing-feature`)
- Open a Pull Request
MIT License - see LICENSE file for details.
NeuroGreen enables:
- Accurate tracking of carbon, energy, and water footprints
- Regional optimization for environmental efficiency
- Hardware efficiency analysis and recommendations
- Comprehensive reporting for sustainability goals
🌱 Built with ❤️ for the environment • Making AI Development Sustainable