This project focuses on optimizing demand response (DR) strategies for microgrids using a combination of Mixed-Integer Linear Programming (MILP) and Generalized Linear Model Upper Confidence Bound (GLM-UCB) Bandits. The objective is to maximize system operator revenue by incentivizing buildings to reduce energy demand during peak periods or when renewable generation is insufficient, thereby minimizing the use of costly diesel generation.
### MILP Optimization

- Objective: Find the optimal set of incentives and diesel generation levels to balance demand and supply while minimizing costs.
- Features:
  - Enforces power balance constraints.
  - Uses step and sigmoid functions to model incentive acceptance probabilities.
  - Serves as an upper bound for achievable revenue.
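The step and sigmoid acceptance models can be sketched as follows. The threshold, midpoint, and slope values are hypothetical illustration parameters, not the ones used in the project:

```python
import numpy as np

def step_acceptance(incentive, threshold=0.05):
    """Step model: a building accepts iff the incentive meets its threshold ($/kWh)."""
    return 1.0 if incentive >= threshold else 0.0

def sigmoid_acceptance(incentive, midpoint=0.05, slope=100.0):
    """Sigmoid model: acceptance probability rises smoothly around the midpoint."""
    return 1.0 / (1.0 + np.exp(-slope * (incentive - midpoint)))

# Expected demand reduction E[dD] = acceptance probability * offered reduction.
offered_reduction_kw = 10.0
expected_reduction = sigmoid_acceptance(0.06) * offered_reduction_kw
```

Because the sigmoid is nonlinear in the incentive, the MILP works with a discrete set of incentive levels whose acceptance probabilities can be evaluated offline.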
### GLM-UCB Bandits

- Objective: Learn optimal incentive levels dynamically over time, without prior knowledge of building behaviors.
- Features:
  - Incorporates contextual information (e.g., energy demand, price).
  - Adjusts incentives in a multi-bandit framework, treating each building as an individual bandit.
  - Updates Q-values and confidence bounds based on observed rewards.

### Heuristic Baseline

- Provides a naive approach that greedily offers incentives to reduce demand during supply shortfalls.
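A minimal sketch of such a greedy baseline, assuming a fixed per-building reduction fraction; the function name and all parameter values are hypothetical:

```python
def greedy_incentives(demands, renewable_gen, incentive, reduction_fraction=0.2):
    """Greedily offer a fixed incentive to each building, largest demand first,
    until the expected reductions cover the supply shortfall."""
    shortfall = sum(demands) - renewable_gen
    offers = {}
    # Visit buildings in order of decreasing demand so the biggest expected
    # reductions are claimed first.
    for i in sorted(range(len(demands)), key=lambda j: -demands[j]):
        if shortfall <= 0:
            break
        offers[i] = incentive
        shortfall -= reduction_fraction * demands[i]
    return offers, max(shortfall, 0.0)
```

For example, with demands of 10, 5, and 2 kWh against 14 kWh of renewables, the 3 kWh shortfall is covered after offering incentives to the two largest buildings.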
### Dataset

The dataset includes:
- Building Load: Simulated profiles from the NREL ResStock dataset.
- Renewable Generation: Simulated using the NREL System Advisor Model (SAM) tool.
- Pricing Data: NYISO Real-Time Market (RTM) prices for 2019.
### Requirements

- Python 3.8 or higher
- Required Python libraries:
  - numpy
  - pandas
  - cvxpy
  - matplotlib
  - statsmodels
Install the dependencies using:
```bash
pip install numpy pandas cvxpy matplotlib statsmodels
```

| File Name | Description |
|---|---|
| `data_new.csv` | Dataset with energy demand, renewable generation, and prices. |
| `bandits_initial_half.ipynb` | Implementation of GLM-UCB Bandits for the first half of the year. |
| `bandits_later_half.ipynb` | Continuation of the Bandits implementation for the latter half of the year. |
| `milp_experiments_initial_half.ipynb` | MILP optimization for the first half of the year. |
| `milp_experiments_later_half.ipynb` | MILP optimization for the latter half of the year. |
| `Output_initial_half_MILP.csv` | MILP results for the first half of the year. |
| `Output_later_half_MILP.csv` | MILP results for the latter half of the year. |
| `Demand_Response_Project_Report.pdf` | Report detailing the methodologies and approaches. |
### MILP Formulation

- Objective Function:

  $\text{Maximize} \quad \sum_{i=1}^n \left[P_t (D_{it} - E[\Delta D_{it}]) - I_{it} E[\Delta D_{it}]\right] - C B_t$

- Constraints:
  - Power balance equation:

    $\sum_{i=1}^n (D_{it} - E[\Delta D_{it}]) = G_t + B_t$

  - Incentive limits:

    $0 \leq I_{it} \leq I_{\text{max}}^t$
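This formulation can be sketched end-to-end for a single time step. The project's notebooks use cvxpy; for a self-contained illustration, this sketch uses SciPy's MILP interface instead, with hypothetical data and a discretized incentive set whose expected reductions are pre-computed from a sigmoid acceptance model:

```python
import numpy as np
from scipy.optimize import Bounds, LinearConstraint, milp

# Illustrative single-period data (all values hypothetical).
P_t, C, G_t = 0.10, 0.30, 15.0                      # price, diesel cost ($/kWh), renewables (kWh)
D = np.array([12.0, 8.0, 5.0])                      # building demands D_it (kWh)
levels = np.array([0.0, 0.02, 0.04, 0.06])          # discrete incentive levels I ($/kWh)
accept = 1 / (1 + np.exp(-100 * (levels - 0.03)))   # sigmoid acceptance per level
exp_red = np.outer(D, accept) * 0.2                 # E[dD_il] per (building, level)
n, L = exp_red.shape

# Decision vector z = [x_00, ..., x_(n-1)(L-1), B]:
# x[i,l] = 1 if level l is offered to building i (binary), B = diesel generation.
# Maximizing revenue == minimizing forgone sales + incentive payouts + diesel cost.
c = np.append(((P_t + levels[None, :]) * exp_red).ravel(), C)
integrality = np.append(np.ones(n * L), 0)          # x integer, B continuous

# Exactly one incentive level per building.
one_hot = np.zeros((n, n * L + 1))
for i in range(n):
    one_hot[i, i * L:(i + 1) * L] = 1
# Power balance: sum_i (D_i - E[dD_i]) = G_t + B  <=>  sum(x * exp_red) + B = sum(D) - G_t.
balance = np.append(exp_red.ravel(), 1.0)[None, :]

res = milp(
    c=c,
    integrality=integrality,
    bounds=Bounds(np.zeros(n * L + 1), np.append(np.ones(n * L), np.inf)),
    constraints=[
        LinearConstraint(one_hot, 1, 1),
        LinearConstraint(balance, D.sum() - G_t, D.sum() - G_t),
    ],
)
revenue = P_t * D.sum() - res.fun  # recover the maximized objective value
```

With these toy numbers, diesel (at $0.30/kWh) is dearer than even the highest incentive plus the forgone sale, so the solver offers every building the top incentive level.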
### GLM-UCB Formulation

- Reward Calculation:

  $R_k = P_t(D_{it} - \theta_k \Delta D_{it}) - \theta_k I_{it} \Delta D_{it}$

- UCB Calculation:

  $UCB_k = Q_k + U_k - \text{Penalty Term}$
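The update loop can be sketched with a simplified per-building UCB agent over discrete incentive levels. $Q_k$ and $U_k$ follow the notation above, while the GLM link function and the penalty term of the full method are omitted for brevity; the class name and constant are hypothetical:

```python
import numpy as np

class IncentiveUCB:
    """Simplified per-building UCB over discrete incentive levels:
    Q_k is the running mean reward of arm k, U_k its confidence radius."""

    def __init__(self, n_arms, c=2.0):
        self.counts = np.zeros(n_arms)
        self.Q = np.zeros(n_arms)   # Q_k: mean observed reward per arm
        self.c = c                  # exploration constant (illustrative)
        self.t = 0

    def select(self):
        """Pick the arm with the highest upper confidence bound Q_k + U_k."""
        self.t += 1
        if (self.counts == 0).any():        # play each arm once first
            return int(np.argmin(self.counts))
        U = np.sqrt(self.c * np.log(self.t) / self.counts)  # U_k
        return int(np.argmax(self.Q + U))

    def update(self, arm, reward):
        """Incremental mean update of Q_k from the observed reward R_k."""
        self.counts[arm] += 1
        self.Q[arm] += (reward - self.Q[arm]) / self.counts[arm]
```

In the project's setting, the observed reward fed to `update` would be $R_k$ computed from the realized acceptance $\theta_k$, and one such agent is maintained per building.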
### Results

- No-Incentive Scenario:
  - Revenue: $0.42M over 6 months.
  - High diesel generation costs due to the lack of demand reduction.
- Heuristic Baseline:
  - 16.20% revenue improvement over the no-incentive scenario.
- Bandits:
  - 2% improvement over the heuristic baseline.
  - Slightly disadvantaged by the discrete incentive levels, but learns building parameters dynamically.
- MILP:
  - Serves as an upper bound on achievable revenue.
### Visualizations

- Power Balance Over Time: Stacked bar plot of renewable and diesel generation with line plots for total and reduced demand.
- Revenue Comparisons: Line plots comparing revenues across the different methods.
- Decision Validation: Plots validating the power balance constraints for MILP and Bandits.
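The power-balance plot can be reproduced with a sketch like the following, using toy 24-hour profiles rather than the project data:

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # headless backend for script use
import matplotlib.pyplot as plt

hours = np.arange(24)
renewable = np.clip(10 * np.sin(np.pi * hours / 24), 0, None)   # toy solar profile (kW)
total_demand = 8 + 3 * np.sin(np.pi * (hours - 6) / 24)         # toy load profile (kW)
reduced_demand = total_demand * 0.85                            # demand after DR incentives
diesel = np.clip(reduced_demand - renewable, 0, None)           # diesel fills the gap

fig, ax = plt.subplots()
ax.bar(hours, renewable, label="Renewable generation")
ax.bar(hours, diesel, bottom=renewable, label="Diesel generation")
ax.plot(hours, total_demand, "k--", label="Total demand")
ax.plot(hours, reduced_demand, "k-", label="Reduced demand")
ax.set_xlabel("Hour")
ax.set_ylabel("Power (kW)")
ax.legend()
fig.savefig("power_balance.png")
```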
### Usage

- Clone the repository and navigate to the project directory.
- Open the respective Jupyter notebooks:
  - For Bandits: `bandits_initial_half.ipynb`, `bandits_later_half.ipynb`
  - For MILP: `milp_experiments_initial_half.ipynb`, `milp_experiments_later_half.ipynb`
- Run the cells sequentially to reproduce the results and visualizations.

