diff --git a/.typos.toml b/.typos.toml index abd36cef..60cccd8d 100644 --- a/.typos.toml +++ b/.typos.toml @@ -50,6 +50,12 @@ preprocessor = "preprocessor" logits = "logits" analyse = "analyse" Labour = "Labour" +# Forecasting and statistical terms +MAPE = "MAPE" +mape = "mape" +yhat = "yhat" +yhat_lower = "yhat_lower" +yhat_upper = "yhat_upper" [default] locale = "en-us" diff --git a/README.md b/README.md index c7355648..5b0b87b8 100644 --- a/README.md +++ b/README.md @@ -71,6 +71,7 @@ etc. | [Huggingface to Sagemaker](huggingface-sagemaker) | 🚀 MLOps | 🔄 CI/CD, 📦 Deployment | mlflow, sagemaker, kubeflow | | [Databricks Production QA](databricks-production-qa-demo) | 🚀 MLOps | 📊 Monitoring, 🔍 Quality Assurance | databricks, evidently, shap | | [Eurorate Predictor](eurorate-predictor) | 📊 Data | ⏱️ Time Series, 🔄 ETL | airflow, bigquery, xgboost | +| [RetailForecast](retail-forecast) | 📊 Data | ⏱️ Time Series, 📈 Forecasting, 🔮 Multi-Model | prophet, zenml, pandas | # 💻 System Requirements diff --git a/retail-forecast/Dockerfile.codespace b/retail-forecast/Dockerfile.codespace new file mode 100644 index 00000000..df6a27a1 --- /dev/null +++ b/retail-forecast/Dockerfile.codespace @@ -0,0 +1,42 @@ +# Sandbox base image +FROM zenmldocker/zenml-sandbox:latest + +# Install uv from official distroless image +COPY --from=ghcr.io/astral-sh/uv:latest /uv /uvx /bin/ + +# Set uv environment variables for optimization +ENV UV_SYSTEM_PYTHON=1 +ENV UV_COMPILE_BYTECODE=1 + +# Project metadata +LABEL project_name="retail-forecast" +LABEL project_version="0.1.0" + +# Install dependencies with uv and cache optimization +RUN --mount=type=cache,target=/root/.cache/uv \ + uv pip install --system \ + "zenml>=0.82.0" \ + "numpy>=1.20.0" \ + "pandas>=1.3.0" \ + "matplotlib>=3.5.0" \ + "prophet>=1.1.0" \ + "typing_extensions>=4.0.0" \ + "pyarrow" \ + "fastparquet" \ + "plotly" \ + "notebook" + +# Set workspace directory +WORKDIR /workspace + +# Clone only the project directory and reorganize +RUN git clone --depth 1 https://github.com/zenml-io/zenml-projects.git /tmp/zenml-projects && \ + cp -r /tmp/zenml-projects/retail-forecast/* /workspace/ && \ + rm -rf /tmp/zenml-projects + +# VSCode settings +RUN mkdir -p /workspace/.vscode && \ + printf '{\n "workbench.colorTheme": "Default Dark Modern"\n}' > /workspace/.vscode/settings.json + +# Create assets directory for visualizations +RUN mkdir -p /workspace/assets \ No newline at end of file diff --git a/retail-forecast/README.md b/retail-forecast/README.md new file mode 100644 index 00000000..3915a47f --- /dev/null +++ b/retail-forecast/README.md @@ -0,0 +1,207 @@ +# RetailForecast: Production-Ready Sales Forecasting with ZenML and Prophet + +A robust MLOps pipeline for retail sales forecasting designed for retail data scientists and ML engineers. + +## 📊 Business Context + +In retail, accurate demand forecasting is critical for optimizing inventory, staff scheduling, and financial planning. This project provides a production-ready sales forecasting solution that can be immediately deployed in retail environments to: + +- Predict future sales volumes across multiple stores and products +- Capture seasonal patterns and trends in customer purchasing behavior +- Support data-driven inventory management and purchasing decisions +- Provide actionable insights through visual forecasting dashboards + +
+<div align="center">
+  <img src="assets/forecast_dashboard.png" alt="Forecast Dashboard" width="800">
+  <br>
+  <em>HTML dashboard visualization showing forecasts with uncertainty intervals</em>
+</div>
+ +## 🔍 Data Overview + +The pipeline works with time-series retail sales data structured as follows: + +| Field | Description | +|-------|-------------| +| date | Date of sales record (YYYY-MM-DD) | +| store | Store identifier (e.g., Store_1, Store_2) | +| item | Product identifier (e.g., Item_A, Item_B) | +| sales | Number of units sold | +| price | Unit price | + +The system automatically handles: +- Multiple store/item combinations as separate time series +- Train/test splitting for model validation +- Proper data transformations required by Prophet +- Missing value imputation and outlier detection + +
+<div align="center">
+  <img src="assets/data_visualization.gif" alt="Data Visualization" width="800">
+  <br>
+  <em>Interactive visualization of historical sales patterns</em>
+</div>
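+
+To make the preprocessing concrete, here is a minimal sketch of how the raw schema above maps onto the per-series frames Prophet expects. It assumes only pandas and the CSV layout from the table; the dictionary keys and the 80/20 split shown are illustrative, not the pipeline's actual defaults:
+
+```python
+import pandas as pd
+
+# Load the raw sales data (schema as in the table above)
+sales = pd.read_csv("data/sales.csv", parse_dates=["date"])
+
+# Prophet expects one frame per series with a `ds` date column and a `y` target,
+# so every store-item combination becomes its own time series.
+series = {
+    (store, item): group.rename(columns={"date": "ds", "sales": "y"})[
+        ["ds", "y", "price"]
+    ].sort_values("ds")
+    for (store, item), group in sales.groupby(["store", "item"])
+}
+
+# Chronological train/test split for one series (the ratio is configurable)
+df = series[("Store_1", "Item_A")]
+cutoff = int(len(df) * 0.8)
+train, test = df.iloc[:cutoff], df.iloc[cutoff:]
+```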
+ +## 🚀 Pipeline Architecture + +The project includes two primary pipelines: + +### 1. Training Pipeline + +The training pipeline performs the following steps: + +1. **Data Loading**: Imports historical sales data from CSV files +2. **Data Preprocessing**: + - Transforms data into Prophet-compatible format + - Creates separate time series for each store-item combination + - Performs train/test splitting based on configurable ratio +3. **Model Training**: + - Trains multiple Facebook Prophet models simultaneously, one for each store-item combination + - Configures seasonality parameters based on domain knowledge + - Handles price changes as regressors when available +4. **Model Evaluation**: + - Calculates MAPE, RMSE, and MAE metrics on test data + - Generates visual diagnostics for model performance +5. **Forecasting**: + - Produces forecasts with uncertainty intervals + - Creates interactive HTML visualizations + +
+<div align="center">
+  <img src="assets/training_pipeline.png" alt="Training Pipeline DAG" width="800">
+  <br>
+  <em>ZenML visualization of the training pipeline DAG</em>
+</div>
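+
+The metrics in step 4 are standard; for reference, a sketch of how they can be computed on the held-out test set (plain NumPy; the function and argument names are illustrative):
+
+```python
+import numpy as np
+
+
+def forecast_metrics(y_true: np.ndarray, y_pred: np.ndarray) -> dict:
+    """MAPE, RMSE and MAE on held-out test data."""
+    errors = y_true - y_pred
+    return {
+        # Mean Absolute Percentage Error, in percent (assumes no zero actuals)
+        "mape": float(np.mean(np.abs(errors / y_true)) * 100),
+        # Root Mean Squared Error
+        "rmse": float(np.sqrt(np.mean(errors**2))),
+        # Mean Absolute Error
+        "mae": float(np.mean(np.abs(errors))),
+    }
+```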
+ +### 2. Inference Pipeline + +The inference pipeline enables fast forecasting with pre-trained models: + +1. **Data Loading**: Imports the most recent sales data +2. **Data Preprocessing**: Transforms data into Prophet format +3. **Forecasting**: Generates predictions using production models +4. **Visualization**: Creates interactive dashboards with forecasts + +
+<div align="center">
+  <img src="assets/inference_pipeline.png" alt="Inference Pipeline DAG" width="800">
+  <br>
+  <em>ZenML visualization of the inference pipeline DAG</em>
+</div>
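+
+Step 3 relies on Prophet's standard forecasting API. A self-contained sketch of fitting one series and producing a 30-day forecast with uncertainty bounds (the store/item choice and horizon are illustrative; the `yhat` columns are Prophet's own output schema):
+
+```python
+import pandas as pd
+from prophet import Prophet
+
+# Fit a single store-item series in Prophet's ds/y format
+sales = pd.read_csv("data/sales.csv", parse_dates=["date"])
+one = sales[(sales["store"] == "Store_1") & (sales["item"] == "Item_A")]
+m = Prophet(weekly_seasonality=True)
+m.fit(one.rename(columns={"date": "ds", "sales": "y"})[["ds", "y"]])
+
+# Forecast 30 days past the end of the training data; yhat_lower and
+# yhat_upper bound the uncertainty interval used for inventory planning.
+future = m.make_future_dataframe(periods=30)
+forecast = m.predict(future)
+print(forecast[["ds", "yhat", "yhat_lower", "yhat_upper"]].tail(30))
+```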
+ +## 📈 Model Details + +The forecasting solution uses [Facebook Prophet](https://github.com/facebook/prophet), chosen specifically for its combination of accuracy and simplicity in retail forecasting scenarios: + +- **Multiple Models Approach**: Rather than a one-size-fits-all model, we generate individual Prophet models for each store-item combination, allowing forecasts that capture the unique patterns of each product in each location +- **Components**: Prophet automatically decomposes time series into trend, seasonality, and holidays +- **Seasonality**: Captures weekly, monthly, and yearly patterns in sales data +- **Special Events**: Handles holidays and promotions as custom seasonality effects +- **Uncertainty Estimation**: Provides prediction intervals for better inventory planning +- **Extensibility**: Supports additional regressors like price and marketing spend + +Prophet was selected for this solution because it excels at: +- Handling missing data and outliers common in retail sales data +- Automatically detecting seasonal patterns without extensive feature engineering +- Providing intuitive parameters that business users can understand +- Scaling to thousands of individual time series efficiently + + +## 💻 Technical Implementation + +The project leverages ZenML's MLOps framework to provide: + +- **Model Versioning**: Track all model versions and their performance metrics +- **Reproducibility**: All experiments are fully reproducible with tracked parameters +- **Pipeline Caching**: Speed up experimentation with intelligent caching of pipeline steps +- **Artifact Tracking**: All data and models are properly versioned and stored +- **Deployment Ready**: Models can be directly deployed to production environments + +A key innovation in this project is the custom ProphetMaterializer that enables serialization/deserialization of Prophet models for ZenML artifact storage. + +
+<div align="center">
+  <img src="assets/zenml_dashboard.png" alt="ZenML Dashboard" width="800">
+  <br>
+  <em>ZenML model registry tracking model versions and performance</em>
+</div>
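+
+As a sketch of how a training step can combine the per-series modeling described above with this materializer (the step body and names are illustrative, not the project's actual trainer; `output_materializers` is ZenML's standard hook for custom materializers):
+
+```python
+import pandas as pd
+from prophet import Prophet
+from zenml import step
+
+from materializers import ProphetMaterializer
+
+
+@step(output_materializers=ProphetMaterializer)
+def train_models(train_data: pd.DataFrame) -> dict:
+    """Train one Prophet model per store-item series; the returned dict
+    is serialized through ProphetMaterializer."""
+    models = {}
+    for (store, item), group in train_data.groupby(["store", "item"]):
+        df = group.rename(columns={"date": "ds", "sales": "y"})[["ds", "y", "price"]]
+        m = Prophet(weekly_seasonality=True)
+        m.add_regressor("price")  # price acts as an extra regressor
+        m.fit(df)  # predicting later requires future price values as well
+        models[f"{store}-{item}"] = m
+    return models
+```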
+ +## 🛠️ Getting Started + +### Prerequisites + +- Python 3.9+ +- ZenML installed and configured + +### Installation + +```bash +# Clone the repository +git clone https://github.com/zenml-io/zenml-projects.git +cd zenml-projects/retail-forecast + +# Install dependencies +pip install -r requirements.txt + +# Initialize ZenML (if needed) +zenml init +``` + +### Running the Pipelines + +To train models and generate forecasts: + +```bash +# Run the training pipeline (default) +python run.py + +# Run with custom parameters +python run.py --forecast-periods 60 --test-size 0.3 --weekly-seasonality True +``` + +To make predictions using existing models: + +```bash +# Run the inference pipeline +python run.py --inference +``` + +### Viewing Results + +Start the ZenML dashboard: + +```bash +zenml login +``` + +Navigate to the dashboard to explore: +- Pipeline runs and their status +- Model performance metrics +- Interactive forecast visualizations +- Version history of all models + +## 🔄 Integration with Retail Systems + +This solution can be integrated with existing retail systems: + +- **Inventory Management**: Connect forecasts to automatic reordering systems +- **ERP Systems**: Feed forecasts into financial planning modules +- **BI Dashboards**: Export forecasts to Tableau, Power BI, or similar tools +- **Supply Chain**: Share forecasts with suppliers via API endpoints + +## 📊 Example Use Case: Store-Level Demand Planning + +A retail chain with 50 stores and 500 products uses this pipeline to: + +1. Train models on 2 years of historical sales data +2. Generate daily forecasts for the next 30 days for each store-item combination +3. Aggregate forecasts to support central purchasing decisions +4. Update models weekly with new sales data + +The result: 15% reduction in stockouts and 20% decrease in excess inventory. + + +## 📄 License + +This project is licensed under the Apache License 2.0. 
diff --git a/retail-forecast/assets/data_visualization.gif b/retail-forecast/assets/data_visualization.gif new file mode 100644 index 00000000..6f7045ba Binary files /dev/null and b/retail-forecast/assets/data_visualization.gif differ diff --git a/retail-forecast/assets/forecast_dashboard.png b/retail-forecast/assets/forecast_dashboard.png new file mode 100644 index 00000000..fc7d8867 Binary files /dev/null and b/retail-forecast/assets/forecast_dashboard.png differ diff --git a/retail-forecast/assets/inference_pipeline.png b/retail-forecast/assets/inference_pipeline.png new file mode 100644 index 00000000..e8cd4f97 Binary files /dev/null and b/retail-forecast/assets/inference_pipeline.png differ diff --git a/retail-forecast/assets/training_pipeline.png b/retail-forecast/assets/training_pipeline.png new file mode 100644 index 00000000..a2a99c33 Binary files /dev/null and b/retail-forecast/assets/training_pipeline.png differ diff --git a/retail-forecast/assets/zenml_dashboard.png b/retail-forecast/assets/zenml_dashboard.png new file mode 100644 index 00000000..7caaef39 Binary files /dev/null and b/retail-forecast/assets/zenml_dashboard.png differ diff --git a/retail-forecast/data/calendar.csv b/retail-forecast/data/calendar.csv new file mode 100644 index 00000000..eae2225c --- /dev/null +++ b/retail-forecast/data/calendar.csv @@ -0,0 +1,91 @@ +date,weekday,month,is_weekend,is_holiday,is_promo +2024-01-01,0,1,0,1,0 +2024-01-02,1,1,0,0,0 +2024-01-03,2,1,0,0,0 +2024-01-04,3,1,0,0,0 +2024-01-05,4,1,0,0,0 +2024-01-06,5,1,1,0,0 +2024-01-07,6,1,1,0,0 +2024-01-08,0,1,0,0,0 +2024-01-09,1,1,0,0,0 +2024-01-10,2,1,0,0,1 +2024-01-11,3,1,0,0,1 +2024-01-12,4,1,0,0,1 +2024-01-13,5,1,1,0,1 +2024-01-14,6,1,1,0,1 +2024-01-15,0,1,0,1,1 +2024-01-16,1,1,0,0,1 +2024-01-17,2,1,0,0,1 +2024-01-18,3,1,0,0,1 +2024-01-19,4,1,0,0,1 +2024-01-20,5,1,1,0,1 +2024-01-21,6,1,1,0,0 +2024-01-22,0,1,0,0,0 +2024-01-23,1,1,0,0,0 +2024-01-24,2,1,0,0,0 +2024-01-25,3,1,0,0,0 +2024-01-26,4,1,0,0,0 +2024-01-27,5,1,1,0,0 +2024-01-28,6,1,1,0,0 +2024-01-29,0,1,0,0,0 +2024-01-30,1,1,0,0,0 +2024-01-31,2,1,0,0,0 +2024-02-01,3,2,0,1,0 +2024-02-02,4,2,0,0,0 +2024-02-03,5,2,1,0,0 +2024-02-04,6,2,1,0,0 +2024-02-05,0,2,0,0,0 +2024-02-06,1,2,0,0,0 +2024-02-07,2,2,0,0,0 +2024-02-08,3,2,0,0,0 +2024-02-09,4,2,0,0,0 +2024-02-10,5,2,1,0,1 +2024-02-11,6,2,1,0,1 +2024-02-12,0,2,0,0,1 +2024-02-13,1,2,0,0,1 +2024-02-14,2,2,0,0,1 +2024-02-15,3,2,0,1,1 +2024-02-16,4,2,0,0,1 +2024-02-17,5,2,1,0,1 +2024-02-18,6,2,1,0,1 +2024-02-19,0,2,0,0,1 +2024-02-20,1,2,0,0,1 +2024-02-21,2,2,0,0,0 +2024-02-22,3,2,0,0,0 +2024-02-23,4,2,0,0,0 +2024-02-24,5,2,1,0,0 +2024-02-25,6,2,1,0,0 +2024-02-26,0,2,0,0,0 +2024-02-27,1,2,0,0,0 +2024-02-28,2,2,0,0,0 +2024-02-29,3,2,0,0,0 +2024-03-01,4,3,0,1,0 +2024-03-02,5,3,1,0,0 +2024-03-03,6,3,1,0,0 +2024-03-04,0,3,0,0,0 +2024-03-05,1,3,0,0,0 +2024-03-06,2,3,0,0,0 +2024-03-07,3,3,0,0,0 +2024-03-08,4,3,0,0,0 +2024-03-09,5,3,1,0,0 +2024-03-10,6,3,1,0,1 +2024-03-11,0,3,0,0,1 +2024-03-12,1,3,0,0,1 +2024-03-13,2,3,0,0,1 +2024-03-14,3,3,0,0,1 +2024-03-15,4,3,0,1,1 +2024-03-16,5,3,1,0,1 +2024-03-17,6,3,1,0,1 +2024-03-18,0,3,0,0,1 +2024-03-19,1,3,0,0,1 +2024-03-20,2,3,0,0,1 +2024-03-21,3,3,0,0,0 +2024-03-22,4,3,0,0,0 +2024-03-23,5,3,1,0,0 +2024-03-24,6,3,1,0,0 +2024-03-25,0,3,0,0,0 +2024-03-26,1,3,0,0,0 +2024-03-27,2,3,0,0,0 +2024-03-28,3,3,0,0,0 +2024-03-29,4,3,0,0,0 +2024-03-30,5,3,1,0,0 diff --git a/retail-forecast/data/sales.csv b/retail-forecast/data/sales.csv new file mode 100644 index 00000000..9c207555 --- /dev/null +++ 
b/retail-forecast/data/sales.csv @@ -0,0 +1,1351 @@ +date,store,item,sales,price +2024-01-01,Store_1,Item_A,41,12.0 +2024-01-01,Store_1,Item_B,30,10.0 +2024-01-01,Store_1,Item_C,35,10.0 +2024-01-01,Store_1,Item_D,28,7.0 +2024-01-01,Store_1,Item_E,20,7.0 +2024-01-01,Store_2,Item_A,23,12.0 +2024-01-01,Store_2,Item_B,27,10.0 +2024-01-01,Store_2,Item_C,24,10.0 +2024-01-01,Store_2,Item_D,13,7.0 +2024-01-01,Store_2,Item_E,16,7.0 +2024-01-01,Store_3,Item_A,18,12.0 +2024-01-01,Store_3,Item_B,15,10.0 +2024-01-01,Store_3,Item_C,17,10.0 +2024-01-01,Store_3,Item_D,7,7.0 +2024-01-01,Store_3,Item_E,7,7.0 +2024-01-02,Store_1,Item_A,17,12.0 +2024-01-02,Store_1,Item_B,12,10.0 +2024-01-02,Store_1,Item_C,17,10.0 +2024-01-02,Store_1,Item_D,9,7.0 +2024-01-02,Store_1,Item_E,8,7.0 +2024-01-02,Store_2,Item_A,16,12.0 +2024-01-02,Store_2,Item_B,10,10.0 +2024-01-02,Store_2,Item_C,10,10.0 +2024-01-02,Store_2,Item_D,5,7.0 +2024-01-02,Store_2,Item_E,6,7.0 +2024-01-02,Store_3,Item_A,10,12.0 +2024-01-02,Store_3,Item_B,6,10.0 +2024-01-02,Store_3,Item_C,9,10.0 +2024-01-02,Store_3,Item_D,5,7.0 +2024-01-02,Store_3,Item_E,5,7.0 +2024-01-03,Store_1,Item_A,17,12.0 +2024-01-03,Store_1,Item_B,23,10.0 +2024-01-03,Store_1,Item_C,16,10.0 +2024-01-03,Store_1,Item_D,9,7.0 +2024-01-03,Store_1,Item_E,13,7.0 +2024-01-03,Store_2,Item_A,10,12.0 +2024-01-03,Store_2,Item_B,11,10.0 +2024-01-03,Store_2,Item_C,6,10.0 +2024-01-03,Store_2,Item_D,5,7.0 +2024-01-03,Store_2,Item_E,8,7.0 +2024-01-03,Store_3,Item_A,12,12.0 +2024-01-03,Store_3,Item_B,9,10.0 +2024-01-03,Store_3,Item_C,8,10.0 +2024-01-03,Store_3,Item_D,5,7.0 +2024-01-03,Store_3,Item_E,4,7.0 +2024-01-04,Store_1,Item_A,17,12.0 +2024-01-04,Store_1,Item_B,15,10.0 +2024-01-04,Store_1,Item_C,21,10.0 +2024-01-04,Store_1,Item_D,13,7.0 +2024-01-04,Store_1,Item_E,7,7.0 +2024-01-04,Store_2,Item_A,14,12.0 +2024-01-04,Store_2,Item_B,10,10.0 +2024-01-04,Store_2,Item_C,10,10.0 +2024-01-04,Store_2,Item_D,9,7.0 +2024-01-04,Store_2,Item_E,9,7.0 +2024-01-04,Store_3,Item_A,13,12.0 +2024-01-04,Store_3,Item_B,7,10.0 +2024-01-04,Store_3,Item_C,8,10.0 +2024-01-04,Store_3,Item_D,6,7.0 +2024-01-04,Store_3,Item_E,7,7.0 +2024-01-05,Store_1,Item_A,19,12.0 +2024-01-05,Store_1,Item_B,17,10.0 +2024-01-05,Store_1,Item_C,13,10.0 +2024-01-05,Store_1,Item_D,9,7.0 +2024-01-05,Store_1,Item_E,14,7.0 +2024-01-05,Store_2,Item_A,18,12.0 +2024-01-05,Store_2,Item_B,11,10.0 +2024-01-05,Store_2,Item_C,14,10.0 +2024-01-05,Store_2,Item_D,8,7.0 +2024-01-05,Store_2,Item_E,7,7.0 +2024-01-05,Store_3,Item_A,12,12.0 +2024-01-05,Store_3,Item_B,12,10.0 +2024-01-05,Store_3,Item_C,9,10.0 +2024-01-05,Store_3,Item_D,8,7.0 +2024-01-05,Store_3,Item_E,3,7.0 +2024-01-06,Store_1,Item_A,37,12.0 +2024-01-06,Store_1,Item_B,27,10.0 +2024-01-06,Store_1,Item_C,25,10.0 +2024-01-06,Store_1,Item_D,19,7.0 +2024-01-06,Store_1,Item_E,11,7.0 +2024-01-06,Store_2,Item_A,20,12.0 +2024-01-06,Store_2,Item_B,19,10.0 +2024-01-06,Store_2,Item_C,23,10.0 +2024-01-06,Store_2,Item_D,11,7.0 +2024-01-06,Store_2,Item_E,10,7.0 +2024-01-06,Store_3,Item_A,15,12.0 +2024-01-06,Store_3,Item_B,17,10.0 +2024-01-06,Store_3,Item_C,15,10.0 +2024-01-06,Store_3,Item_D,9,7.0 +2024-01-06,Store_3,Item_E,11,7.0 +2024-01-07,Store_1,Item_A,33,12.0 +2024-01-07,Store_1,Item_B,32,10.0 +2024-01-07,Store_1,Item_C,23,10.0 +2024-01-07,Store_1,Item_D,17,7.0 +2024-01-07,Store_1,Item_E,17,7.0 +2024-01-07,Store_2,Item_A,15,12.0 +2024-01-07,Store_2,Item_B,19,10.0 +2024-01-07,Store_2,Item_C,19,10.0 +2024-01-07,Store_2,Item_D,12,7.0 +2024-01-07,Store_2,Item_E,12,7.0 +2024-01-07,Store_3,Item_A,12,12.0 
+2024-01-07,Store_3,Item_B,13,10.0 +2024-01-07,Store_3,Item_C,13,10.0 +2024-01-07,Store_3,Item_D,8,7.0 +2024-01-07,Store_3,Item_E,9,7.0 +2024-01-08,Store_1,Item_A,23,12.0 +2024-01-08,Store_1,Item_B,25,10.0 +2024-01-08,Store_1,Item_C,19,10.0 +2024-01-08,Store_1,Item_D,13,7.0 +2024-01-08,Store_1,Item_E,12,7.0 +2024-01-08,Store_2,Item_A,9,12.0 +2024-01-08,Store_2,Item_B,12,10.0 +2024-01-08,Store_2,Item_C,12,10.0 +2024-01-08,Store_2,Item_D,12,7.0 +2024-01-08,Store_2,Item_E,8,7.0 +2024-01-08,Store_3,Item_A,12,12.0 +2024-01-08,Store_3,Item_B,9,10.0 +2024-01-08,Store_3,Item_C,7,10.0 +2024-01-08,Store_3,Item_D,8,7.0 +2024-01-08,Store_3,Item_E,7,7.0 +2024-01-09,Store_1,Item_A,25,12.0 +2024-01-09,Store_1,Item_B,14,10.0 +2024-01-09,Store_1,Item_C,23,10.0 +2024-01-09,Store_1,Item_D,9,7.0 +2024-01-09,Store_1,Item_E,14,7.0 +2024-01-09,Store_2,Item_A,21,12.0 +2024-01-09,Store_2,Item_B,9,10.0 +2024-01-09,Store_2,Item_C,10,10.0 +2024-01-09,Store_2,Item_D,8,7.0 +2024-01-09,Store_2,Item_E,7,7.0 +2024-01-09,Store_3,Item_A,8,12.0 +2024-01-09,Store_3,Item_B,9,10.0 +2024-01-09,Store_3,Item_C,7,10.0 +2024-01-09,Store_3,Item_D,7,7.0 +2024-01-09,Store_3,Item_E,5,7.0 +2024-01-10,Store_1,Item_A,51,10.8 +2024-01-10,Store_1,Item_B,27,9.0 +2024-01-10,Store_1,Item_C,30,9.0 +2024-01-10,Store_1,Item_D,26,6.3 +2024-01-10,Store_1,Item_E,17,6.3 +2024-01-10,Store_2,Item_A,27,10.8 +2024-01-10,Store_2,Item_B,27,9.0 +2024-01-10,Store_2,Item_C,14,9.0 +2024-01-10,Store_2,Item_D,15,6.3 +2024-01-10,Store_2,Item_E,15,6.3 +2024-01-10,Store_3,Item_A,24,10.8 +2024-01-10,Store_3,Item_B,13,9.0 +2024-01-10,Store_3,Item_C,12,9.0 +2024-01-10,Store_3,Item_D,13,6.3 +2024-01-10,Store_3,Item_E,12,6.3 +2024-01-11,Store_1,Item_A,40,10.8 +2024-01-11,Store_1,Item_B,34,9.0 +2024-01-11,Store_1,Item_C,27,9.0 +2024-01-11,Store_1,Item_D,23,6.3 +2024-01-11,Store_1,Item_E,23,6.3 +2024-01-11,Store_2,Item_A,21,10.8 +2024-01-11,Store_2,Item_B,29,9.0 +2024-01-11,Store_2,Item_C,23,9.0 +2024-01-11,Store_2,Item_D,11,6.3 +2024-01-11,Store_2,Item_E,16,6.3 +2024-01-11,Store_3,Item_A,16,10.8 +2024-01-11,Store_3,Item_B,19,9.0 +2024-01-11,Store_3,Item_C,20,9.0 +2024-01-11,Store_3,Item_D,9,6.3 +2024-01-11,Store_3,Item_E,14,6.3 +2024-01-12,Store_1,Item_A,40,10.8 +2024-01-12,Store_1,Item_B,36,9.0 +2024-01-12,Store_1,Item_C,42,9.0 +2024-01-12,Store_1,Item_D,20,6.3 +2024-01-12,Store_1,Item_E,18,6.3 +2024-01-12,Store_2,Item_A,20,10.8 +2024-01-12,Store_2,Item_B,17,9.0 +2024-01-12,Store_2,Item_C,20,9.0 +2024-01-12,Store_2,Item_D,15,6.3 +2024-01-12,Store_2,Item_E,15,6.3 +2024-01-12,Store_3,Item_A,23,10.8 +2024-01-12,Store_3,Item_B,16,9.0 +2024-01-12,Store_3,Item_C,21,9.0 +2024-01-12,Store_3,Item_D,11,6.3 +2024-01-12,Store_3,Item_E,17,6.3 +2024-01-13,Store_1,Item_A,61,10.8 +2024-01-13,Store_1,Item_B,37,9.0 +2024-01-13,Store_1,Item_C,35,9.0 +2024-01-13,Store_1,Item_D,34,6.3 +2024-01-13,Store_1,Item_E,30,6.3 +2024-01-13,Store_2,Item_A,41,10.8 +2024-01-13,Store_2,Item_B,33,9.0 +2024-01-13,Store_2,Item_C,29,9.0 +2024-01-13,Store_2,Item_D,17,6.3 +2024-01-13,Store_2,Item_E,14,6.3 +2024-01-13,Store_3,Item_A,26,10.8 +2024-01-13,Store_3,Item_B,28,9.0 +2024-01-13,Store_3,Item_C,25,9.0 +2024-01-13,Store_3,Item_D,12,6.3 +2024-01-13,Store_3,Item_E,17,6.3 +2024-01-14,Store_1,Item_A,56,10.8 +2024-01-14,Store_1,Item_B,36,9.0 +2024-01-14,Store_1,Item_C,45,9.0 +2024-01-14,Store_1,Item_D,31,6.3 +2024-01-14,Store_1,Item_E,23,6.3 +2024-01-14,Store_2,Item_A,37,10.8 +2024-01-14,Store_2,Item_B,32,9.0 +2024-01-14,Store_2,Item_C,35,9.0 +2024-01-14,Store_2,Item_D,24,6.3 +2024-01-14,Store_2,Item_E,14,6.3 
+2024-01-14,Store_3,Item_A,22,10.8 +2024-01-14,Store_3,Item_B,25,9.0 +2024-01-14,Store_3,Item_C,25,9.0 +2024-01-14,Store_3,Item_D,18,6.3 +2024-01-14,Store_3,Item_E,28,6.3 +2024-01-15,Store_1,Item_A,75,10.8 +2024-01-15,Store_1,Item_B,69,9.0 +2024-01-15,Store_1,Item_C,67,9.0 +2024-01-15,Store_1,Item_D,44,6.3 +2024-01-15,Store_1,Item_E,36,6.3 +2024-01-15,Store_2,Item_A,51,10.8 +2024-01-15,Store_2,Item_B,31,9.0 +2024-01-15,Store_2,Item_C,35,9.0 +2024-01-15,Store_2,Item_D,23,6.3 +2024-01-15,Store_2,Item_E,26,6.3 +2024-01-15,Store_3,Item_A,52,10.8 +2024-01-15,Store_3,Item_B,18,9.0 +2024-01-15,Store_3,Item_C,34,9.0 +2024-01-15,Store_3,Item_D,14,6.3 +2024-01-15,Store_3,Item_E,19,6.3 +2024-01-16,Store_1,Item_A,39,10.8 +2024-01-16,Store_1,Item_B,27,9.0 +2024-01-16,Store_1,Item_C,21,9.0 +2024-01-16,Store_1,Item_D,16,6.3 +2024-01-16,Store_1,Item_E,21,6.3 +2024-01-16,Store_2,Item_A,18,10.8 +2024-01-16,Store_2,Item_B,18,9.0 +2024-01-16,Store_2,Item_C,18,9.0 +2024-01-16,Store_2,Item_D,10,6.3 +2024-01-16,Store_2,Item_E,18,6.3 +2024-01-16,Store_3,Item_A,19,10.8 +2024-01-16,Store_3,Item_B,8,9.0 +2024-01-16,Store_3,Item_C,14,9.0 +2024-01-16,Store_3,Item_D,8,6.3 +2024-01-16,Store_3,Item_E,11,6.3 +2024-01-17,Store_1,Item_A,26,10.8 +2024-01-17,Store_1,Item_B,25,9.0 +2024-01-17,Store_1,Item_C,28,9.0 +2024-01-17,Store_1,Item_D,21,6.3 +2024-01-17,Store_1,Item_E,13,6.3 +2024-01-17,Store_2,Item_A,19,10.8 +2024-01-17,Store_2,Item_B,15,9.0 +2024-01-17,Store_2,Item_C,15,9.0 +2024-01-17,Store_2,Item_D,16,6.3 +2024-01-17,Store_2,Item_E,13,6.3 +2024-01-17,Store_3,Item_A,12,10.8 +2024-01-17,Store_3,Item_B,16,9.0 +2024-01-17,Store_3,Item_C,19,9.0 +2024-01-17,Store_3,Item_D,11,6.3 +2024-01-17,Store_3,Item_E,6,6.3 +2024-01-18,Store_1,Item_A,27,10.8 +2024-01-18,Store_1,Item_B,31,9.0 +2024-01-18,Store_1,Item_C,21,9.0 +2024-01-18,Store_1,Item_D,19,6.3 +2024-01-18,Store_1,Item_E,20,6.3 +2024-01-18,Store_2,Item_A,16,10.8 +2024-01-18,Store_2,Item_B,16,9.0 +2024-01-18,Store_2,Item_C,5,9.0 +2024-01-18,Store_2,Item_D,9,6.3 +2024-01-18,Store_2,Item_E,11,6.3 +2024-01-18,Store_3,Item_A,12,10.8 +2024-01-18,Store_3,Item_B,17,9.0 +2024-01-18,Store_3,Item_C,9,9.0 +2024-01-18,Store_3,Item_D,8,6.3 +2024-01-18,Store_3,Item_E,9,6.3 +2024-01-19,Store_1,Item_A,37,10.8 +2024-01-19,Store_1,Item_B,17,9.0 +2024-01-19,Store_1,Item_C,29,9.0 +2024-01-19,Store_1,Item_D,16,6.3 +2024-01-19,Store_1,Item_E,13,6.3 +2024-01-19,Store_2,Item_A,21,10.8 +2024-01-19,Store_2,Item_B,16,9.0 +2024-01-19,Store_2,Item_C,14,9.0 +2024-01-19,Store_2,Item_D,11,6.3 +2024-01-19,Store_2,Item_E,10,6.3 +2024-01-19,Store_3,Item_A,15,10.8 +2024-01-19,Store_3,Item_B,14,9.0 +2024-01-19,Store_3,Item_C,17,9.0 +2024-01-19,Store_3,Item_D,6,6.3 +2024-01-19,Store_3,Item_E,12,6.3 +2024-01-20,Store_1,Item_A,25,10.8 +2024-01-20,Store_1,Item_B,34,9.0 +2024-01-20,Store_1,Item_C,39,9.0 +2024-01-20,Store_1,Item_D,26,6.3 +2024-01-20,Store_1,Item_E,21,6.3 +2024-01-20,Store_2,Item_A,27,10.8 +2024-01-20,Store_2,Item_B,21,9.0 +2024-01-20,Store_2,Item_C,20,9.0 +2024-01-20,Store_2,Item_D,19,6.3 +2024-01-20,Store_2,Item_E,17,6.3 +2024-01-20,Store_3,Item_A,19,10.8 +2024-01-20,Store_3,Item_B,22,9.0 +2024-01-20,Store_3,Item_C,20,9.0 +2024-01-20,Store_3,Item_D,15,6.3 +2024-01-20,Store_3,Item_E,14,6.3 +2024-01-21,Store_1,Item_A,19,12.0 +2024-01-21,Store_1,Item_B,17,10.0 +2024-01-21,Store_1,Item_C,22,10.0 +2024-01-21,Store_1,Item_D,15,7.0 +2024-01-21,Store_1,Item_E,13,7.0 +2024-01-21,Store_2,Item_A,15,12.0 +2024-01-21,Store_2,Item_B,16,10.0 +2024-01-21,Store_2,Item_C,11,10.0 +2024-01-21,Store_2,Item_D,9,7.0 
+2024-01-21,Store_2,Item_E,8,7.0 +2024-01-21,Store_3,Item_A,11,12.0 +2024-01-21,Store_3,Item_B,12,10.0 +2024-01-21,Store_3,Item_C,12,10.0 +2024-01-21,Store_3,Item_D,8,7.0 +2024-01-21,Store_3,Item_E,9,7.0 +2024-01-22,Store_1,Item_A,15,12.0 +2024-01-22,Store_1,Item_B,14,10.0 +2024-01-22,Store_1,Item_C,11,10.0 +2024-01-22,Store_1,Item_D,9,7.0 +2024-01-22,Store_1,Item_E,8,7.0 +2024-01-22,Store_2,Item_A,10,12.0 +2024-01-22,Store_2,Item_B,9,10.0 +2024-01-22,Store_2,Item_C,7,10.0 +2024-01-22,Store_2,Item_D,8,7.0 +2024-01-22,Store_2,Item_E,4,7.0 +2024-01-22,Store_3,Item_A,6,12.0 +2024-01-22,Store_3,Item_B,8,10.0 +2024-01-22,Store_3,Item_C,7,10.0 +2024-01-22,Store_3,Item_D,5,7.0 +2024-01-22,Store_3,Item_E,5,7.0 +2024-01-23,Store_1,Item_A,15,12.0 +2024-01-23,Store_1,Item_B,10,10.0 +2024-01-23,Store_1,Item_C,13,10.0 +2024-01-23,Store_1,Item_D,7,7.0 +2024-01-23,Store_1,Item_E,10,7.0 +2024-01-23,Store_2,Item_A,9,12.0 +2024-01-23,Store_2,Item_B,7,10.0 +2024-01-23,Store_2,Item_C,7,10.0 +2024-01-23,Store_2,Item_D,6,7.0 +2024-01-23,Store_2,Item_E,5,7.0 +2024-01-23,Store_3,Item_A,6,12.0 +2024-01-23,Store_3,Item_B,7,10.0 +2024-01-23,Store_3,Item_C,7,10.0 +2024-01-23,Store_3,Item_D,4,7.0 +2024-01-23,Store_3,Item_E,4,7.0 +2024-01-24,Store_1,Item_A,16,12.0 +2024-01-24,Store_1,Item_B,9,10.0 +2024-01-24,Store_1,Item_C,9,10.0 +2024-01-24,Store_1,Item_D,7,7.0 +2024-01-24,Store_1,Item_E,8,7.0 +2024-01-24,Store_2,Item_A,11,12.0 +2024-01-24,Store_2,Item_B,11,10.0 +2024-01-24,Store_2,Item_C,10,10.0 +2024-01-24,Store_2,Item_D,5,7.0 +2024-01-24,Store_2,Item_E,6,7.0 +2024-01-24,Store_3,Item_A,6,12.0 +2024-01-24,Store_3,Item_B,6,10.0 +2024-01-24,Store_3,Item_C,6,10.0 +2024-01-24,Store_3,Item_D,5,7.0 +2024-01-24,Store_3,Item_E,4,7.0 +2024-01-25,Store_1,Item_A,17,12.0 +2024-01-25,Store_1,Item_B,17,10.0 +2024-01-25,Store_1,Item_C,13,10.0 +2024-01-25,Store_1,Item_D,10,7.0 +2024-01-25,Store_1,Item_E,10,7.0 +2024-01-25,Store_2,Item_A,9,12.0 +2024-01-25,Store_2,Item_B,9,10.0 +2024-01-25,Store_2,Item_C,8,10.0 +2024-01-25,Store_2,Item_D,6,7.0 +2024-01-25,Store_2,Item_E,5,7.0 +2024-01-25,Store_3,Item_A,8,12.0 +2024-01-25,Store_3,Item_B,7,10.0 +2024-01-25,Store_3,Item_C,9,10.0 +2024-01-25,Store_3,Item_D,5,7.0 +2024-01-25,Store_3,Item_E,7,7.0 +2024-01-26,Store_1,Item_A,13,12.0 +2024-01-26,Store_1,Item_B,16,10.0 +2024-01-26,Store_1,Item_C,14,10.0 +2024-01-26,Store_1,Item_D,13,7.0 +2024-01-26,Store_1,Item_E,8,7.0 +2024-01-26,Store_2,Item_A,9,12.0 +2024-01-26,Store_2,Item_B,8,10.0 +2024-01-26,Store_2,Item_C,5,10.0 +2024-01-26,Store_2,Item_D,5,7.0 +2024-01-26,Store_2,Item_E,5,7.0 +2024-01-26,Store_3,Item_A,9,12.0 +2024-01-26,Store_3,Item_B,7,10.0 +2024-01-26,Store_3,Item_C,10,10.0 +2024-01-26,Store_3,Item_D,6,7.0 +2024-01-26,Store_3,Item_E,4,7.0 +2024-01-27,Store_1,Item_A,21,12.0 +2024-01-27,Store_1,Item_B,23,10.0 +2024-01-27,Store_1,Item_C,15,10.0 +2024-01-27,Store_1,Item_D,20,7.0 +2024-01-27,Store_1,Item_E,18,7.0 +2024-01-27,Store_2,Item_A,15,12.0 +2024-01-27,Store_2,Item_B,9,10.0 +2024-01-27,Store_2,Item_C,18,10.0 +2024-01-27,Store_2,Item_D,9,7.0 +2024-01-27,Store_2,Item_E,12,7.0 +2024-01-27,Store_3,Item_A,9,12.0 +2024-01-27,Store_3,Item_B,10,10.0 +2024-01-27,Store_3,Item_C,11,10.0 +2024-01-27,Store_3,Item_D,8,7.0 +2024-01-27,Store_3,Item_E,7,7.0 +2024-01-28,Store_1,Item_A,30,12.0 +2024-01-28,Store_1,Item_B,17,10.0 +2024-01-28,Store_1,Item_C,21,10.0 +2024-01-28,Store_1,Item_D,16,7.0 +2024-01-28,Store_1,Item_E,17,7.0 +2024-01-28,Store_2,Item_A,20,12.0 +2024-01-28,Store_2,Item_B,11,10.0 +2024-01-28,Store_2,Item_C,10,10.0 
+2024-01-28,Store_2,Item_D,13,7.0 +2024-01-28,Store_2,Item_E,11,7.0 +2024-01-28,Store_3,Item_A,12,12.0 +2024-01-28,Store_3,Item_B,15,10.0 +2024-01-28,Store_3,Item_C,12,10.0 +2024-01-28,Store_3,Item_D,10,7.0 +2024-01-28,Store_3,Item_E,8,7.0 +2024-01-29,Store_1,Item_A,26,12.0 +2024-01-29,Store_1,Item_B,21,10.0 +2024-01-29,Store_1,Item_C,14,10.0 +2024-01-29,Store_1,Item_D,13,7.0 +2024-01-29,Store_1,Item_E,12,7.0 +2024-01-29,Store_2,Item_A,15,12.0 +2024-01-29,Store_2,Item_B,8,10.0 +2024-01-29,Store_2,Item_C,11,10.0 +2024-01-29,Store_2,Item_D,8,7.0 +2024-01-29,Store_2,Item_E,4,7.0 +2024-01-29,Store_3,Item_A,7,12.0 +2024-01-29,Store_3,Item_B,4,10.0 +2024-01-29,Store_3,Item_C,7,10.0 +2024-01-29,Store_3,Item_D,6,7.0 +2024-01-29,Store_3,Item_E,7,7.0 +2024-01-30,Store_1,Item_A,19,12.0 +2024-01-30,Store_1,Item_B,21,10.0 +2024-01-30,Store_1,Item_C,11,10.0 +2024-01-30,Store_1,Item_D,7,7.0 +2024-01-30,Store_1,Item_E,11,7.0 +2024-01-30,Store_2,Item_A,14,12.0 +2024-01-30,Store_2,Item_B,10,10.0 +2024-01-30,Store_2,Item_C,6,10.0 +2024-01-30,Store_2,Item_D,7,7.0 +2024-01-30,Store_2,Item_E,5,7.0 +2024-01-30,Store_3,Item_A,11,12.0 +2024-01-30,Store_3,Item_B,9,10.0 +2024-01-30,Store_3,Item_C,7,10.0 +2024-01-30,Store_3,Item_D,5,7.0 +2024-01-30,Store_3,Item_E,4,7.0 +2024-01-31,Store_1,Item_A,20,12.0 +2024-01-31,Store_1,Item_B,20,10.0 +2024-01-31,Store_1,Item_C,13,10.0 +2024-01-31,Store_1,Item_D,13,7.0 +2024-01-31,Store_1,Item_E,10,7.0 +2024-01-31,Store_2,Item_A,11,12.0 +2024-01-31,Store_2,Item_B,11,10.0 +2024-01-31,Store_2,Item_C,9,10.0 +2024-01-31,Store_2,Item_D,7,7.0 +2024-01-31,Store_2,Item_E,6,7.0 +2024-01-31,Store_3,Item_A,15,12.0 +2024-01-31,Store_3,Item_B,9,10.0 +2024-01-31,Store_3,Item_C,7,10.0 +2024-01-31,Store_3,Item_D,6,7.0 +2024-01-31,Store_3,Item_E,6,7.0 +2024-02-01,Store_1,Item_A,39,12.0 +2024-02-01,Store_1,Item_B,38,10.0 +2024-02-01,Store_1,Item_C,39,10.0 +2024-02-01,Store_1,Item_D,21,7.0 +2024-02-01,Store_1,Item_E,21,7.0 +2024-02-01,Store_2,Item_A,25,12.0 +2024-02-01,Store_2,Item_B,12,10.0 +2024-02-01,Store_2,Item_C,15,10.0 +2024-02-01,Store_2,Item_D,20,7.0 +2024-02-01,Store_2,Item_E,21,7.0 +2024-02-01,Store_3,Item_A,20,12.0 +2024-02-01,Store_3,Item_B,20,10.0 +2024-02-01,Store_3,Item_C,19,10.0 +2024-02-01,Store_3,Item_D,20,7.0 +2024-02-01,Store_3,Item_E,15,7.0 +2024-02-02,Store_1,Item_A,20,12.0 +2024-02-02,Store_1,Item_B,14,10.0 +2024-02-02,Store_1,Item_C,12,10.0 +2024-02-02,Store_1,Item_D,12,7.0 +2024-02-02,Store_1,Item_E,10,7.0 +2024-02-02,Store_2,Item_A,10,12.0 +2024-02-02,Store_2,Item_B,10,10.0 +2024-02-02,Store_2,Item_C,9,10.0 +2024-02-02,Store_2,Item_D,11,7.0 +2024-02-02,Store_2,Item_E,9,7.0 +2024-02-02,Store_3,Item_A,11,12.0 +2024-02-02,Store_3,Item_B,12,10.0 +2024-02-02,Store_3,Item_C,9,10.0 +2024-02-02,Store_3,Item_D,5,7.0 +2024-02-02,Store_3,Item_E,8,7.0 +2024-02-03,Store_1,Item_A,36,12.0 +2024-02-03,Store_1,Item_B,21,10.0 +2024-02-03,Store_1,Item_C,26,10.0 +2024-02-03,Store_1,Item_D,15,7.0 +2024-02-03,Store_1,Item_E,13,7.0 +2024-02-03,Store_2,Item_A,26,12.0 +2024-02-03,Store_2,Item_B,25,10.0 +2024-02-03,Store_2,Item_C,13,10.0 +2024-02-03,Store_2,Item_D,14,7.0 +2024-02-03,Store_2,Item_E,11,7.0 +2024-02-03,Store_3,Item_A,15,12.0 +2024-02-03,Store_3,Item_B,12,10.0 +2024-02-03,Store_3,Item_C,12,10.0 +2024-02-03,Store_3,Item_D,10,7.0 +2024-02-03,Store_3,Item_E,8,7.0 +2024-02-04,Store_1,Item_A,36,12.0 +2024-02-04,Store_1,Item_B,28,10.0 +2024-02-04,Store_1,Item_C,27,10.0 +2024-02-04,Store_1,Item_D,16,7.0 +2024-02-04,Store_1,Item_E,17,7.0 +2024-02-04,Store_2,Item_A,26,12.0 
+2024-02-04,Store_2,Item_B,20,10.0 +2024-02-04,Store_2,Item_C,15,10.0 +2024-02-04,Store_2,Item_D,13,7.0 +2024-02-04,Store_2,Item_E,15,7.0 +2024-02-04,Store_3,Item_A,12,12.0 +2024-02-04,Store_3,Item_B,16,10.0 +2024-02-04,Store_3,Item_C,13,10.0 +2024-02-04,Store_3,Item_D,11,7.0 +2024-02-04,Store_3,Item_E,9,7.0 +2024-02-05,Store_1,Item_A,14,12.0 +2024-02-05,Store_1,Item_B,13,10.0 +2024-02-05,Store_1,Item_C,19,10.0 +2024-02-05,Store_1,Item_D,14,7.0 +2024-02-05,Store_1,Item_E,11,7.0 +2024-02-05,Store_2,Item_A,17,12.0 +2024-02-05,Store_2,Item_B,8,10.0 +2024-02-05,Store_2,Item_C,12,10.0 +2024-02-05,Store_2,Item_D,6,7.0 +2024-02-05,Store_2,Item_E,7,7.0 +2024-02-05,Store_3,Item_A,12,12.0 +2024-02-05,Store_3,Item_B,8,10.0 +2024-02-05,Store_3,Item_C,9,10.0 +2024-02-05,Store_3,Item_D,8,7.0 +2024-02-05,Store_3,Item_E,6,7.0 +2024-02-06,Store_1,Item_A,27,12.0 +2024-02-06,Store_1,Item_B,15,10.0 +2024-02-06,Store_1,Item_C,21,10.0 +2024-02-06,Store_1,Item_D,17,7.0 +2024-02-06,Store_1,Item_E,7,7.0 +2024-02-06,Store_2,Item_A,13,12.0 +2024-02-06,Store_2,Item_B,14,10.0 +2024-02-06,Store_2,Item_C,12,10.0 +2024-02-06,Store_2,Item_D,9,7.0 +2024-02-06,Store_2,Item_E,8,7.0 +2024-02-06,Store_3,Item_A,12,12.0 +2024-02-06,Store_3,Item_B,10,10.0 +2024-02-06,Store_3,Item_C,13,10.0 +2024-02-06,Store_3,Item_D,7,7.0 +2024-02-06,Store_3,Item_E,7,7.0 +2024-02-07,Store_1,Item_A,22,12.0 +2024-02-07,Store_1,Item_B,18,10.0 +2024-02-07,Store_1,Item_C,18,10.0 +2024-02-07,Store_1,Item_D,15,7.0 +2024-02-07,Store_1,Item_E,12,7.0 +2024-02-07,Store_2,Item_A,16,12.0 +2024-02-07,Store_2,Item_B,18,10.0 +2024-02-07,Store_2,Item_C,15,10.0 +2024-02-07,Store_2,Item_D,8,7.0 +2024-02-07,Store_2,Item_E,11,7.0 +2024-02-07,Store_3,Item_A,11,12.0 +2024-02-07,Store_3,Item_B,6,10.0 +2024-02-07,Store_3,Item_C,8,10.0 +2024-02-07,Store_3,Item_D,4,7.0 +2024-02-07,Store_3,Item_E,6,7.0 +2024-02-08,Store_1,Item_A,24,12.0 +2024-02-08,Store_1,Item_B,26,10.0 +2024-02-08,Store_1,Item_C,21,10.0 +2024-02-08,Store_1,Item_D,13,7.0 +2024-02-08,Store_1,Item_E,16,7.0 +2024-02-08,Store_2,Item_A,8,12.0 +2024-02-08,Store_2,Item_B,13,10.0 +2024-02-08,Store_2,Item_C,15,10.0 +2024-02-08,Store_2,Item_D,6,7.0 +2024-02-08,Store_2,Item_E,11,7.0 +2024-02-08,Store_3,Item_A,13,12.0 +2024-02-08,Store_3,Item_B,9,10.0 +2024-02-08,Store_3,Item_C,12,10.0 +2024-02-08,Store_3,Item_D,10,7.0 +2024-02-08,Store_3,Item_E,7,7.0 +2024-02-09,Store_1,Item_A,25,12.0 +2024-02-09,Store_1,Item_B,18,10.0 +2024-02-09,Store_1,Item_C,16,10.0 +2024-02-09,Store_1,Item_D,16,7.0 +2024-02-09,Store_1,Item_E,11,7.0 +2024-02-09,Store_2,Item_A,16,12.0 +2024-02-09,Store_2,Item_B,12,10.0 +2024-02-09,Store_2,Item_C,14,10.0 +2024-02-09,Store_2,Item_D,9,7.0 +2024-02-09,Store_2,Item_E,11,7.0 +2024-02-09,Store_3,Item_A,11,12.0 +2024-02-09,Store_3,Item_B,10,10.0 +2024-02-09,Store_3,Item_C,8,10.0 +2024-02-09,Store_3,Item_D,6,7.0 +2024-02-09,Store_3,Item_E,8,7.0 +2024-02-10,Store_1,Item_A,73,10.8 +2024-02-10,Store_1,Item_B,43,9.0 +2024-02-10,Store_1,Item_C,62,9.0 +2024-02-10,Store_1,Item_D,47,6.3 +2024-02-10,Store_1,Item_E,40,6.3 +2024-02-10,Store_2,Item_A,58,10.8 +2024-02-10,Store_2,Item_B,29,9.0 +2024-02-10,Store_2,Item_C,26,9.0 +2024-02-10,Store_2,Item_D,15,6.3 +2024-02-10,Store_2,Item_E,32,6.3 +2024-02-10,Store_3,Item_A,38,10.8 +2024-02-10,Store_3,Item_B,28,9.0 +2024-02-10,Store_3,Item_C,29,9.0 +2024-02-10,Store_3,Item_D,15,6.3 +2024-02-10,Store_3,Item_E,29,6.3 +2024-02-11,Store_1,Item_A,64,10.8 +2024-02-11,Store_1,Item_B,53,9.0 +2024-02-11,Store_1,Item_C,59,9.0 +2024-02-11,Store_1,Item_D,40,6.3 
+2024-02-11,Store_1,Item_E,38,6.3 +2024-02-11,Store_2,Item_A,35,10.8 +2024-02-11,Store_2,Item_B,38,9.0 +2024-02-11,Store_2,Item_C,47,9.0 +2024-02-11,Store_2,Item_D,30,6.3 +2024-02-11,Store_2,Item_E,32,6.3 +2024-02-11,Store_3,Item_A,30,10.8 +2024-02-11,Store_3,Item_B,22,9.0 +2024-02-11,Store_3,Item_C,27,9.0 +2024-02-11,Store_3,Item_D,19,6.3 +2024-02-11,Store_3,Item_E,23,6.3 +2024-02-12,Store_1,Item_A,26,10.8 +2024-02-12,Store_1,Item_B,44,9.0 +2024-02-12,Store_1,Item_C,32,9.0 +2024-02-12,Store_1,Item_D,21,6.3 +2024-02-12,Store_1,Item_E,18,6.3 +2024-02-12,Store_2,Item_A,18,10.8 +2024-02-12,Store_2,Item_B,26,9.0 +2024-02-12,Store_2,Item_C,22,9.0 +2024-02-12,Store_2,Item_D,11,6.3 +2024-02-12,Store_2,Item_E,11,6.3 +2024-02-12,Store_3,Item_A,20,10.8 +2024-02-12,Store_3,Item_B,24,9.0 +2024-02-12,Store_3,Item_C,17,9.0 +2024-02-12,Store_3,Item_D,8,6.3 +2024-02-12,Store_3,Item_E,12,6.3 +2024-02-13,Store_1,Item_A,37,10.8 +2024-02-13,Store_1,Item_B,15,9.0 +2024-02-13,Store_1,Item_C,32,9.0 +2024-02-13,Store_1,Item_D,22,6.3 +2024-02-13,Store_1,Item_E,26,6.3 +2024-02-13,Store_2,Item_A,36,10.8 +2024-02-13,Store_2,Item_B,26,9.0 +2024-02-13,Store_2,Item_C,20,9.0 +2024-02-13,Store_2,Item_D,11,6.3 +2024-02-13,Store_2,Item_E,23,6.3 +2024-02-13,Store_3,Item_A,21,10.8 +2024-02-13,Store_3,Item_B,17,9.0 +2024-02-13,Store_3,Item_C,17,9.0 +2024-02-13,Store_3,Item_D,12,6.3 +2024-02-13,Store_3,Item_E,11,6.3 +2024-02-14,Store_1,Item_A,33,10.8 +2024-02-14,Store_1,Item_B,28,9.0 +2024-02-14,Store_1,Item_C,31,9.0 +2024-02-14,Store_1,Item_D,19,6.3 +2024-02-14,Store_1,Item_E,19,6.3 +2024-02-14,Store_2,Item_A,26,10.8 +2024-02-14,Store_2,Item_B,20,9.0 +2024-02-14,Store_2,Item_C,27,9.0 +2024-02-14,Store_2,Item_D,6,6.3 +2024-02-14,Store_2,Item_E,18,6.3 +2024-02-14,Store_3,Item_A,25,10.8 +2024-02-14,Store_3,Item_B,9,9.0 +2024-02-14,Store_3,Item_C,15,9.0 +2024-02-14,Store_3,Item_D,11,6.3 +2024-02-14,Store_3,Item_E,8,6.3 +2024-02-15,Store_1,Item_A,62,10.8 +2024-02-15,Store_1,Item_B,47,9.0 +2024-02-15,Store_1,Item_C,82,9.0 +2024-02-15,Store_1,Item_D,50,6.3 +2024-02-15,Store_1,Item_E,53,6.3 +2024-02-15,Store_2,Item_A,56,10.8 +2024-02-15,Store_2,Item_B,31,9.0 +2024-02-15,Store_2,Item_C,36,9.0 +2024-02-15,Store_2,Item_D,31,6.3 +2024-02-15,Store_2,Item_E,21,6.3 +2024-02-15,Store_3,Item_A,44,10.8 +2024-02-15,Store_3,Item_B,31,9.0 +2024-02-15,Store_3,Item_C,30,9.0 +2024-02-15,Store_3,Item_D,26,6.3 +2024-02-15,Store_3,Item_E,24,6.3 +2024-02-16,Store_1,Item_A,32,10.8 +2024-02-16,Store_1,Item_B,36,9.0 +2024-02-16,Store_1,Item_C,23,9.0 +2024-02-16,Store_1,Item_D,23,6.3 +2024-02-16,Store_1,Item_E,23,6.3 +2024-02-16,Store_2,Item_A,22,10.8 +2024-02-16,Store_2,Item_B,20,9.0 +2024-02-16,Store_2,Item_C,14,9.0 +2024-02-16,Store_2,Item_D,16,6.3 +2024-02-16,Store_2,Item_E,13,6.3 +2024-02-16,Store_3,Item_A,16,10.8 +2024-02-16,Store_3,Item_B,19,9.0 +2024-02-16,Store_3,Item_C,13,9.0 +2024-02-16,Store_3,Item_D,7,6.3 +2024-02-16,Store_3,Item_E,7,6.3 +2024-02-17,Store_1,Item_A,57,10.8 +2024-02-17,Store_1,Item_B,31,9.0 +2024-02-17,Store_1,Item_C,57,9.0 +2024-02-17,Store_1,Item_D,17,6.3 +2024-02-17,Store_1,Item_E,39,6.3 +2024-02-17,Store_2,Item_A,35,10.8 +2024-02-17,Store_2,Item_B,27,9.0 +2024-02-17,Store_2,Item_C,25,9.0 +2024-02-17,Store_2,Item_D,21,6.3 +2024-02-17,Store_2,Item_E,19,6.3 +2024-02-17,Store_3,Item_A,33,10.8 +2024-02-17,Store_3,Item_B,23,9.0 +2024-02-17,Store_3,Item_C,23,9.0 +2024-02-17,Store_3,Item_D,14,6.3 +2024-02-17,Store_3,Item_E,15,6.3 +2024-02-18,Store_1,Item_A,52,10.8 +2024-02-18,Store_1,Item_B,26,9.0 +2024-02-18,Store_1,Item_C,29,9.0 
+2024-02-18,Store_1,Item_D,32,6.3 +2024-02-18,Store_1,Item_E,29,6.3 +2024-02-18,Store_2,Item_A,31,10.8 +2024-02-18,Store_2,Item_B,27,9.0 +2024-02-18,Store_2,Item_C,29,9.0 +2024-02-18,Store_2,Item_D,17,6.3 +2024-02-18,Store_2,Item_E,16,6.3 +2024-02-18,Store_3,Item_A,27,10.8 +2024-02-18,Store_3,Item_B,17,9.0 +2024-02-18,Store_3,Item_C,23,9.0 +2024-02-18,Store_3,Item_D,10,6.3 +2024-02-18,Store_3,Item_E,18,6.3 +2024-02-19,Store_1,Item_A,34,10.8 +2024-02-19,Store_1,Item_B,27,9.0 +2024-02-19,Store_1,Item_C,31,9.0 +2024-02-19,Store_1,Item_D,24,6.3 +2024-02-19,Store_1,Item_E,22,6.3 +2024-02-19,Store_2,Item_A,13,10.8 +2024-02-19,Store_2,Item_B,13,9.0 +2024-02-19,Store_2,Item_C,15,9.0 +2024-02-19,Store_2,Item_D,12,6.3 +2024-02-19,Store_2,Item_E,13,6.3 +2024-02-19,Store_3,Item_A,14,10.8 +2024-02-19,Store_3,Item_B,14,9.0 +2024-02-19,Store_3,Item_C,11,9.0 +2024-02-19,Store_3,Item_D,8,6.3 +2024-02-19,Store_3,Item_E,7,6.3 +2024-02-20,Store_1,Item_A,25,10.8 +2024-02-20,Store_1,Item_B,18,9.0 +2024-02-20,Store_1,Item_C,20,9.0 +2024-02-20,Store_1,Item_D,21,6.3 +2024-02-20,Store_1,Item_E,14,6.3 +2024-02-20,Store_2,Item_A,31,10.8 +2024-02-20,Store_2,Item_B,18,9.0 +2024-02-20,Store_2,Item_C,17,9.0 +2024-02-20,Store_2,Item_D,9,6.3 +2024-02-20,Store_2,Item_E,13,6.3 +2024-02-20,Store_3,Item_A,14,10.8 +2024-02-20,Store_3,Item_B,14,9.0 +2024-02-20,Store_3,Item_C,20,9.0 +2024-02-20,Store_3,Item_D,9,6.3 +2024-02-20,Store_3,Item_E,11,6.3 +2024-02-21,Store_1,Item_A,14,12.0 +2024-02-21,Store_1,Item_B,13,10.0 +2024-02-21,Store_1,Item_C,18,10.0 +2024-02-21,Store_1,Item_D,8,7.0 +2024-02-21,Store_1,Item_E,13,7.0 +2024-02-21,Store_2,Item_A,12,12.0 +2024-02-21,Store_2,Item_B,8,10.0 +2024-02-21,Store_2,Item_C,10,10.0 +2024-02-21,Store_2,Item_D,7,7.0 +2024-02-21,Store_2,Item_E,7,7.0 +2024-02-21,Store_3,Item_A,6,12.0 +2024-02-21,Store_3,Item_B,6,10.0 +2024-02-21,Store_3,Item_C,7,10.0 +2024-02-21,Store_3,Item_D,5,7.0 +2024-02-21,Store_3,Item_E,5,7.0 +2024-02-22,Store_1,Item_A,17,12.0 +2024-02-22,Store_1,Item_B,10,10.0 +2024-02-22,Store_1,Item_C,14,10.0 +2024-02-22,Store_1,Item_D,10,7.0 +2024-02-22,Store_1,Item_E,10,7.0 +2024-02-22,Store_2,Item_A,13,12.0 +2024-02-22,Store_2,Item_B,10,10.0 +2024-02-22,Store_2,Item_C,10,10.0 +2024-02-22,Store_2,Item_D,6,7.0 +2024-02-22,Store_2,Item_E,4,7.0 +2024-02-22,Store_3,Item_A,9,12.0 +2024-02-22,Store_3,Item_B,7,10.0 +2024-02-22,Store_3,Item_C,7,10.0 +2024-02-22,Store_3,Item_D,3,7.0 +2024-02-22,Store_3,Item_E,4,7.0 +2024-02-23,Store_1,Item_A,20,12.0 +2024-02-23,Store_1,Item_B,13,10.0 +2024-02-23,Store_1,Item_C,15,10.0 +2024-02-23,Store_1,Item_D,9,7.0 +2024-02-23,Store_1,Item_E,9,7.0 +2024-02-23,Store_2,Item_A,13,12.0 +2024-02-23,Store_2,Item_B,8,10.0 +2024-02-23,Store_2,Item_C,9,10.0 +2024-02-23,Store_2,Item_D,5,7.0 +2024-02-23,Store_2,Item_E,5,7.0 +2024-02-23,Store_3,Item_A,8,12.0 +2024-02-23,Store_3,Item_B,7,10.0 +2024-02-23,Store_3,Item_C,6,10.0 +2024-02-23,Store_3,Item_D,6,7.0 +2024-02-23,Store_3,Item_E,4,7.0 +2024-02-24,Store_1,Item_A,24,12.0 +2024-02-24,Store_1,Item_B,19,10.0 +2024-02-24,Store_1,Item_C,27,10.0 +2024-02-24,Store_1,Item_D,15,7.0 +2024-02-24,Store_1,Item_E,17,7.0 +2024-02-24,Store_2,Item_A,11,12.0 +2024-02-24,Store_2,Item_B,14,10.0 +2024-02-24,Store_2,Item_C,16,10.0 +2024-02-24,Store_2,Item_D,10,7.0 +2024-02-24,Store_2,Item_E,11,7.0 +2024-02-24,Store_3,Item_A,12,12.0 +2024-02-24,Store_3,Item_B,14,10.0 +2024-02-24,Store_3,Item_C,16,10.0 +2024-02-24,Store_3,Item_D,7,7.0 +2024-02-24,Store_3,Item_E,7,7.0 +2024-02-25,Store_1,Item_A,33,12.0 +2024-02-25,Store_1,Item_B,28,10.0 
+2024-02-25,Store_1,Item_C,19,10.0 +2024-02-25,Store_1,Item_D,13,7.0 +2024-02-25,Store_1,Item_E,14,7.0 +2024-02-25,Store_2,Item_A,12,12.0 +2024-02-25,Store_2,Item_B,11,10.0 +2024-02-25,Store_2,Item_C,11,10.0 +2024-02-25,Store_2,Item_D,8,7.0 +2024-02-25,Store_2,Item_E,10,7.0 +2024-02-25,Store_3,Item_A,14,12.0 +2024-02-25,Store_3,Item_B,15,10.0 +2024-02-25,Store_3,Item_C,9,10.0 +2024-02-25,Store_3,Item_D,9,7.0 +2024-02-25,Store_3,Item_E,7,7.0 +2024-02-26,Store_1,Item_A,17,12.0 +2024-02-26,Store_1,Item_B,16,10.0 +2024-02-26,Store_1,Item_C,11,10.0 +2024-02-26,Store_1,Item_D,11,7.0 +2024-02-26,Store_1,Item_E,10,7.0 +2024-02-26,Store_2,Item_A,13,12.0 +2024-02-26,Store_2,Item_B,10,10.0 +2024-02-26,Store_2,Item_C,14,10.0 +2024-02-26,Store_2,Item_D,6,7.0 +2024-02-26,Store_2,Item_E,6,7.0 +2024-02-26,Store_3,Item_A,8,12.0 +2024-02-26,Store_3,Item_B,7,10.0 +2024-02-26,Store_3,Item_C,6,10.0 +2024-02-26,Store_3,Item_D,6,7.0 +2024-02-26,Store_3,Item_E,7,7.0 +2024-02-27,Store_1,Item_A,16,12.0 +2024-02-27,Store_1,Item_B,12,10.0 +2024-02-27,Store_1,Item_C,16,10.0 +2024-02-27,Store_1,Item_D,9,7.0 +2024-02-27,Store_1,Item_E,12,7.0 +2024-02-27,Store_2,Item_A,12,12.0 +2024-02-27,Store_2,Item_B,7,10.0 +2024-02-27,Store_2,Item_C,13,10.0 +2024-02-27,Store_2,Item_D,9,7.0 +2024-02-27,Store_2,Item_E,6,7.0 +2024-02-27,Store_3,Item_A,9,12.0 +2024-02-27,Store_3,Item_B,8,10.0 +2024-02-27,Store_3,Item_C,8,10.0 +2024-02-27,Store_3,Item_D,6,7.0 +2024-02-27,Store_3,Item_E,8,7.0 +2024-02-28,Store_1,Item_A,18,12.0 +2024-02-28,Store_1,Item_B,13,10.0 +2024-02-28,Store_1,Item_C,11,10.0 +2024-02-28,Store_1,Item_D,9,7.0 +2024-02-28,Store_1,Item_E,11,7.0 +2024-02-28,Store_2,Item_A,17,12.0 +2024-02-28,Store_2,Item_B,9,10.0 +2024-02-28,Store_2,Item_C,11,10.0 +2024-02-28,Store_2,Item_D,7,7.0 +2024-02-28,Store_2,Item_E,9,7.0 +2024-02-28,Store_3,Item_A,15,12.0 +2024-02-28,Store_3,Item_B,7,10.0 +2024-02-28,Store_3,Item_C,7,10.0 +2024-02-28,Store_3,Item_D,7,7.0 +2024-02-28,Store_3,Item_E,6,7.0 +2024-02-29,Store_1,Item_A,27,12.0 +2024-02-29,Store_1,Item_B,18,10.0 +2024-02-29,Store_1,Item_C,15,10.0 +2024-02-29,Store_1,Item_D,13,7.0 +2024-02-29,Store_1,Item_E,14,7.0 +2024-02-29,Store_2,Item_A,15,12.0 +2024-02-29,Store_2,Item_B,12,10.0 +2024-02-29,Store_2,Item_C,13,10.0 +2024-02-29,Store_2,Item_D,9,7.0 +2024-02-29,Store_2,Item_E,10,7.0 +2024-02-29,Store_3,Item_A,12,12.0 +2024-02-29,Store_3,Item_B,8,10.0 +2024-02-29,Store_3,Item_C,9,10.0 +2024-02-29,Store_3,Item_D,7,7.0 +2024-02-29,Store_3,Item_E,5,7.0 +2024-03-01,Store_1,Item_A,47,12.0 +2024-03-01,Store_1,Item_B,33,10.0 +2024-03-01,Store_1,Item_C,37,10.0 +2024-03-01,Store_1,Item_D,32,7.0 +2024-03-01,Store_1,Item_E,26,7.0 +2024-03-01,Store_2,Item_A,29,12.0 +2024-03-01,Store_2,Item_B,17,10.0 +2024-03-01,Store_2,Item_C,28,10.0 +2024-03-01,Store_2,Item_D,19,7.0 +2024-03-01,Store_2,Item_E,24,7.0 +2024-03-01,Store_3,Item_A,22,12.0 +2024-03-01,Store_3,Item_B,20,10.0 +2024-03-01,Store_3,Item_C,20,10.0 +2024-03-01,Store_3,Item_D,18,7.0 +2024-03-01,Store_3,Item_E,13,7.0 +2024-03-02,Store_1,Item_A,36,12.0 +2024-03-02,Store_1,Item_B,32,10.0 +2024-03-02,Store_1,Item_C,29,10.0 +2024-03-02,Store_1,Item_D,18,7.0 +2024-03-02,Store_1,Item_E,20,7.0 +2024-03-02,Store_2,Item_A,27,12.0 +2024-03-02,Store_2,Item_B,15,10.0 +2024-03-02,Store_2,Item_C,19,10.0 +2024-03-02,Store_2,Item_D,11,7.0 +2024-03-02,Store_2,Item_E,16,7.0 +2024-03-02,Store_3,Item_A,12,12.0 +2024-03-02,Store_3,Item_B,13,10.0 +2024-03-02,Store_3,Item_C,16,10.0 +2024-03-02,Store_3,Item_D,14,7.0 +2024-03-02,Store_3,Item_E,10,7.0 
+2024-03-03,Store_1,Item_A,31,12.0 +2024-03-03,Store_1,Item_B,41,10.0 +2024-03-03,Store_1,Item_C,21,10.0 +2024-03-03,Store_1,Item_D,11,7.0 +2024-03-03,Store_1,Item_E,22,7.0 +2024-03-03,Store_2,Item_A,21,12.0 +2024-03-03,Store_2,Item_B,15,10.0 +2024-03-03,Store_2,Item_C,22,10.0 +2024-03-03,Store_2,Item_D,14,7.0 +2024-03-03,Store_2,Item_E,12,7.0 +2024-03-03,Store_3,Item_A,14,12.0 +2024-03-03,Store_3,Item_B,18,10.0 +2024-03-03,Store_3,Item_C,17,10.0 +2024-03-03,Store_3,Item_D,10,7.0 +2024-03-03,Store_3,Item_E,15,7.0 +2024-03-04,Store_1,Item_A,19,12.0 +2024-03-04,Store_1,Item_B,14,10.0 +2024-03-04,Store_1,Item_C,17,10.0 +2024-03-04,Store_1,Item_D,14,7.0 +2024-03-04,Store_1,Item_E,15,7.0 +2024-03-04,Store_2,Item_A,15,12.0 +2024-03-04,Store_2,Item_B,14,10.0 +2024-03-04,Store_2,Item_C,10,10.0 +2024-03-04,Store_2,Item_D,12,7.0 +2024-03-04,Store_2,Item_E,9,7.0 +2024-03-04,Store_3,Item_A,16,12.0 +2024-03-04,Store_3,Item_B,11,10.0 +2024-03-04,Store_3,Item_C,11,10.0 +2024-03-04,Store_3,Item_D,8,7.0 +2024-03-04,Store_3,Item_E,8,7.0 +2024-03-05,Store_1,Item_A,28,12.0 +2024-03-05,Store_1,Item_B,26,10.0 +2024-03-05,Store_1,Item_C,21,10.0 +2024-03-05,Store_1,Item_D,16,7.0 +2024-03-05,Store_1,Item_E,14,7.0 +2024-03-05,Store_2,Item_A,21,12.0 +2024-03-05,Store_2,Item_B,12,10.0 +2024-03-05,Store_2,Item_C,19,10.0 +2024-03-05,Store_2,Item_D,9,7.0 +2024-03-05,Store_2,Item_E,6,7.0 +2024-03-05,Store_3,Item_A,13,12.0 +2024-03-05,Store_3,Item_B,9,10.0 +2024-03-05,Store_3,Item_C,13,10.0 +2024-03-05,Store_3,Item_D,6,7.0 +2024-03-05,Store_3,Item_E,7,7.0 +2024-03-06,Store_1,Item_A,15,12.0 +2024-03-06,Store_1,Item_B,19,10.0 +2024-03-06,Store_1,Item_C,10,10.0 +2024-03-06,Store_1,Item_D,10,7.0 +2024-03-06,Store_1,Item_E,17,7.0 +2024-03-06,Store_2,Item_A,19,12.0 +2024-03-06,Store_2,Item_B,15,10.0 +2024-03-06,Store_2,Item_C,11,10.0 +2024-03-06,Store_2,Item_D,9,7.0 +2024-03-06,Store_2,Item_E,9,7.0 +2024-03-06,Store_3,Item_A,10,12.0 +2024-03-06,Store_3,Item_B,14,10.0 +2024-03-06,Store_3,Item_C,13,10.0 +2024-03-06,Store_3,Item_D,7,7.0 +2024-03-06,Store_3,Item_E,8,7.0 +2024-03-07,Store_1,Item_A,26,12.0 +2024-03-07,Store_1,Item_B,12,10.0 +2024-03-07,Store_1,Item_C,20,10.0 +2024-03-07,Store_1,Item_D,13,7.0 +2024-03-07,Store_1,Item_E,12,7.0 +2024-03-07,Store_2,Item_A,16,12.0 +2024-03-07,Store_2,Item_B,19,10.0 +2024-03-07,Store_2,Item_C,16,10.0 +2024-03-07,Store_2,Item_D,8,7.0 +2024-03-07,Store_2,Item_E,11,7.0 +2024-03-07,Store_3,Item_A,17,12.0 +2024-03-07,Store_3,Item_B,13,10.0 +2024-03-07,Store_3,Item_C,11,10.0 +2024-03-07,Store_3,Item_D,7,7.0 +2024-03-07,Store_3,Item_E,9,7.0 +2024-03-08,Store_1,Item_A,27,12.0 +2024-03-08,Store_1,Item_B,25,10.0 +2024-03-08,Store_1,Item_C,24,10.0 +2024-03-08,Store_1,Item_D,18,7.0 +2024-03-08,Store_1,Item_E,13,7.0 +2024-03-08,Store_2,Item_A,21,12.0 +2024-03-08,Store_2,Item_B,14,10.0 +2024-03-08,Store_2,Item_C,20,10.0 +2024-03-08,Store_2,Item_D,8,7.0 +2024-03-08,Store_2,Item_E,13,7.0 +2024-03-08,Store_3,Item_A,14,12.0 +2024-03-08,Store_3,Item_B,10,10.0 +2024-03-08,Store_3,Item_C,10,10.0 +2024-03-08,Store_3,Item_D,7,7.0 +2024-03-08,Store_3,Item_E,8,7.0 +2024-03-09,Store_1,Item_A,42,12.0 +2024-03-09,Store_1,Item_B,28,10.0 +2024-03-09,Store_1,Item_C,32,10.0 +2024-03-09,Store_1,Item_D,32,7.0 +2024-03-09,Store_1,Item_E,30,7.0 +2024-03-09,Store_2,Item_A,28,12.0 +2024-03-09,Store_2,Item_B,21,10.0 +2024-03-09,Store_2,Item_C,22,10.0 +2024-03-09,Store_2,Item_D,16,7.0 +2024-03-09,Store_2,Item_E,11,7.0 +2024-03-09,Store_3,Item_A,19,12.0 +2024-03-09,Store_3,Item_B,11,10.0 +2024-03-09,Store_3,Item_C,18,10.0 
+2024-03-09,Store_3,Item_D,13,7.0 +2024-03-09,Store_3,Item_E,10,7.0 +2024-03-10,Store_1,Item_A,90,10.8 +2024-03-10,Store_1,Item_B,43,9.0 +2024-03-10,Store_1,Item_C,40,9.0 +2024-03-10,Store_1,Item_D,41,6.3 +2024-03-10,Store_1,Item_E,48,6.3 +2024-03-10,Store_2,Item_A,61,10.8 +2024-03-10,Store_2,Item_B,34,9.0 +2024-03-10,Store_2,Item_C,46,9.0 +2024-03-10,Store_2,Item_D,26,6.3 +2024-03-10,Store_2,Item_E,25,6.3 +2024-03-10,Store_3,Item_A,43,10.8 +2024-03-10,Store_3,Item_B,34,9.0 +2024-03-10,Store_3,Item_C,20,9.0 +2024-03-10,Store_3,Item_D,27,6.3 +2024-03-10,Store_3,Item_E,27,6.3 +2024-03-11,Store_1,Item_A,39,10.8 +2024-03-11,Store_1,Item_B,40,9.0 +2024-03-11,Store_1,Item_C,41,9.0 +2024-03-11,Store_1,Item_D,27,6.3 +2024-03-11,Store_1,Item_E,23,6.3 +2024-03-11,Store_2,Item_A,25,10.8 +2024-03-11,Store_2,Item_B,24,9.0 +2024-03-11,Store_2,Item_C,30,9.0 +2024-03-11,Store_2,Item_D,19,6.3 +2024-03-11,Store_2,Item_E,16,6.3 +2024-03-11,Store_3,Item_A,27,10.8 +2024-03-11,Store_3,Item_B,8,9.0 +2024-03-11,Store_3,Item_C,24,9.0 +2024-03-11,Store_3,Item_D,9,6.3 +2024-03-11,Store_3,Item_E,12,6.3 +2024-03-12,Store_1,Item_A,34,10.8 +2024-03-12,Store_1,Item_B,27,9.0 +2024-03-12,Store_1,Item_C,45,9.0 +2024-03-12,Store_1,Item_D,23,6.3 +2024-03-12,Store_1,Item_E,27,6.3 +2024-03-12,Store_2,Item_A,29,10.8 +2024-03-12,Store_2,Item_B,26,9.0 +2024-03-12,Store_2,Item_C,24,9.0 +2024-03-12,Store_2,Item_D,12,6.3 +2024-03-12,Store_2,Item_E,20,6.3 +2024-03-12,Store_3,Item_A,21,10.8 +2024-03-12,Store_3,Item_B,13,9.0 +2024-03-12,Store_3,Item_C,17,9.0 +2024-03-12,Store_3,Item_D,17,6.3 +2024-03-12,Store_3,Item_E,15,6.3 +2024-03-13,Store_1,Item_A,39,10.8 +2024-03-13,Store_1,Item_B,33,9.0 +2024-03-13,Store_1,Item_C,33,9.0 +2024-03-13,Store_1,Item_D,35,6.3 +2024-03-13,Store_1,Item_E,26,6.3 +2024-03-13,Store_2,Item_A,30,10.8 +2024-03-13,Store_2,Item_B,28,9.0 +2024-03-13,Store_2,Item_C,24,9.0 +2024-03-13,Store_2,Item_D,15,6.3 +2024-03-13,Store_2,Item_E,15,6.3 +2024-03-13,Store_3,Item_A,22,10.8 +2024-03-13,Store_3,Item_B,18,9.0 +2024-03-13,Store_3,Item_C,21,9.0 +2024-03-13,Store_3,Item_D,13,6.3 +2024-03-13,Store_3,Item_E,15,6.3 +2024-03-14,Store_1,Item_A,40,10.8 +2024-03-14,Store_1,Item_B,34,9.0 +2024-03-14,Store_1,Item_C,20,9.0 +2024-03-14,Store_1,Item_D,28,6.3 +2024-03-14,Store_1,Item_E,25,6.3 +2024-03-14,Store_2,Item_A,32,10.8 +2024-03-14,Store_2,Item_B,9,9.0 +2024-03-14,Store_2,Item_C,32,9.0 +2024-03-14,Store_2,Item_D,15,6.3 +2024-03-14,Store_2,Item_E,19,6.3 +2024-03-14,Store_3,Item_A,17,10.8 +2024-03-14,Store_3,Item_B,20,9.0 +2024-03-14,Store_3,Item_C,14,9.0 +2024-03-14,Store_3,Item_D,11,6.3 +2024-03-14,Store_3,Item_E,17,6.3 +2024-03-15,Store_1,Item_A,76,10.8 +2024-03-15,Store_1,Item_B,68,9.0 +2024-03-15,Store_1,Item_C,77,9.0 +2024-03-15,Store_1,Item_D,50,6.3 +2024-03-15,Store_1,Item_E,47,6.3 +2024-03-15,Store_2,Item_A,56,10.8 +2024-03-15,Store_2,Item_B,65,9.0 +2024-03-15,Store_2,Item_C,43,9.0 +2024-03-15,Store_2,Item_D,32,6.3 +2024-03-15,Store_2,Item_E,37,6.3 +2024-03-15,Store_3,Item_A,51,10.8 +2024-03-15,Store_3,Item_B,43,9.0 +2024-03-15,Store_3,Item_C,39,9.0 +2024-03-15,Store_3,Item_D,19,6.3 +2024-03-15,Store_3,Item_E,32,6.3 +2024-03-16,Store_1,Item_A,43,10.8 +2024-03-16,Store_1,Item_B,50,9.0 +2024-03-16,Store_1,Item_C,40,9.0 +2024-03-16,Store_1,Item_D,32,6.3 +2024-03-16,Store_1,Item_E,35,6.3 +2024-03-16,Store_2,Item_A,40,10.8 +2024-03-16,Store_2,Item_B,34,9.0 +2024-03-16,Store_2,Item_C,41,9.0 +2024-03-16,Store_2,Item_D,24,6.3 +2024-03-16,Store_2,Item_E,21,6.3 +2024-03-16,Store_3,Item_A,36,10.8 +2024-03-16,Store_3,Item_B,31,9.0 
+2024-03-16,Store_3,Item_C,19,9.0 +2024-03-16,Store_3,Item_D,19,6.3 +2024-03-16,Store_3,Item_E,20,6.3 +2024-03-17,Store_1,Item_A,51,10.8 +2024-03-17,Store_1,Item_B,58,9.0 +2024-03-17,Store_1,Item_C,44,9.0 +2024-03-17,Store_1,Item_D,32,6.3 +2024-03-17,Store_1,Item_E,30,6.3 +2024-03-17,Store_2,Item_A,36,10.8 +2024-03-17,Store_2,Item_B,23,9.0 +2024-03-17,Store_2,Item_C,21,9.0 +2024-03-17,Store_2,Item_D,28,6.3 +2024-03-17,Store_2,Item_E,17,6.3 +2024-03-17,Store_3,Item_A,23,10.8 +2024-03-17,Store_3,Item_B,13,9.0 +2024-03-17,Store_3,Item_C,21,9.0 +2024-03-17,Store_3,Item_D,12,6.3 +2024-03-17,Store_3,Item_E,22,6.3 +2024-03-18,Store_1,Item_A,42,10.8 +2024-03-18,Store_1,Item_B,25,9.0 +2024-03-18,Store_1,Item_C,42,9.0 +2024-03-18,Store_1,Item_D,24,6.3 +2024-03-18,Store_1,Item_E,19,6.3 +2024-03-18,Store_2,Item_A,11,10.8 +2024-03-18,Store_2,Item_B,28,9.0 +2024-03-18,Store_2,Item_C,14,9.0 +2024-03-18,Store_2,Item_D,9,6.3 +2024-03-18,Store_2,Item_E,16,6.3 +2024-03-18,Store_3,Item_A,27,10.8 +2024-03-18,Store_3,Item_B,19,9.0 +2024-03-18,Store_3,Item_C,17,9.0 +2024-03-18,Store_3,Item_D,12,6.3 +2024-03-18,Store_3,Item_E,12,6.3 +2024-03-19,Store_1,Item_A,39,10.8 +2024-03-19,Store_1,Item_B,29,9.0 +2024-03-19,Store_1,Item_C,28,9.0 +2024-03-19,Store_1,Item_D,19,6.3 +2024-03-19,Store_1,Item_E,16,6.3 +2024-03-19,Store_2,Item_A,21,10.8 +2024-03-19,Store_2,Item_B,12,9.0 +2024-03-19,Store_2,Item_C,18,9.0 +2024-03-19,Store_2,Item_D,10,6.3 +2024-03-19,Store_2,Item_E,10,6.3 +2024-03-19,Store_3,Item_A,18,10.8 +2024-03-19,Store_3,Item_B,19,9.0 +2024-03-19,Store_3,Item_C,17,9.0 +2024-03-19,Store_3,Item_D,7,6.3 +2024-03-19,Store_3,Item_E,8,6.3 +2024-03-20,Store_1,Item_A,39,10.8 +2024-03-20,Store_1,Item_B,22,9.0 +2024-03-20,Store_1,Item_C,26,9.0 +2024-03-20,Store_1,Item_D,21,6.3 +2024-03-20,Store_1,Item_E,15,6.3 +2024-03-20,Store_2,Item_A,22,10.8 +2024-03-20,Store_2,Item_B,13,9.0 +2024-03-20,Store_2,Item_C,16,9.0 +2024-03-20,Store_2,Item_D,13,6.3 +2024-03-20,Store_2,Item_E,8,6.3 +2024-03-20,Store_3,Item_A,19,10.8 +2024-03-20,Store_3,Item_B,14,9.0 +2024-03-20,Store_3,Item_C,16,9.0 +2024-03-20,Store_3,Item_D,10,6.3 +2024-03-20,Store_3,Item_E,13,6.3 +2024-03-21,Store_1,Item_A,18,12.0 +2024-03-21,Store_1,Item_B,13,10.0 +2024-03-21,Store_1,Item_C,15,10.0 +2024-03-21,Store_1,Item_D,11,7.0 +2024-03-21,Store_1,Item_E,10,7.0 +2024-03-21,Store_2,Item_A,12,12.0 +2024-03-21,Store_2,Item_B,8,10.0 +2024-03-21,Store_2,Item_C,8,10.0 +2024-03-21,Store_2,Item_D,5,7.0 +2024-03-21,Store_2,Item_E,9,7.0 +2024-03-21,Store_3,Item_A,7,12.0 +2024-03-21,Store_3,Item_B,6,10.0 +2024-03-21,Store_3,Item_C,6,10.0 +2024-03-21,Store_3,Item_D,6,7.0 +2024-03-21,Store_3,Item_E,5,7.0 +2024-03-22,Store_1,Item_A,22,12.0 +2024-03-22,Store_1,Item_B,12,10.0 +2024-03-22,Store_1,Item_C,15,10.0 +2024-03-22,Store_1,Item_D,8,7.0 +2024-03-22,Store_1,Item_E,14,7.0 +2024-03-22,Store_2,Item_A,9,12.0 +2024-03-22,Store_2,Item_B,10,10.0 +2024-03-22,Store_2,Item_C,11,10.0 +2024-03-22,Store_2,Item_D,7,7.0 +2024-03-22,Store_2,Item_E,6,7.0 +2024-03-22,Store_3,Item_A,9,12.0 +2024-03-22,Store_3,Item_B,8,10.0 +2024-03-22,Store_3,Item_C,7,10.0 +2024-03-22,Store_3,Item_D,6,7.0 +2024-03-22,Store_3,Item_E,8,7.0 +2024-03-23,Store_1,Item_A,23,12.0 +2024-03-23,Store_1,Item_B,16,10.0 +2024-03-23,Store_1,Item_C,22,10.0 +2024-03-23,Store_1,Item_D,23,7.0 +2024-03-23,Store_1,Item_E,13,7.0 +2024-03-23,Store_2,Item_A,23,12.0 +2024-03-23,Store_2,Item_B,19,10.0 +2024-03-23,Store_2,Item_C,13,10.0 +2024-03-23,Store_2,Item_D,11,7.0 +2024-03-23,Store_2,Item_E,13,7.0 +2024-03-23,Store_3,Item_A,13,12.0 
+2024-03-23,Store_3,Item_B,11,10.0 +2024-03-23,Store_3,Item_C,10,10.0 +2024-03-23,Store_3,Item_D,6,7.0 +2024-03-23,Store_3,Item_E,7,7.0 +2024-03-24,Store_1,Item_A,20,12.0 +2024-03-24,Store_1,Item_B,22,10.0 +2024-03-24,Store_1,Item_C,19,10.0 +2024-03-24,Store_1,Item_D,16,7.0 +2024-03-24,Store_1,Item_E,10,7.0 +2024-03-24,Store_2,Item_A,19,12.0 +2024-03-24,Store_2,Item_B,17,10.0 +2024-03-24,Store_2,Item_C,9,10.0 +2024-03-24,Store_2,Item_D,11,7.0 +2024-03-24,Store_2,Item_E,13,7.0 +2024-03-24,Store_3,Item_A,11,12.0 +2024-03-24,Store_3,Item_B,16,10.0 +2024-03-24,Store_3,Item_C,13,10.0 +2024-03-24,Store_3,Item_D,7,7.0 +2024-03-24,Store_3,Item_E,8,7.0 +2024-03-25,Store_1,Item_A,20,12.0 +2024-03-25,Store_1,Item_B,15,10.0 +2024-03-25,Store_1,Item_C,17,10.0 +2024-03-25,Store_1,Item_D,8,7.0 +2024-03-25,Store_1,Item_E,11,7.0 +2024-03-25,Store_2,Item_A,8,12.0 +2024-03-25,Store_2,Item_B,10,10.0 +2024-03-25,Store_2,Item_C,8,10.0 +2024-03-25,Store_2,Item_D,6,7.0 +2024-03-25,Store_2,Item_E,9,7.0 +2024-03-25,Store_3,Item_A,12,12.0 +2024-03-25,Store_3,Item_B,9,10.0 +2024-03-25,Store_3,Item_C,6,10.0 +2024-03-25,Store_3,Item_D,5,7.0 +2024-03-25,Store_3,Item_E,7,7.0 +2024-03-26,Store_1,Item_A,19,12.0 +2024-03-26,Store_1,Item_B,23,10.0 +2024-03-26,Store_1,Item_C,17,10.0 +2024-03-26,Store_1,Item_D,11,7.0 +2024-03-26,Store_1,Item_E,10,7.0 +2024-03-26,Store_2,Item_A,13,12.0 +2024-03-26,Store_2,Item_B,10,10.0 +2024-03-26,Store_2,Item_C,12,10.0 +2024-03-26,Store_2,Item_D,8,7.0 +2024-03-26,Store_2,Item_E,6,7.0 +2024-03-26,Store_3,Item_A,7,12.0 +2024-03-26,Store_3,Item_B,5,10.0 +2024-03-26,Store_3,Item_C,9,10.0 +2024-03-26,Store_3,Item_D,4,7.0 +2024-03-26,Store_3,Item_E,3,7.0 +2024-03-27,Store_1,Item_A,21,12.0 +2024-03-27,Store_1,Item_B,24,10.0 +2024-03-27,Store_1,Item_C,16,10.0 +2024-03-27,Store_1,Item_D,13,7.0 +2024-03-27,Store_1,Item_E,11,7.0 +2024-03-27,Store_2,Item_A,13,12.0 +2024-03-27,Store_2,Item_B,13,10.0 +2024-03-27,Store_2,Item_C,10,10.0 +2024-03-27,Store_2,Item_D,8,7.0 +2024-03-27,Store_2,Item_E,8,7.0 +2024-03-27,Store_3,Item_A,9,12.0 +2024-03-27,Store_3,Item_B,10,10.0 +2024-03-27,Store_3,Item_C,8,10.0 +2024-03-27,Store_3,Item_D,5,7.0 +2024-03-27,Store_3,Item_E,6,7.0 +2024-03-28,Store_1,Item_A,17,12.0 +2024-03-28,Store_1,Item_B,17,10.0 +2024-03-28,Store_1,Item_C,17,10.0 +2024-03-28,Store_1,Item_D,14,7.0 +2024-03-28,Store_1,Item_E,13,7.0 +2024-03-28,Store_2,Item_A,13,12.0 +2024-03-28,Store_2,Item_B,13,10.0 +2024-03-28,Store_2,Item_C,14,10.0 +2024-03-28,Store_2,Item_D,9,7.0 +2024-03-28,Store_2,Item_E,8,7.0 +2024-03-28,Store_3,Item_A,8,12.0 +2024-03-28,Store_3,Item_B,7,10.0 +2024-03-28,Store_3,Item_C,8,10.0 +2024-03-28,Store_3,Item_D,5,7.0 +2024-03-28,Store_3,Item_E,6,7.0 +2024-03-29,Store_1,Item_A,22,12.0 +2024-03-29,Store_1,Item_B,20,10.0 +2024-03-29,Store_1,Item_C,17,10.0 +2024-03-29,Store_1,Item_D,13,7.0 +2024-03-29,Store_1,Item_E,12,7.0 +2024-03-29,Store_2,Item_A,8,12.0 +2024-03-29,Store_2,Item_B,9,10.0 +2024-03-29,Store_2,Item_C,11,10.0 +2024-03-29,Store_2,Item_D,6,7.0 +2024-03-29,Store_2,Item_E,9,7.0 +2024-03-29,Store_3,Item_A,15,12.0 +2024-03-29,Store_3,Item_B,12,10.0 +2024-03-29,Store_3,Item_C,9,10.0 +2024-03-29,Store_3,Item_D,8,7.0 +2024-03-29,Store_3,Item_E,6,7.0 +2024-03-30,Store_1,Item_A,31,12.0 +2024-03-30,Store_1,Item_B,25,10.0 +2024-03-30,Store_1,Item_C,26,10.0 +2024-03-30,Store_1,Item_D,18,7.0 +2024-03-30,Store_1,Item_E,20,7.0 +2024-03-30,Store_2,Item_A,23,12.0 +2024-03-30,Store_2,Item_B,19,10.0 +2024-03-30,Store_2,Item_C,20,10.0 +2024-03-30,Store_2,Item_D,16,7.0 
+2024-03-30,Store_2,Item_E,15,7.0 +2024-03-30,Store_3,Item_A,12,12.0 +2024-03-30,Store_3,Item_B,7,10.0 +2024-03-30,Store_3,Item_C,18,10.0 +2024-03-30,Store_3,Item_D,7,7.0 +2024-03-30,Store_3,Item_E,10,7.0 diff --git a/retail-forecast/materializers/__init__.py b/retail-forecast/materializers/__init__.py new file mode 100644 index 00000000..3a896209 --- /dev/null +++ b/retail-forecast/materializers/__init__.py @@ -0,0 +1,3 @@ +from .prophet_materializer import ProphetMaterializer + +__all__ = ["ProphetMaterializer"] \ No newline at end of file diff --git a/retail-forecast/materializers/prophet_materializer.py b/retail-forecast/materializers/prophet_materializer.py new file mode 100644 index 00000000..2f42c091 --- /dev/null +++ b/retail-forecast/materializers/prophet_materializer.py @@ -0,0 +1,156 @@ +import json +import os +from typing import Any, Dict, Type + +from prophet import Prophet +from prophet.serialize import model_from_json, model_to_json +from zenml.enums import ArtifactType +from zenml.materializers.base_materializer import BaseMaterializer + + +class ProphetMaterializer(BaseMaterializer): + """Materializer for Prophet models.""" + + ASSOCIATED_TYPES = (Prophet, dict) + ASSOCIATED_ARTIFACT_TYPE = ArtifactType.MODEL + + def load(self, data_type: Type[Any]) -> Any: + """Load a Prophet model or dictionary of Prophet models from storage. + + Args: + data_type: The data type to load. + + Returns: + A Prophet model or dictionary of Prophet models. + """ + # Check if we're loading a dictionary + if data_type == dict: + # Path to the keys file + keys_path = os.path.join(self.uri, "keys.json") + + # Load the keys + with self.artifact_store.open(keys_path, "r") as f: + keys = json.load(f) + + # Load each model + result = {} + for key in keys: + model_dir = os.path.join(self.uri, "models", key) + model_path = os.path.join(model_dir, "model.json") + + # Load the model JSON + with self.artifact_store.open(model_path, "r") as f: + model_json = f.read() + + # Create the Prophet model + result[key] = model_from_json(model_json) + + return result + else: + # Path to the serialized model + model_path = os.path.join(self.uri, "model.json") + + # Load the serialized model + with self.artifact_store.open(model_path, "r") as f: + model_json = f.read() + + # Create a new Prophet model from the JSON + model = model_from_json(model_json) + + return model + + def save(self, data: Any) -> None: + """Save a Prophet model or dictionary of Prophet models to storage. + + Args: + data: The Prophet model or dictionary of Prophet models to save. + """ + # Check if we're saving a dictionary + if isinstance(data, dict): + # First check if the dictionary contains Prophet models + if not all(isinstance(model, Prophet) for model in data.values()): + # If this is just a regular dictionary, use the default dictionary materializer + # by raising a ValueError + raise ValueError( + "This materializer only supports dictionaries of Prophet models." 
+ ) + + # Save the keys + keys_path = os.path.join(self.uri, "keys.json") + with self.artifact_store.open(keys_path, "w") as f: + json.dump(list(data.keys()), f) + + # Save each model + for key, model in data.items(): + # Create a directory for this model + model_dir = os.path.join(self.uri, "models", key) + os.makedirs(os.path.dirname(model_dir), exist_ok=True) + + # Serialize the model to JSON + model_json = model_to_json(model) + + # Save the serialized model + model_path = os.path.join(model_dir, "model.json") + with self.artifact_store.open(model_path, "w") as f: + f.write(model_json) + else: + # Path to save the serialized model + model_path = os.path.join(self.uri, "model.json") + + # Serialize the model to JSON + model_json = model_to_json(data) + + # Save the serialized model + with self.artifact_store.open(model_path, "w") as f: + f.write(model_json) + + def extract_metadata(self, data: Any) -> Dict[str, Any]: + """Extract metadata from the Prophet model or dictionary of Prophet models. + + Args: + data: The Prophet model or dictionary of Prophet models to extract metadata from. + + Returns: + A dictionary of metadata. + """ + # Check if we're extracting metadata from a dictionary + if isinstance(data, dict): + # Extract metadata for each model + models_metadata = {} + for key, model in data.items(): + models_metadata[key] = self._extract_model_metadata(model) + + metadata = { + "model_type": "prophet_dictionary", + "num_models": len(data), + "models": models_metadata, + } + return metadata + else: + # Extract metadata for a single model + return self._extract_model_metadata(data) + + def _extract_model_metadata(self, model: Prophet) -> Dict[str, Any]: + """Extract metadata from a single Prophet model. + + Args: + model: The Prophet model to extract metadata from. + + Returns: + A dictionary of metadata. + """ + metadata = { + "model_type": "prophet", + "seasonality_mode": model.seasonality_mode, + "growth": model.growth, + } + + # Add information about seasonalities if available + if hasattr(model, "seasonalities") and model.seasonalities: + metadata["seasonalities"] = list(model.seasonalities.keys()) + + # Add information about regressors if available + if hasattr(model, "extra_regressors") and model.extra_regressors: + metadata["regressors"] = len(model.extra_regressors) + + return metadata diff --git a/retail-forecast/pipelines/inference_pipeline.py b/retail-forecast/pipelines/inference_pipeline.py new file mode 100644 index 00000000..670536c7 --- /dev/null +++ b/retail-forecast/pipelines/inference_pipeline.py @@ -0,0 +1,55 @@ +from steps.data_loader import load_data +from steps.data_preprocessor import preprocess_data +from steps.data_visualizer import visualize_sales_data +from steps.predictor import generate_forecasts +from zenml import get_pipeline_context, pipeline + + +@pipeline(name="retail_forecast_inference_pipeline") +def inference_pipeline(): + """Pipeline to make retail demand forecasts using trained Prophet models. + + This pipeline is for when you already have trained models and want to + generate new forecasts without retraining. + + Steps: + 1. Load sales data + 2. Preprocess data + 3. 
Generate forecasts using provided models or simple baseline models + + Returns: + combined_forecast: Combined dataframe with all series forecasts + forecast_dashboard: HTML dashboard with forecast visualizations + sales_visualization: Interactive visualization of historical sales patterns + """ + # Load data + sales_data = load_data() + + # Preprocess data + train_data_dict, test_data_dict, series_ids = preprocess_data( + sales_data=sales_data, + test_size=0.05, # Just a small test set for visualization purposes + ) + + # Create interactive visualizations of historical sales patterns + sales_viz = visualize_sales_data( + sales_data=sales_data, + train_data_dict=train_data_dict, + test_data_dict=test_data_dict, + series_ids=series_ids, + ) + + # Get the models from the Model Registry + models = get_pipeline_context().model.get_artifact( + "trained_prophet_models" + ) + + # Generate forecasts + _, combined_forecast, forecast_dashboard = generate_forecasts( + models=models, + train_data_dict=train_data_dict, + series_ids=series_ids, + ) + + # Return forecast data and dashboard + return combined_forecast, forecast_dashboard, sales_viz diff --git a/retail-forecast/pipelines/training_pipeline.py b/retail-forecast/pipelines/training_pipeline.py new file mode 100644 index 00000000..26a68f7d --- /dev/null +++ b/retail-forecast/pipelines/training_pipeline.py @@ -0,0 +1,76 @@ +from typing import Dict, Tuple + +from steps.data_loader import load_data +from steps.data_preprocessor import preprocess_data +from steps.data_visualizer import visualize_sales_data +from steps.model_evaluator import evaluate_models +from steps.model_trainer import train_model +from steps.predictor import generate_forecasts +from typing_extensions import Annotated +from zenml import pipeline +from zenml.types import HTMLString + + +@pipeline(name="retail_forecast_pipeline") +def training_pipeline() -> Tuple[ + Annotated[Dict[str, float], "model_metrics"], + Annotated[HTMLString, "evaluation_report"], + Annotated[HTMLString, "forecast_dashboard"], + Annotated[HTMLString, "sales_visualization"], +]: + """Simple retail forecasting pipeline using Prophet. + + Steps: + 1. Load sales data + 2. Preprocess data for Prophet + 3. Visualize historical sales patterns (interactive) + 4. Train Prophet models (one per store-item combination) + 5. Evaluate model performance on test data + 6. 
Generate forecasts for future periods + + Args: + test_size: Proportion of data to use for testing + forecast_periods: Number of days to forecast into the future + weekly_seasonality: Whether to include weekly seasonality in the model + + Returns: + model_metrics: Dictionary of performance metrics + evaluation_report: HTML report of model evaluation + forecast_dashboard: HTML dashboard of forecasts + sales_visualization: Interactive visualization of historical sales patterns + """ + # Load synthetic retail data + sales_data = load_data() + + # Preprocess data for Prophet + train_data_dict, test_data_dict, series_ids = preprocess_data( + sales_data=sales_data + ) + + # Create interactive visualizations of historical sales patterns + sales_viz = visualize_sales_data( + sales_data=sales_data, + train_data_dict=train_data_dict, + test_data_dict=test_data_dict, + series_ids=series_ids, + ) + + # Train Prophet models for each series + models = train_model( + train_data_dict=train_data_dict, + series_ids=series_ids, + ) + + # Evaluate models + metrics, evaluation_report = evaluate_models( + models=models, test_data_dict=test_data_dict, series_ids=series_ids + ) + + # Generate forecasts + _, _, forecast_dashboard = generate_forecasts( + models=models, + train_data_dict=train_data_dict, + series_ids=series_ids, + ) + + return metrics, evaluation_report, forecast_dashboard, sales_viz diff --git a/retail-forecast/requirements.txt b/retail-forecast/requirements.txt new file mode 100644 index 00000000..ab8b37cc --- /dev/null +++ b/retail-forecast/requirements.txt @@ -0,0 +1,9 @@ +zenml~=0.82.0 +numpy>=1.20.0 +pandas>=1.3.0 +matplotlib>=3.5.0 +prophet>=1.1.0 +typing_extensions>=4.0.0 +pyarrow +fastparquet +plotly \ No newline at end of file diff --git a/retail-forecast/run.py b/retail-forecast/run.py new file mode 100644 index 00000000..a21c8def --- /dev/null +++ b/retail-forecast/run.py @@ -0,0 +1,107 @@ +import click +import logging +from pipelines.inference_pipeline import inference_pipeline +from pipelines.training_pipeline import training_pipeline +from zenml import Model +from logging_config import configure_logging + +logger = logging.getLogger(__name__) + + +@click.command( + help=""" +RetailForecast - Simple Retail Demand Forecasting with ZenML and Prophet + +Run a simplified retail demand forecasting pipeline using Facebook Prophet. + +Examples: + + \b + # Run the training pipeline with default parameters + python run.py + + \b + # Run with custom parameters + python run.py --forecast-periods 60 --test-size 0.3 + + \b + # Run the inference pipeline + python run.py --inference +""" +) +@click.option( + "--no-cache", + is_flag=True, + default=False, + help="Disable caching for the pipeline run", +) +@click.option( + "--inference", + is_flag=True, + default=False, + help="Run the inference pipeline instead of the training pipeline", +) +@click.option( + "--log-file", + type=str, + default=None, + help="Path to log file (if not provided, logs only go to console)", +) +@click.option( + "--debug", + is_flag=True, + default=False, + help="Enable debug logging", +) +def main( + no_cache: bool = False, + inference: bool = False, + log_file: str = None, + debug: bool = False, +): + """Run a simplified retail forecasting pipeline with ZenML. 
+ + Args: + no_cache: Disable caching for the pipeline run + inference: Run the inference pipeline instead of the training pipeline + log_file: Path to log file + debug: Enable debug logging + """ + # Configure logging + log_level = logging.DEBUG if debug else logging.INFO + configure_logging(level=log_level, log_file=log_file) + + pipeline_options = {} + if no_cache: + pipeline_options["enable_cache"] = False + + logger.info("\n" + "=" * 80) + # Run the appropriate pipeline + if inference: + logger.info("Running retail forecasting inference pipeline...") + + # Create a new version of the model + model = Model( + name="retail_forecast_model", + description="A retail forecast model trained on the sales data", + version="production", + ) + inference_pipeline.with_options(model=model, **pipeline_options)() + else: + # Create a new version of the model + model = Model( + name="retail_forecast_model", + description="A retail forecast model trained on the sales data", + ) + + logger.info("Running retail forecasting training pipeline...") + training_pipeline.with_options(model=model, **pipeline_options)() + logger.info("=" * 80 + "\n") + + logger.info("\n" + "=" * 80) + logger.info("Pipeline completed successfully!") + logger.info("=" * 80 + "\n") + + +if __name__ == "__main__": + main() diff --git a/retail-forecast/steps/data_loader.py b/retail-forecast/steps/data_loader.py new file mode 100644 index 00000000..bbf0dfa8 --- /dev/null +++ b/retail-forecast/steps/data_loader.py @@ -0,0 +1,87 @@ +import os +import logging + +import numpy as np +import pandas as pd +from zenml import step + +logger = logging.getLogger(__name__) + + +@step +def load_data() -> pd.DataFrame: + """Load synthetic retail sales data for forecasting.""" + data_dir = os.path.join(os.getcwd(), "data") + sales_path = os.path.join(data_dir, "sales.csv") + + if os.path.exists(sales_path): + # Load real data if available + sales_df = pd.read_csv(sales_path) + logger.info(f"Loaded {len(sales_df)} sales records from file.") + else: + logger.info("Generating synthetic retail sales data...") + # Create synthetic dataset with retail patterns + np.random.seed(42) # For reproducibility + + # Generate date range for 3 months + date_range = pd.date_range("2024-01-01", periods=90, freq="D") + + # Create stores and items + stores = ["Store_1", "Store_2"] + items = ["Item_A", "Item_B", "Item_C"] + + records = [] + for date in date_range: + # Calendar features + is_weekend = 1 if date.dayofweek >= 5 else 0 + is_holiday = 1 if date.day == 1 or date.day == 15 else 0 + is_promo = 1 if 10 <= date.day <= 20 else 0 + + for store in stores: + for item in items: + # Base demand with factors + base_demand = 100 + store_factor = 1.5 if store == "Store_1" else 0.8 + item_factor = ( + 1.2 + if item == "Item_A" + else 1.0 + if item == "Item_B" + else 0.7 + ) + weekday_factor = 1.5 if is_weekend else 1.0 + holiday_factor = 2.0 if is_holiday else 1.0 + promo_factor = 1.8 if is_promo else 1.0 + + # Add random noise + noise = np.random.normal(1, 0.1) + + # Calculate final sales + sales = int( + base_demand + * store_factor + * item_factor + * weekday_factor + * holiday_factor + * promo_factor + * noise + ) + sales = max(0, sales) + + records.append( + { + "date": date, + "store": store, + "item": item, + "sales": sales, + } + ) + + # Create DataFrame + sales_df = pd.DataFrame(records) + + # Save synthetic data + os.makedirs(data_dir, exist_ok=True) + sales_df.to_csv(sales_path, index=False) + + return sales_df diff --git 
a/retail-forecast/steps/data_preprocessor.py b/retail-forecast/steps/data_preprocessor.py new file mode 100644 index 00000000..919c4e8d --- /dev/null +++ b/retail-forecast/steps/data_preprocessor.py @@ -0,0 +1,116 @@ +from typing import Dict, List, Tuple +import logging + +import pandas as pd +from typing_extensions import Annotated +from zenml import step + +logger = logging.getLogger(__name__) + + +@step +def preprocess_data( + sales_data: pd.DataFrame, + test_size: float = 0.2, +) -> Tuple[ + Annotated[Dict[str, pd.DataFrame], "training_data"], + Annotated[Dict[str, pd.DataFrame], "testing_data"], + Annotated[List[str], "series_identifiers"], +]: + """Prepare data for forecasting with Prophet. + + Args: + sales_data: Raw sales data with date, store, item, and sales columns + test_size: Proportion of data to use for testing + + Returns: + train_data_dict: Dictionary of training dataframes for each series + test_data_dict: Dictionary of test dataframes for each series + series_ids: List of unique series identifiers (store-item combinations) + """ + logger.info(f"Preprocessing sales data with shape: {sales_data.shape}") + + # Convert date to datetime + sales_data["date"] = pd.to_datetime(sales_data["date"]) + + # Create unique series ID for each store-item combination + sales_data["series_id"] = sales_data["store"] + "-" + sales_data["item"] + + # Get list of unique series + series_ids = sales_data["series_id"].unique().tolist() + logger.info(f"Found {len(series_ids)} unique store-item combinations") + + # Create Prophet-formatted dataframes (ds, y) for each series + train_data_dict = {} + test_data_dict = {} + + for series_id in series_ids: + # Filter data for this series + series_data = sales_data[sales_data["series_id"] == series_id].copy() + + # Sort by date and drop any duplicates + series_data = series_data.sort_values("date").drop_duplicates( + subset=["date"] + ) + + # Rename columns for Prophet + prophet_data = series_data[["date", "sales"]].rename( + columns={"date": "ds", "sales": "y"} + ) + + # Ensure no NaN values + prophet_data = prophet_data.dropna() + + if len(prophet_data) < 2: + logger.info( + f"WARNING: Not enough data for series {series_id}, skipping" + ) + continue + + # Make sure we have at least one point in test set + min_test_size = max(1, int(len(prophet_data) * test_size)) + + if len(prophet_data) <= min_test_size: + # If we don't have enough data, use half for training and half for testing + cutoff_idx = len(prophet_data) // 2 + else: + cutoff_idx = len(prophet_data) - min_test_size + + # Split into train and test + train_data = prophet_data.iloc[:cutoff_idx].copy() + test_data = prophet_data.iloc[cutoff_idx:].copy() + + # Ensure we have data in both splits + if len(train_data) == 0 or len(test_data) == 0: + logger.info( + f"WARNING: Empty split for series {series_id}, skipping" + ) + continue + + # Store in dictionaries + train_data_dict[series_id] = train_data + test_data_dict[series_id] = test_data + + logger.info( + f"Series {series_id}: {len(train_data)} train points, {len(test_data)} test points" + ) + + if not train_data_dict: + raise ValueError("No valid series data after preprocessing!") + + # Get a sample series to print details + sample_id = next(iter(train_data_dict)) + sample_train = train_data_dict[sample_id] + sample_test = test_data_dict[sample_id] + + logger.info(f"Sample series {sample_id}:") + logger.info(f" Train data shape: {sample_train.shape}") + logger.info( + f" Train date range: {sample_train['ds'].min()} to {sample_train['ds'].max()}" 
+ ) + logger.info(f" Test data shape: {sample_test.shape}") + logger.info( + f" Test date range: {sample_test['ds'].min()} to {sample_test['ds'].max()}" + ) + + return train_data_dict, test_data_dict, series_ids diff --git a/retail-forecast/steps/data_validator.py b/retail-forecast/steps/data_validator.py new file mode 100644 index 00000000..20b5eba3 --- /dev/null +++ b/retail-forecast/steps/data_validator.py @@ -0,0 +1,112 @@ +from typing import Tuple +import logging + +import pandas as pd +from typing_extensions import Annotated +from zenml import step + +logger = logging.getLogger(__name__) + + +@step +def validate_data( + sales_data: pd.DataFrame, calendar_data: pd.DataFrame +) -> Tuple[ + Annotated[pd.DataFrame, "sales_data_validated"], + Annotated[pd.DataFrame, "calendar_data_validated"], +]: + """Validate retail sales data, checking for common issues like: + - Missing values + - Negative sales + - Duplicate records + - Date continuity + - Extreme outliers + """ + sales_df = sales_data + calendar_df = calendar_data + + # Check for missing values in critical fields + for df_name, df in [("Sales", sales_df), ("Calendar", calendar_df)]: + if df.isnull().any().any(): + missing_cols = df.columns[df.isnull().any()].tolist() + logger.info( + f"Warning: {df_name} data contains missing values in columns: {missing_cols}" + ) + # Fill missing values appropriately based on column type + for col in missing_cols: + if pd.api.types.is_numeric_dtype(df[col]): + # For numeric columns, fill with median + df[col] = df[col].fillna(df[col].median()) + else: + # For categorical/text columns, fill with most common value + df[col] = df[col].fillna( + df[col].mode()[0] + if not df[col].mode().empty + else "UNKNOWN" + ) + + # Check for and fix negative sales (a common data quality issue in retail) + neg_sales = (sales_df["sales"] < 0).sum() + if neg_sales > 0: + logger.info( + f"Warning: Found {neg_sales} records with negative sales. Setting to zero." + ) + sales_df.loc[sales_df["sales"] < 0, "sales"] = 0 + + # Check for duplicate records + duplicates = sales_df.duplicated(subset=["date", "store", "item"]).sum() + if duplicates > 0: + logger.info( + f"Warning: Found {duplicates} duplicate store-item-date records. Keeping the last one." + ) + sales_df = sales_df.drop_duplicates( + subset=["date", "store", "item"], keep="last" + ) + + # Check for date continuity in calendar + calendar_df["date"] = pd.to_datetime(calendar_df["date"]) + date_diff = calendar_df["date"].diff().dropna() + if not (date_diff == pd.Timedelta(days=1)).all(): + logger.info( + "Warning: Calendar dates are not continuous. Some days may be missing." 
+ ) + + # Detect extreme outliers (values > 3 std from mean within each item-store combination) + sales_df["date"] = pd.to_datetime(sales_df["date"]) + outlier_count = 0 + + # Group by store and item to identify outliers within each time series + for (store, item), group in sales_df.groupby(["store", "item"]): + mean_sales = group["sales"].mean() + std_sales = group["sales"].std() + + if std_sales > 0: # Avoid division by zero + # Calculate z-score + z_scores = (group["sales"] - mean_sales) / std_sales + + # Flag extreme outliers (|z| > 3) + outlier_mask = abs(z_scores) > 3 + outlier_count += outlier_mask.sum() + + # Cap outliers (winsorize) rather than removing them + if outlier_mask.any(): + cap_upper = mean_sales + 3 * std_sales + sales_df.loc[ + group[outlier_mask & (group["sales"] > cap_upper)].index, + "sales", + ] = cap_upper + + if outlier_count > 0: + logger.info( + f"Warning: Detected and capped {outlier_count} extreme sales outliers." + ) + + # Ensure all dates in sales exist in calendar + if not set(sales_df["date"].dt.date).issubset( + set(calendar_df["date"].dt.date) + ): + logger.info( + "Warning: Some sales dates don't exist in the calendar data." + ) + + return sales_df, calendar_df diff --git a/retail-forecast/steps/data_visualizer.py b/retail-forecast/steps/data_visualizer.py new file mode 100644 index 00000000..b40620c9 --- /dev/null +++ b/retail-forecast/steps/data_visualizer.py @@ -0,0 +1,311 @@ +from typing import Dict, List +import logging + +import numpy as np +import pandas as pd +import plotly.express as px +import plotly.graph_objects as go +from plotly.subplots import make_subplots +from typing_extensions import Annotated +from zenml import step +from zenml.types import HTMLString + +logger = logging.getLogger(__name__) + + +@step +def visualize_sales_data( + sales_data: pd.DataFrame, + train_data_dict: Dict[str, pd.DataFrame], + test_data_dict: Dict[str, pd.DataFrame], + series_ids: List[str], +) -> Annotated[HTMLString, "sales_visualization"]: + """Create interactive visualizations of historical sales patterns. + + Args: + sales_data: Raw sales data with date, store, item, and sales columns + train_data_dict: Dictionary of training dataframes for each series + test_data_dict: Dictionary of test dataframes for each series + series_ids: List of unique series identifiers + + Returns: + HTML visualization dashboard of historical sales patterns + """ + # Ensure date column is in datetime format + sales_data = sales_data.copy() + sales_data["date"] = pd.to_datetime(sales_data["date"]) + + # Create HTML with multiple visualizations + html_parts = [] + html_parts.append(""" + + + + + +
+    <html>
+    <head>
+        <title>Retail Sales Historical Data Analysis</title>
+    </head>
+    <body>
+    <h1>Retail Sales Historical Data Analysis</h1>
+    <p>Interactive visualization of sales patterns across stores and products.</p>
+ """) + + # Create overview metrics + total_sales = sales_data["sales"].sum() + avg_daily_sales = sales_data.groupby("date")["sales"].sum().mean() + num_stores = sales_data["store"].nunique() + num_items = sales_data["item"].nunique() + min_date = sales_data["date"].min().strftime("%Y-%m-%d") + max_date = sales_data["date"].max().strftime("%Y-%m-%d") + date_range = f"{min_date} to {max_date}" + + html_parts.append(f""" +
+    <div class="section">
+        <h2>Dataset Overview</h2>
+        <div class="metrics">
+            <div class="metric">
+                <div class="metric-label">Total Sales</div>
+                <div class="metric-value">{total_sales:,.0f} units</div>
+            </div>
+            <div class="metric">
+                <div class="metric-label">Avg. Daily Sales</div>
+                <div class="metric-value">{avg_daily_sales:,.1f} units</div>
+            </div>
+            <div class="metric">
+                <div class="metric-label">Stores × Items</div>
+                <div class="metric-value">{num_stores} × {num_items}</div>
+            </div>
+            <div class="metric">
+                <div class="metric-label">Date Range</div>
+                <div class="metric-value">{date_range}</div>
+            </div>
+        </div>
+    </div>
+ """) + + # 1. Time Series - Overall Sales Trend + df_daily = sales_data.groupby("date")["sales"].sum().reset_index() + fig_trend = px.line( + df_daily, + x="date", + y="sales", + title="Daily Total Sales Across All Stores and Products", + template="plotly_white", + ) + fig_trend.update_traces(line=dict(width=2)) + fig_trend.update_layout( + xaxis_title="Date", yaxis_title="Total Sales (units)", height=500 + ) + trend_html = fig_trend.to_html(full_html=False, include_plotlyjs="cdn") + html_parts.append(f""" +
+    <div class="section">
+        <h2>Overall Sales Trend</h2>
+        {trend_html}
+        <p class="insight">Insights: Observe weekly patterns and special events that impact overall sales volume.</p>
+    </div>
+ """) + + # 2. Store Comparison + store_sales = ( + sales_data.groupby(["date", "store"])["sales"].sum().reset_index() + ) + fig_stores = px.line( + store_sales, + x="date", + y="sales", + color="store", + title="Sales Comparison by Store", + template="plotly_white", + ) + fig_stores.update_layout( + xaxis_title="Date", yaxis_title="Total Sales (units)", height=500 + ) + stores_html = fig_stores.to_html(full_html=False, include_plotlyjs="cdn") + html_parts.append(f""" +
+    <div class="section">
+        <h2>Store Comparison</h2>
+        {stores_html}
+        <p class="insight">Insights: Compare performance across different stores to identify top performers and potential issues.</p>
+    </div>
+ """) + + # 3. Product Performance + item_sales = ( + sales_data.groupby(["date", "item"])["sales"].sum().reset_index() + ) + fig_items = px.line( + item_sales, + x="date", + y="sales", + color="item", + title="Sales Comparison by Product", + template="plotly_white", + ) + fig_items.update_layout( + xaxis_title="Date", yaxis_title="Total Sales (units)", height=500 + ) + items_html = fig_items.to_html(full_html=False, include_plotlyjs="cdn") + html_parts.append(f""" +
+    <div class="section">
+        <h2>Product Performance</h2>
+        {items_html}
+        <p class="insight">Insights: Identify best-selling products and those with unique seasonal patterns.</p>
+    </div>
+ """) + + # 4. Weekly Patterns + sales_data["day_of_week"] = sales_data["date"].dt.day_name() + day_order = [ + "Monday", + "Tuesday", + "Wednesday", + "Thursday", + "Friday", + "Saturday", + "Sunday", + ] + weekly_pattern = ( + sales_data.groupby("day_of_week")["sales"] + .mean() + .reindex(day_order) + .reset_index() + ) + + fig_weekly = px.bar( + weekly_pattern, + x="day_of_week", + y="sales", + title="Average Sales by Day of Week", + template="plotly_white", + color="sales", + color_continuous_scale="Blues", + ) + fig_weekly.update_layout( + xaxis_title="", yaxis_title="Average Sales (units)", height=500 + ) + weekly_html = fig_weekly.to_html(full_html=False, include_plotlyjs="cdn") + html_parts.append(f""" +
+    <div class="section">
+        <h2>Weekly Patterns</h2>
+        {weekly_html}
+        <p class="insight">Insights: Identify peak sales days to optimize inventory and staffing.</p>
+    </div>
+ """) + + # 5. Sample Store-Item Combinations + # Select 3 random series to display + sample_series = np.random.choice( + series_ids, size=min(3, len(series_ids)), replace=False + ) + + # Create subplots for train/test visualization + fig_samples = make_subplots( + rows=len(sample_series), + cols=1, + subplot_titles=[f"Series: {series_id}" for series_id in sample_series], + shared_xaxes=True, + vertical_spacing=0.1, + ) + + for i, series_id in enumerate(sample_series): + train_data = train_data_dict[series_id] + test_data = test_data_dict[series_id] + + # Add train data + fig_samples.add_trace( + go.Scatter( + x=train_data["ds"], + y=train_data["y"], + mode="lines+markers", + name=f"{series_id} (Training)", + line=dict(color="blue"), + legendgroup=series_id, + showlegend=(i == 0), + ), + row=i + 1, + col=1, + ) + + # Add test data + fig_samples.add_trace( + go.Scatter( + x=test_data["ds"], + y=test_data["y"], + mode="lines+markers", + name=f"{series_id} (Test)", + line=dict(color="green"), + legendgroup=series_id, + showlegend=(i == 0), + ), + row=i + 1, + col=1, + ) + + fig_samples.update_layout( + height=300 * len(sample_series), + title_text="Train/Test Split for Sample Series", + template="plotly_white", + ) + + samples_html = fig_samples.to_html(full_html=False, include_plotlyjs="cdn") + html_parts.append(f""" +
+    <div class="section">
+        <h2>Sample Series with Train/Test Split</h2>
+        {samples_html}
+        <p class="insight">Insights: Visualize how historical data is split into training and testing sets for model evaluation.</p>
+    </div>
+ """) + + # Close HTML document + html_parts.append(""" +
+ + + """) + + # Combine all HTML parts + complete_html = "".join(html_parts) + + # Return as HTMLString + return HTMLString(complete_html) diff --git a/retail-forecast/steps/model_evaluator.py b/retail-forecast/steps/model_evaluator.py new file mode 100644 index 00000000..9fdcc3fa --- /dev/null +++ b/retail-forecast/steps/model_evaluator.py @@ -0,0 +1,303 @@ +import base64 +from io import BytesIO +from typing import Dict, List, Tuple +import logging + +import matplotlib.pyplot as plt +import numpy as np +import pandas as pd +from prophet import Prophet +from typing_extensions import Annotated +from zenml import log_metadata, step +from zenml.types import HTMLString + +logger = logging.getLogger(__name__) + + +@step +def evaluate_models( + models: Dict[str, Prophet], + test_data_dict: Dict[str, pd.DataFrame], + series_ids: List[str], + forecast_horizon: int = 7, +) -> Tuple[ + Annotated[Dict[str, float], "performance_metrics"], + Annotated[HTMLString, "evaluation_report"], +]: + """Evaluate Prophet models on test data and log metrics. + + Args: + models: Dictionary of trained Prophet models + test_data_dict: Dictionary of test data for each series + series_ids: List of series identifiers + forecast_horizon: Number of future time periods to forecast + + Returns: + performance_metrics: Dictionary of average metrics across all series + evaluation_report: HTML report with evaluation metrics and visualizations + """ + # Initialize metrics storage + all_metrics = {"mae": [], "rmse": [], "mape": []} + + series_metrics = {} + + # Create a figure for plotting forecasts + plt.figure(figsize=(12, len(series_ids) * 4)) + + for i, series_id in enumerate(series_ids): + logger.info(f"Evaluating model for {series_id}...") + model = models[series_id] + test_data = test_data_dict[series_id] + + # Debug: Check that test data exists + logger.info(f"Test data shape for {series_id}: {test_data.shape}") + logger.info( + f"Test data date range: {test_data['ds'].min()} to {test_data['ds'].max()}" + ) + + # Create future dataframe starting from the FIRST test date, not from training data + future_dates = test_data["ds"].unique() + if len(future_dates) == 0: + logger.info( + f"WARNING: No test data dates for {series_id}, skipping evaluation" + ) + continue + + # Make predictions for test dates + forecast = model.predict(pd.DataFrame({"ds": future_dates})) + + # Print debug info + logger.info(f"Forecast shape: {forecast.shape}") + logger.info( + f"Forecast date range: {forecast['ds'].min()} to {forecast['ds'].max()}" + ) + + # Merge forecasts with test data correctly + merged_data = pd.merge( + test_data, + forecast[["ds", "yhat", "yhat_lower", "yhat_upper"]], + on="ds", + how="inner", # Only keep matching dates + ) + + logger.info(f"Merged data shape: {merged_data.shape}") + if merged_data.empty: + logger.info( + f"WARNING: No matching dates between test data and forecast for {series_id}" + ) + continue + + # Calculate metrics only if we have merged data + if len(merged_data) > 0: + # Calculate metrics + actuals = merged_data["y"].values + predictions = merged_data["yhat"].values + + # Debug metrics calculation + logger.info(f"Actuals range: {actuals.min()} to {actuals.max()}") + logger.info( + f"Predictions range: {predictions.min()} to {predictions.max()}" + ) + + mae = np.mean(np.abs(actuals - predictions)) + rmse = np.sqrt(np.mean((actuals - predictions) ** 2)) + + # Handle zeros in actuals for MAPE calculation + mask = actuals != 0 + if np.any(mask): + mape = ( + np.mean( + np.abs( + (actuals[mask] - 
predictions[mask]) / actuals[mask] + ) + ) + * 100 + ) + else: + mape = np.nan + + # Store metrics + series_metrics[series_id] = { + "mae": mae, + "rmse": rmse, + "mape": mape, + } + + all_metrics["mae"].append(mae) + all_metrics["rmse"].append(rmse) + if not np.isnan(mape): + all_metrics["mape"].append(mape) + + logger.info( + f"Metrics for {series_id}: MAE={mae:.2f}, RMSE={rmse:.2f}, MAPE={mape:.2f}%" + ) + + # Plot the forecast vs actual for this series + plt.subplot(len(series_ids), 1, i + 1) + plt.plot(merged_data["ds"], merged_data["y"], "b.", label="Actual") + plt.plot( + merged_data["ds"], merged_data["yhat"], "r-", label="Forecast" + ) + plt.fill_between( + merged_data["ds"], + merged_data["yhat_lower"], + merged_data["yhat_upper"], + color="gray", + alpha=0.2, + ) + plt.title(f"Forecast vs Actual for {series_id}") + plt.legend() + + # Calculate average metrics across all series + if not all_metrics["mae"]: + logger.info("WARNING: No valid metrics calculated!") + average_metrics = { + "avg_mae": np.nan, + "avg_rmse": np.nan, + "avg_mape": np.nan, + } + else: + average_metrics = { + "avg_mae": np.mean(all_metrics["mae"]), + "avg_rmse": np.mean(all_metrics["rmse"]), + "avg_mape": np.mean(all_metrics["mape"]) + if all_metrics["mape"] + else np.nan, + } + + # Save plot to buffer + buf = BytesIO() + plt.tight_layout() + plt.savefig(buf, format="png") + buf.seek(0) + plot_data = base64.b64encode(buf.read()).decode("utf-8") + plt.close() + + # Log metrics to ZenML + log_metadata( + metadata={ + "avg_mae": float(average_metrics["avg_mae"]) + if not np.isnan(average_metrics["avg_mae"]) + else 0.0, + "avg_rmse": float(average_metrics["avg_rmse"]) + if not np.isnan(average_metrics["avg_rmse"]) + else 0.0, + "avg_mape": float(average_metrics["avg_mape"]) + if not np.isnan(average_metrics["avg_mape"]) + else 0.0, + } + ) + + logger.info(f"Final Average MAE: {average_metrics['avg_mae']:.2f}") + logger.info(f"Final Average RMSE: {average_metrics['avg_rmse']:.2f}") + logger.info( + f"Final Average MAPE: {average_metrics['avg_mape']:.2f}%" + if not np.isnan(average_metrics["avg_mape"]) + else "Final Average MAPE: N/A" + ) + + # Create HTML report + html_report = create_evaluation_report( + average_metrics, series_metrics, plot_data + ) + + return average_metrics, html_report + + +def create_evaluation_report(average_metrics, series_metrics, plot_image_data): + """Create an HTML report for model evaluation.""" + # Create a table for series-specific metrics + series_rows = "" + for series_id, metrics in series_metrics.items(): + mape_value = ( + f"{metrics['mape']:.2f}%" + if not np.isnan(metrics.get("mape", np.nan)) + else "N/A" + ) + series_rows += f""" + + {series_id} + {metrics["mae"]:.2f} + {metrics["rmse"]:.2f} + {mape_value} + + """ + + # Create overall metrics section + avg_mape = ( + f"{average_metrics['avg_mape']:.2f}%" + if not np.isnan(average_metrics.get("avg_mape", np.nan)) + else "N/A" + ) + + html = f""" + + + + + + Prophet Model Evaluation + + + +
+    <h1>Prophet Model Evaluation Results</h1>
+
+    <div class="metrics">
+        <div class="metric">
+            <h3>Average MAE</h3>
+            <div class="metric-value">{average_metrics["avg_mae"]:.2f}</div>
+            <div class="metric-label">Mean Absolute Error</div>
+        </div>
+        <div class="metric">
+            <h3>Average RMSE</h3>
+            <div class="metric-value">{average_metrics["avg_rmse"]:.2f}</div>
+            <div class="metric-label">Root Mean Square Error</div>
+        </div>
+        <div class="metric">
+            <h3>Average MAPE</h3>
+            <div class="metric-value">{avg_mape}</div>
+            <div class="metric-label">Mean Absolute Percentage Error</div>
+        </div>
+    </div>
+
+    <h2>Series-Specific Metrics</h2>
+    <table>
+        <thead>
+            <tr>
+                <th>Series ID</th>
+                <th>MAE</th>
+                <th>RMSE</th>
+                <th>MAPE</th>
+            </tr>
+        </thead>
+        <tbody>
+            {series_rows}
+        </tbody>
+    </table>
+
+    <h2>Forecast Visualization</h2>
+    <img src="data:image/png;base64,{plot_image_data}" alt="Forecast Plot"/>
+ + + """ + + return HTMLString(html) diff --git a/retail-forecast/steps/model_trainer.py b/retail-forecast/steps/model_trainer.py new file mode 100644 index 00000000..cd344b76 --- /dev/null +++ b/retail-forecast/steps/model_trainer.py @@ -0,0 +1,57 @@ +from typing import Dict, List +import logging + +import pandas as pd +from materializers.prophet_materializer import ProphetMaterializer +from prophet import Prophet +from typing_extensions import Annotated +from zenml import step + +logger = logging.getLogger(__name__) + + +@step(output_materializers=ProphetMaterializer) +def train_model( + train_data_dict: Dict[str, pd.DataFrame], + series_ids: List[str], + weekly_seasonality: bool = True, + yearly_seasonality: bool = False, + daily_seasonality: bool = False, + seasonality_mode: str = "multiplicative", +) -> Annotated[Dict[str, Prophet], "trained_prophet_models"]: + """Train a Prophet model for each store-item combination. + + Args: + train_data_dict: Dictionary with training data for each series + series_ids: List of series identifiers + weekly_seasonality: Whether to include weekly seasonality + yearly_seasonality: Whether to include yearly seasonality + daily_seasonality: Whether to include daily seasonality + seasonality_mode: 'additive' or 'multiplicative' + + Returns: + Dictionary of trained Prophet models for each series + """ + models = {} + + for series_id in series_ids: + logger.info(f"Training model for {series_id}...") + train_data = train_data_dict[series_id] + + # Initialize Prophet model + model = Prophet( + weekly_seasonality=weekly_seasonality, + yearly_seasonality=yearly_seasonality, + daily_seasonality=daily_seasonality, + seasonality_mode=seasonality_mode, + ) + + # Fit model + model.fit(train_data) + + # Store trained model + models[series_id] = model + + logger.info(f"Successfully trained {len(models)} Prophet models") + + return models diff --git a/retail-forecast/steps/predictor.py b/retail-forecast/steps/predictor.py new file mode 100644 index 00000000..cee1346d --- /dev/null +++ b/retail-forecast/steps/predictor.py @@ -0,0 +1,688 @@ +import base64 +from datetime import timedelta +from io import BytesIO +from typing import Any, Dict, List, Optional, Tuple +import logging + +import matplotlib.pyplot as plt +import numpy as np +import pandas as pd +from prophet import Prophet +from typing_extensions import Annotated +from zenml import log_metadata, step +from zenml.types import HTMLString + +logger = logging.getLogger(__name__) + + +@step +def make_predictions( + model: Optional[Prophet], + training_dataset: Optional[pd.DataFrame], + test_data: pd.DataFrame, + forecast_horizon: int = 14, +) -> Tuple[ + Annotated[Dict[str, Any], "forecast_data"], + Annotated[bytes, "forecast_plot"], + Annotated[Dict[str, Any], "sample_forecast"], + Annotated[int, "forecast_horizon"], + Annotated[str, "method"], + Annotated[HTMLString, "forecast_visualization"], +]: + """Generate predictions for future periods using the trained model. 
+ + Args: + model: Trained TFT model or None for naive forecast + training_dataset: Training dataset used for the model or None for naive forecast + test_data: Test dataframe with historical data + forecast_horizon: Number of days to forecast into the future + + Returns: + forecast_data: Dictionary containing forecast data + forecast_plot: Bytes of the forecast plot image + sample_forecast: Dictionary with sample forecasts + forecast_horizon: Number of days in the forecast + method: Name of the forecasting method used + forecast_visualization: HTML visualization of forecast results + """ + # Handle case where no model or training dataset are passed (predict-only mode) + if model is None or training_dataset is None: + logger.info("Using naive forecasting method (last value)") + + # Create a naive model that predicts the last known value for each series + forecast_df = naive_forecast(test_data, forecast_horizon) + + # Create a simple plot + plt.figure(figsize=(15, 10)) + sample_series = np.random.choice( + forecast_df["series_id"].unique(), + size=min(3, len(forecast_df["series_id"].unique())), + replace=False, + ) + + for i, series_id in enumerate(sample_series): + historical = test_data[ + test_data["series_id"] == series_id + ].sort_values("date") + forecast = forecast_df[ + forecast_df["series_id"] == series_id + ].sort_values("date") + + plt.subplot(3, 1, i + 1) + plt.plot( + historical["date"], + historical["sales"], + "b-", + label="Historical", + ) + plt.plot( + forecast["date"], + forecast["sales_prediction"], + "r-", + label="Naive Forecast", + ) + plt.title(f"Series: {series_id} (Naive Forecast)") + plt.legend() + plt.grid(True) + + plt.tight_layout() + forecast_plot_buffer = BytesIO() + plt.savefig(forecast_plot_buffer, format="png") + plt.close() + forecast_plot_bytes = forecast_plot_buffer.getvalue() + + logger.info( + f"Generated naive forecasts for {len(test_data['series_id'].unique())} series, {forecast_horizon} days ahead" + ) + + # Create HTML visualization + html_visualization = create_forecast_visualization( + forecast_df, + test_data, + sample_series, + forecast_horizon, + method="naive", + ) + + # Log metadata about artifacts + log_metadata( + metadata={ + "forecast_data_artifact_name": "forecast_data", + "forecast_data_artifact_type": "Dict[str, Any]", + "visualization_artifact_name": "forecast_visualization", + "visualization_artifact_type": "zenml.types.HTMLString", + "forecast_method": "naive", + "forecast_horizon": forecast_horizon, + }, + ) + + # Get sample forecasts + sample_records = get_sample_forecasts(forecast_df) + + # Return naive forecast results + return ( + forecast_df.to_dict(), + forecast_plot_bytes, + sample_records, + forecast_horizon, + "naive", + html_visualization, + ) + + # Select device + device = torch.device("cuda" if torch.cuda.is_available() else "cpu") + model.to(device) + + # Generate future dates for forecasting + last_date = pd.to_datetime(test_data["date"].max()) + future_dates = pd.date_range( + start=last_date + timedelta(days=1), periods=forecast_horizon, freq="D" + ) + + # Create empty list for results + forecasts = [] + + # Get unique store-item combinations + series_ids = test_data["series_id"].unique() + + # For each series (store-item combination), make a prediction + for series_id in series_ids: + # Get the series' data + series_data = test_data[test_data["series_id"] == series_id] + + # Get the most recent data for this series + max_date = series_data["date"].max() + recent_data = series_data[series_data["date"] == max_date] 
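+        # The block below rebuilds, for the forecast horizon, the same
+        # calendar and encoded features the model saw during training.
+        # The category -> index mappings are recomputed from test_data
+        # (e.g. {"Store_1": 0, "Store_2": 1} for the "store" column),
+        # which assumes test_data's categories and their ordering match
+        # the encoding used when the model was trained.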
+ + # Extract store and item + store = recent_data["store"].iloc[0] + item = recent_data["item"].iloc[0] + + # Create the future dataframe with known features + future_df = pd.DataFrame({"date": future_dates}) + future_df["store"] = store + future_df["item"] = item + future_df["series_id"] = series_id + + # Add calendar features + future_df["day_of_week"] = future_df["date"].dt.dayofweek + future_df["day_of_month"] = future_df["date"].dt.day + future_df["month"] = future_df["date"].dt.month + future_df["year"] = future_df["date"].dt.year + future_df["week_of_year"] = future_df["date"].dt.isocalendar().week + + # Simulate known features + future_df["is_weekend"] = (future_df["day_of_week"] >= 5).astype(int) + future_df["is_holiday"] = ( + (future_df["day_of_month"] == 1) + | (future_df["day_of_month"] == 15) + ).astype(int) + future_df["is_promo"] = ( + (future_df["day_of_month"] >= 10) + & (future_df["day_of_month"] <= 20) + ).astype(int) + + # Add encoded columns + for col in ["store", "item", "series_id", "day_of_week", "month"]: + # Get mapping from the test data + mapping = { + val: idx for idx, val in enumerate(test_data[col].unique()) + } + future_df[f"{col}_encoded"] = future_df[col].map(mapping) + + # Add time index + max_time_idx = test_data["time_idx"].max() + future_df["time_idx"] = range( + max_time_idx + 1, max_time_idx + 1 + len(future_df) + ) + + # Ensure all needed columns exist + for col in test_data.columns: + if col not in future_df.columns and col != "sales": + # Try to get same value as the most recent data + if col in recent_data.columns: + future_df[col] = recent_data[col].iloc[0] + else: + # Default to 0 for missing features + future_df[col] = 0 + + # Prepare dataset for prediction + future_dataset = training_dataset.from_dataset( + training_dataset, future_df, predict=True + ) + future_dataloader = future_dataset.to_dataloader( + train=False, batch_size=128 + ) + + # Generate predictions + predictions, _ = model.predict( + future_dataloader, + return_x=True, + trainer_kwargs={"accelerator": device}, + ) + + # Add predictions to the future dataframe + future_df["sales_prediction"] = predictions.flatten().cpu().numpy() + + # Append to results + forecasts.append(future_df) + + # Combine all forecasts + forecast_df = pd.concat(forecasts, ignore_index=True) + + # Create plots for visualization + plt.figure(figsize=(15, 10)) + + # Get a few random series to plot + sample_series = np.random.choice( + forecast_df["series_id"].unique(), + size=min(3, len(forecast_df["series_id"].unique())), + replace=False, + ) + + for i, series_id in enumerate(sample_series): + # Get historical data + historical = test_data[ + test_data["series_id"] == series_id + ].sort_values("date") + # Get forecast data + forecast = forecast_df[ + forecast_df["series_id"] == series_id + ].sort_values("date") + + plt.subplot(3, 1, i + 1) + # Plot historical + plt.plot( + historical["date"], historical["sales"], "b-", label="Historical" + ) + # Plot forecast + plt.plot( + forecast["date"], + forecast["sales_prediction"], + "r-", + label="Forecast", + ) + plt.title(f"Series: {series_id}") + plt.legend() + plt.grid(True) + + plt.tight_layout() + + # Capture the plot as bytes + forecast_plot_buffer = BytesIO() + plt.savefig(forecast_plot_buffer, format="png") + plt.close() + forecast_plot_bytes = forecast_plot_buffer.getvalue() + + logger.info( + f"Generated forecasts for {len(series_ids)} series, {forecast_horizon} days ahead" + ) + + # Create sample forecasts + sample_records = 
get_sample_forecasts(forecast_df) + + # Create HTML visualization + html_visualization = create_forecast_visualization( + forecast_df, test_data, sample_series, forecast_horizon, method="tft" + ) + + # Log metadata about artifacts + log_metadata( + metadata={ + "forecast_data_artifact_name": "forecast_data", + "forecast_data_artifact_type": "Dict[str, Any]", + "visualization_artifact_name": "forecast_visualization", + "visualization_artifact_type": "zenml.types.HTMLString", + "forecast_method": "tft", + "forecast_horizon": forecast_horizon, + }, + ) + + # Return forecasts as artifacts + return ( + forecast_df.to_dict(), + forecast_plot_bytes, + sample_records, + forecast_horizon, + "tft", + html_visualization, + ) + + +def get_sample_forecasts(forecast_df: pd.DataFrame) -> dict: + """Extract sample forecasts for each series.""" + sample_records = {} + series_ids_list = [] + dates = [] + predictions = [] + + # Group by series_id and get first record from each group + for series_id in forecast_df["series_id"].unique(): + series_data = forecast_df[forecast_df["series_id"] == series_id] + first_row = series_data.iloc[0] + series_ids_list.append(series_id) + dates.append(first_row["date"]) + predictions.append(first_row["sales_prediction"]) + + sample_records["series_id"] = series_ids_list + sample_records["date"] = dates + sample_records["sales_prediction"] = predictions + + return sample_records + + +def naive_forecast( + test_df: pd.DataFrame, forecast_horizon: int +) -> pd.DataFrame: + """Generate a naive forecast that uses the last known value for each series. + This is used as a fallback when no model is available. + """ + forecasts = [] + + # Get unique store-item combinations + series = test_df["series_id"].unique() + + # Get the last date in the test data + last_date = pd.to_datetime(test_df["date"].max()) + + # Generate future dates + future_dates = pd.date_range( + start=last_date + timedelta(days=1), periods=forecast_horizon, freq="D" + ) + + for series_id in series: + # Get the series' data + series_data = test_df[test_df["series_id"] == series_id] + + # Get the last sales value for this series + last_sales = series_data.iloc[-1]["sales"] + + # Get store and item + store = series_data.iloc[0]["store"] + item = series_data.iloc[0]["item"] + + # Create future data with the last sales value + future_df = pd.DataFrame({"date": future_dates}) + future_df["store"] = store + future_df["item"] = item + future_df["series_id"] = series_id + future_df["sales_prediction"] = last_sales + + # Add to forecasts + forecasts.append(future_df) + + # Combine all forecasts + forecast_df = pd.concat(forecasts, ignore_index=True) + return forecast_df + + +def create_forecast_visualization( + forecast_df: pd.DataFrame, + historical_df: pd.DataFrame, + sample_series: list, + forecast_horizon: int, + method: str = "tft", +) -> HTMLString: + """Create an HTML visualization of forecasting results.""" + # Create a simpler visualization with just the key information + method_name = ( + "Temporal Fusion Transformer" if method == "tft" else "Naive Forecast" + ) + + # Get forecast start and end dates + forecast_start = forecast_df["date"].min() + forecast_end = forecast_df["date"].max() + + # Calculate total forecasted sales + total_forecast = forecast_df["sales_prediction"].sum() + + # Create HTML + html = f""" + + + + + + Retail Sales Forecast + + + +
+    <div class="container">
+        <h1>Retail Sales Forecast</h1>
+        <p>Method: {method_name} | Period: {forecast_start} to {forecast_end}</p>
+
+        <div class="metrics">
+            <div class="metric">
+                <h3>Forecast Horizon</h3>
+                <div class="metric-value">{forecast_horizon} days</div>
+            </div>
+            <div class="metric">
+                <h3>Series Count</h3>
+                <div class="metric-value">{len(forecast_df["series_id"].unique())}</div>
+            </div>
+            <div class="metric">
+                <h3>Total Sales Forecast</h3>
+                <div class="metric-value">{total_forecast:.0f}</div>
+            </div>
+        </div>
+    </div>
+ + + """ + + return HTMLString(html) + + +@step +def generate_forecasts( + models: Dict[str, Prophet], + train_data_dict: Dict[str, pd.DataFrame], + series_ids: List[str], + forecast_periods: int = 30, +) -> Tuple[ + Annotated[Dict[str, pd.DataFrame], "forecasts_by_series"], + Annotated[pd.DataFrame, "combined_forecast"], + Annotated[HTMLString, "forecast_dashboard"], +]: + """Generate future forecasts using trained Prophet models. + + Args: + models: Dictionary of trained Prophet models + train_data_dict: Dictionary of training data for each series + series_ids: List of series identifiers + forecast_periods: Number of periods to forecast into the future + + Returns: + forecasts_by_series: Dictionary of forecast dataframes for each series + combined_forecast: Combined dataframe with all series forecasts + forecast_dashboard: HTML dashboard with forecast visualizations + """ + forecasts = {} + + # Create a plot to visualize all forecasts + plt.figure(figsize=(12, len(series_ids) * 4)) + + for i, series_id in enumerate(series_ids): + logger.info(f"Generating forecast for {series_id}...") + model = models[series_id] + + # Get last date from training data + last_date = train_data_dict[series_id]["ds"].max() + + # Create future dataframe + future = model.make_future_dataframe(periods=forecast_periods) + + # Generate forecast + forecast = model.predict(future) + + # Store forecast + forecasts[series_id] = forecast + + # Plot the forecast + plt.subplot(len(series_ids), 1, i + 1) + + # Plot training data + train_data = train_data_dict[series_id] + plt.plot(train_data["ds"], train_data["y"], "b.", label="Historical") + + # Plot forecast + plt.plot(forecast["ds"], forecast["yhat"], "r-", label="Forecast") + plt.fill_between( + forecast["ds"], + forecast["yhat_lower"], + forecast["yhat_upper"], + color="gray", + alpha=0.2, + ) + + # Add a vertical line at the forecast start + plt.axvline(x=last_date, color="k", linestyle="--") + + plt.title(f"Forecast for {series_id}") + plt.legend() + + # Save plot to buffer + buf = BytesIO() + plt.tight_layout() + plt.savefig(buf, format="png") + buf.seek(0) + plot_data = base64.b64encode(buf.read()).decode("utf-8") + plt.close() + + # Create a combined forecast dataframe for all series + combined_forecast = [] + for series_id, forecast in forecasts.items(): + # Add series_id column + forecast_with_id = forecast.copy() + forecast_with_id["series_id"] = series_id + + # Extract store and item from series_id + store, item = series_id.split("-") + forecast_with_id["store"] = store + forecast_with_id["item"] = item + + combined_forecast.append(forecast_with_id) + + combined_df = pd.concat(combined_forecast) + + # Log basic metadata (not the large plot) + log_metadata( + metadata={ + "forecast_horizon": forecast_periods, + "num_series": len(series_ids), + } + ) + + # Create HTML dashboard + forecast_dashboard = create_forecast_dashboard( + forecasts, series_ids, train_data_dict, plot_data, forecast_periods + ) + + logger.info( + f"Generated forecasts for {len(forecasts)} series, {forecast_periods} periods ahead" + ) + + return forecasts, combined_df, forecast_dashboard + + +def create_forecast_dashboard( + forecasts, series_ids, train_data_dict, plot_image_data, forecast_horizon +): + """Create an HTML dashboard for forecast visualization.""" + # Generate forecast metrics + series_stats = [] + for series_id in series_ids: + forecast = forecasts[series_id] + future_period = forecast.iloc[-forecast_horizon:] + + # Extract store and item + store, item = 
series_id.split("-") + + # Get statistics + avg_forecast = future_period["yhat"].mean() + min_forecast = future_period["yhat"].min() + max_forecast = future_period["yhat"].max() + + # Get growth rate compared to historical + historical = train_data_dict[series_id]["y"].mean() + growth = ( + ((avg_forecast / historical) - 1) * 100 if historical > 0 else 0 + ) + + series_stats.append( + { + "series_id": series_id, + "store": store, + "item": item, + "avg_forecast": avg_forecast, + "min_forecast": min_forecast, + "max_forecast": max_forecast, + "growth": growth, + } + ) + + # Create table rows for series statistics + series_rows = "" + for stat in series_stats: + growth_class = ( + "text-green-600 font-bold" + if stat["growth"] >= 0 + else "text-red-600 font-bold" + ) + growth_sign = "+" if stat["growth"] >= 0 else "" + + series_rows += f""" + + {stat["series_id"]} + {stat["store"]} + {stat["item"]} + {stat["avg_forecast"]:.1f} + {stat["min_forecast"]:.1f} + {stat["max_forecast"]:.1f} + {growth_sign}{stat["growth"]:.1f}% + + """ + + html = f""" + + + + + + Retail Sales Forecast Dashboard + + + +
+    <div class="container">
+        <h1>Retail Sales Forecast Dashboard</h1>
+        <p>Forecast horizon: {forecast_horizon} periods | Total series: {len(series_ids)}</p>
+
+        <div class="metrics">
+            <div class="metric">
+                <h3>Average Forecast</h3>
+                <div class="metric-value">{sum([s["avg_forecast"] for s in series_stats]) / len(series_stats):.1f}</div>
+                <div class="metric-label">Average predicted sales across all series</div>
+            </div>
+            <div class="metric">
+                <h3>Average Growth</h3>
+                <div class="metric-value">{sum([s["growth"] for s in series_stats]) / len(series_stats):.1f}%</div>
+                <div class="metric-label">Average growth compared to historical</div>
+            </div>
+            <div class="metric">
+                <h3>Top Performer</h3>
+                <div class="metric-value">{max(series_stats, key=lambda x: x["growth"])["series_id"]}</div>
+                <div class="metric-label">Series with highest growth rate</div>
+            </div>
+        </div>
+
+        <h2>Forecast by Series</h2>
+        <table>
+            <thead>
+                <tr>
+                    <th>Series ID</th>
+                    <th>Store</th>
+                    <th>Item</th>
+                    <th>Avg Forecast</th>
+                    <th>Min Forecast</th>
+                    <th>Max Forecast</th>
+                    <th>Growth</th>
+                </tr>
+            </thead>
+            <tbody>
+                {series_rows}
+            </tbody>
+        </table>
+
+        <h2>Forecast Visualization</h2>
+        <img src="data:image/png;base64,{plot_image_data}" alt="Forecast Visualization"/>
+    </div>
+ + + """ + + return HTMLString(html) diff --git a/sign-language-detection-yolov5/README.md b/sign-language-detection-yolov5/README.md index 36844427..57cca500 100644 --- a/sign-language-detection-yolov5/README.md +++ b/sign-language-detection-yolov5/README.md @@ -273,4 +273,3 @@ The Inference pipeline is made up of the following steps: - Documentation on [Step Operators](https://docs.zenml.io/stack-components/step-operators) - More on [Step Operators](https://blog.zenml.io/step-operators-training/) - Documentation on how to create a GCP [service account](https://cloud.google.com/docs/authentication/getting-started#create-service-account-gcloud) -- ZenML CLI [documentation](https://apidocs.zenml.io/latest/cli/)