
ML Models

ML Models for Cortex

Variables

This table describes the required variables and their uses.

| Name | Description | Mandatory | Default Value |
|------|-------------|-----------|---------------|
| project_id_src | Source Google Cloud Project: project where the source data that the data models will consume is located. | Y | N/A |
| project_id_tgt | Target Google Cloud Project: project where the Data Foundation for SAP predefined data models will be deployed and accessed by end users. This may or may not be different from the source project. | Y | N/A |
| dataset_raw_landing | Source BigQuery Dataset: BigQuery dataset where the source SAP data is replicated to or where the test data will be created. | Y | N/A |
| dataset_cdc_processed | CDC BigQuery Dataset: BigQuery dataset where the CDC processing lands the latest available records. This may or may not be the same as the source dataset. | Y | N/A |
| dataset_reporting_tgt | Target BigQuery reporting dataset: BigQuery dataset where the Data Foundation for SAP predefined data models will be deployed. | N | SAP_REPORTING |
| dataset_models_tgt | Target BigQuery ML dataset: BigQuery dataset where the ML models will be deployed. | N | SAP_ML_MODELS |
| mandt | SAP Mandant (client). Must be 3 characters. | Y | 800 |
| sql_flavour | Target database type. Valid values are ECC or S4. | N | ECC |
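
These variables are substituted into the Jinja-templated SQL files in this repository. As a rough sketch of how that substitution looks, the snippet below creates a hypothetical template; sample_template.sql is not a file in this repo, and the table and column names are illustrative only:

# Hypothetical example only: writes an illustrative Jinja-templated SQL file.
# The quoted 'EOF' keeps the shell from expanding the backticks and braces.
cat <<'EOF' > sample_template.sql
-- Reads from the CDC dataset in the source project and filters on the SAP client.
SELECT mandt, matnr
FROM `{{ project_id_src }}.{{ dataset_cdc_processed }}.sample_table`
WHERE mandt = '{{ mandt }}'
{% if sql_flavour == 'S4' %}-- S/4-specific logic could be added here{% endif %}
EOF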

Simple local output

If you want to test the output of the Jinja templates locally, you can use jinja-cli for a quick check:

  1. First install jinja-cli:
pip install jinja-cli
  2. Then create a JSON file with the required input data:
cat <<EOF > data.json
{
  "project_id_src": "your-source-project",
  "project_id_tgt": "your-target-project",
  "dataset_raw_landing": "your-raw-dataset",
  "dataset_cdc_processed": "your-cdc-processed-dataset",
  "dataset_reporting_tgt": "your-reporting-target-dataset-OR-SAP_REPORTING",
  "dataset_models_tgt": "your-mlmodels-target-dataset-OR-ML_MODELS",
  "mandt": "your-mandt-number-800",
  "sql_flavour": "ECC"
}
EOF

Here is what a filled-in example looks like:

{
  "project_id_src": "kittycorn-dev",
  "project_id_tgt": "kittycorn-dev",
  "dataset_raw_landing": "ECC_REPL",
  "dataset_cdc_processed": "CDC_PROCESSED",
  "dataset_reporting_tgt": "SAP_REPORTING",
  "dataset_models_tgt": "ML_MODELS",
  "mandt": "800",
  "sql_flavour": "ECC"
}
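
Optionally (this is not required by jinja-cli), you can confirm that data.json parses as valid JSON before rendering, for example with Python's built-in JSON tool:

# Optional: prints the parsed JSON, or an error if the file is malformed.
python3 -m json.tool data.json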
  3. Create an output folder:
mkdir output
  4. Now generate the parsed file:
jinja -d data.json -o output/filename.sql filename.sql

Alternatively, if you want to generate all files:

for f in *.sql; do
    echo "processing $f ..."
    jinja -d data.json -o "output/${f}" "${f}"
done
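
As an optional sanity check (not part of jinja-cli itself), you can grep the rendered files for leftover Jinja delimiters; any remaining '{{' usually means a variable was missing from data.json:

# Lists any unresolved placeholders; prints a confirmation if none are found.
grep -rn '{{' output/ || echo "All placeholders resolved"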

Contributing

All submissions are welcome. Please read our code of conduct and contribution guidelines.
