This is a MySQL-Python project based on a Docker configuration, providing a version control tool/mechanism for database migration.
The project was created as part of self-learning and practicing Docker principles.
The docker-compose.yml file configures a two-container cluster as follows:
- MySQL container, initialized with the MYSQL_DB database and the SQL log table SQL_RUN_LOG.
- Applicative Python container, which includes the main program that runs the SQL files, as well as the Flask server code (app.py). The Flask app exposes API end-points for retrieving meta-data about the SQL runs and for running actions on the database.
This Docker cluster was written and run on Windows 10.
app/
- main.py: main program; runs the SQL files on the MySQL database
- app.py: runs the Flask app
- database.py: contains the Database class with the properties and methods used by the main.py script (a minimal sketch appears after this tree)
- Dockerfile: builds the mysql_python_app container
- requirements: lists the required Python libraries
- templates/
- index.html: the HTML page served at the Flask base URL http://localhost:5000
db/
- init.sql: initializes the MySQL DB as mentioned above.
sqls/
- sql_run_log_0.sql: creates the SQL_RUN_LOG table; if the table is ever dropped, this is the first file main.py runs
- department_tab_10.sql: example file
- employee_tab_20.sql: example file
- department_ins_30.sql: example file
docker-compose.yml: creates the Docker cluster with the two containers.
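
For orientation, here is a minimal sketch of what the Database class in database.py might look like. The host, credentials and method names are illustrative assumptions, not the project's actual API:

```python
import mysql.connector

class Database:
    """Minimal sketch of database.py; the real class may differ."""

    def __init__(self, host="mysql", user="root", password="secret", database="MYSQL_DB"):
        # "mysql" assumes the compose service name; user/password are placeholders.
        self.conn = mysql.connector.connect(
            host=host, user=user, password=password, database=database
        )

    def run_statement(self, statement):
        cursor = self.conn.cursor()
        cursor.execute(statement)
        cursor.close()

    def commit(self):
        self.conn.commit()

    def rollback(self):
        self.conn.rollback()
```

Inside the compose network the app container typically reaches MySQL by its service name rather than localhost, which is why the default host here is not 127.0.0.1.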
The project provides a dev cluster that lets the user run SQL scripts on the (MySQL) database and develop/test/run against it in a Python environment. The program operates as a version control mechanism by registering a record with SUCCESS/FAILURE status for each SQL file run, in the log table SQL_RUN_LOG.
- Although the table is already created in the image, the script that creates the log table SQL_RUN_LOG is also provided under the 'sqls/' directory: sql_run_log_0.sql. Do not change the script's RUN_ID (0); it makes the script the first to run, before any other SQL file.
- SQL file names must comply with the naming convention [a-z]+[a-z0-9_]+_[0-9]+.sql; otherwise the program does not recognize them as valid SQL files (see the validation sketch after this list). Although the table/object name is not enforced, it is recommended to build file names from the object's name and type. Example: employee_tab_10.sql
- The number in the file name's suffix (_<number>.sql) determines the order in which the files are run.
- This number is referred to as the SQL RUN_ID and is written as such in the SQL_RUN_LOG table.
- Recommendation: create the SQL files with a gap of 10 between consecutive files, e.g. department_tab_10.sql, employee_tab_20.sql, etc.; this gives you the flexibility to fill DB data/object gaps noticed later without renaming all subsequent scripts.
- The gaps between SQL RUN_IDs, though, must not exceed 10. For example, running department_tab_10.sql and employee_tab_21.sql with no SQL script whose RUN_ID falls between 10 and 21 will raise an error.
- Two SQL files cannot have the same RUN_ID. If duplicates are found, an error is raised.
- Data consistency: the volume data-volume:/var/lib/mysql is created to ensure data persistence, so even if the container is killed, the previously created data will still be there the next time it comes up.
- Mapping the SQL files path: the volume ./sqls:/tmp/ is created to map the SQL files in the local ./sqls directory to the /tmp directory in the app container. This means that SQL files you put/add in ./sqls are transparently visible in the container's /tmp path, which is the default path where the program looks for SQL files to run.
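
The naming and RUN_ID rules above fit in a few lines of Python. This is an illustrative sketch (the function name is hypothetical; the pattern is the convention quoted above, with the dot escaped and the RUN_ID captured):

```python
import re
from pathlib import Path

SQL_NAME = re.compile(r"[a-z]+[a-z0-9_]+_([0-9]+)\.sql")

def collect_sql_files(directory="/tmp"):
    """Return valid SQL files sorted by RUN_ID, enforcing the rules above."""
    runs = {}
    for path in Path(directory).glob("*.sql"):
        match = SQL_NAME.fullmatch(path.name)
        if not match:
            continue  # name does not match the convention: not a valid SQL file
        run_id = int(match.group(1))
        if run_id in runs:
            raise ValueError(f"duplicate RUN_ID {run_id}: {path.name} and {runs[run_id].name}")
        runs[run_id] = path
    ordered = sorted(runs)
    for prev, curr in zip(ordered, ordered[1:]):
        if curr - prev > 10:
            raise ValueError(f"RUN_ID gap between {prev} and {curr} exceeds 10")
    return [runs[run_id] for run_id in ordered]
```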
(A Docker Desktop installation is assumed)
- Clone this repository to a directory path on your OS. The project directory docker-mysql-python will be created.

- Open a terminal window and go to local/path/to/docker-mysql-python
- Run docker-compose up
You should now see the cluster in Docker Desktop, as follows:

- The Docker container cluster must be up and running when you run the SQL files. When running the Python script main.py, the flow is as follows (a sketch of this flow appears after the list):
- If the table SQL_RUN_LOG does not exist, it is created by running the provided script sql_run_log_0.sql
- A record with the SQL file name, its RUN_ID, its status (SUCCESS or FAILURE), the error (in case of FAILURE) and the date is written after each SQL file runs.
- An SQL file may contain several statements, but only one record is written to the log table SQL_RUN_LOG for it.
- The SQL files are run according to the last RUN_ID found in the SQL_RUN_LOG table with status SUCCESS. Example: if department_tab_10.sql and employee_tab_20.sql passed successfully while employee_ins_30.sql failed, then the next time main.py runs, the first SQL file to run will be the one with the smallest RUN_ID greater than 20.
- The main.py script exits with an error at the first FAILURE. Before exiting, a ROLLBACK is executed for all of the file's statements.
Note: rollback can be applied only to DML statements (MySQL auto-commits DDL statements).
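
Put together, the run flow above can be sketched as follows. The log_run method and the naive statement splitting are assumptions layered on the Database sketch earlier, not the project's actual code:

```python
from datetime import datetime

def run_files(db, sql_files, last_success_run_id):
    for path in sql_files:
        run_id = int(path.stem.rsplit("_", 1)[1])
        if run_id <= last_success_run_id:
            continue  # already applied successfully in an earlier run
        try:
            # A file may contain several statements; splitting on ';' is naive,
            # and the real program may parse more carefully.
            for statement in path.read_text().split(";"):
                if statement.strip():
                    db.run_statement(statement)
            db.commit()
            db.log_run(path.name, run_id, "SUCCESS", None, datetime.now())
        except Exception as err:
            db.rollback()  # undoes DML only; MySQL auto-commits DDL
            db.log_run(path.name, run_id, "FAILURE", str(err), datetime.now())
            raise SystemExit(f"{path.name} failed: {err}")
```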
- From the container (from /app): python main.py
- From terminal: docker exec -ti mysql_python_app python main.py
- To run a specific SQL: docker exec -ti mysql_python_app python main.py <SQL file>
- To run SQLs from a directory: docker exec -ti mysql_python_app python main.py <path to SQLs directory>
Note: the default SQL path is /tmp; if running SQL file(s) from a different location in the container, the path must be specified (see the sketch below).
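
The optional argument could be resolved along these lines (an illustrative sketch; the real main.py may differ):

```python
import sys
from pathlib import Path

DEFAULT_SQL_DIR = "/tmp"  # default path where the program looks for SQL files

def resolve_targets(argv=sys.argv):
    """Return the SQL file(s) to run: a given file, a given directory, or /tmp."""
    if len(argv) < 2:
        return sorted(Path(DEFAULT_SQL_DIR).glob("*.sql"))
    target = Path(argv[1])
    if target.is_dir():
        return sorted(target.glob("*.sql"))
    return [target]  # a single SQL file
```

Sorting here is by file name only; the actual run order follows the RUN_ID rules sketched earlier.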
For a better view of your DB tables and of the run statuses in the SQL_RUN_LOG table, configure the DB in one of the available MySQL UIs.
Example, MySQL Workbench:

Recommended: use one of the API platforms available on the internet to view/act on the available Flask end-points, as specified at the Flask base URL.
Example: Postman
You can view the available end-points at the Flask base URL, as follows:

- To get the data of the failed SQL files, use the "failed" end-point (a sketch of such an end-point follows).
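
For illustration, a minimal "failed" end-point might look like this. The route and header key come from this README, while the query and column names are assumptions:

```python
from flask import Flask, jsonify, request
from database import Database  # the Database class from database.py

app = Flask(__name__)
db = Database()

@app.route("/failed")
def failed():
    # The header key/value are described in the API key note below.
    if request.headers.get("Api-Key-Test") != "ayelet":
        return jsonify({"error": "Not Authorized"}), 401
    cursor = db.conn.cursor(dictionary=True)
    cursor.execute("SELECT * FROM SQL_RUN_LOG WHERE status = 'FAILURE'")
    return jsonify(cursor.fetchall())
```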
You may want to run all SQL files from scratch, and therefore drop all your app tables and truncate the SQL_RUN_LOG table.
You can achieve that by either:
- Do it manually in the database itself (via the MySQL client or directly in the database)
- Use the "drop_tables" API. In this case you must have file /sqls/drop_tables.sql (already provided with example tables) with the
relevant "drop table" statements (the "truncate" statement for the SQL_RUN_LOG already provided in file).- In case a table in one or more of the "drop table" statements does not exist, the program will not fail but rather continue to next statement.
- To use the API you must first set a header key (the value is not encrypted in this project).
Set the key as follows under the "Headers" configuration: key name: Api-Key-Test, value: ayelet
If not configured, you'll get a "Not Authorized" response. A usage example follows.
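
For example, with Python's requests library (the end-point paths come from this README; the HTTP method for drop_tables is an assumption):

```python
import requests

BASE_URL = "http://localhost:5000"
headers = {"Api-Key-Test": "ayelet"}  # without this header you get "Not Authorized"

# Meta-data about failed SQL runs:
print(requests.get(f"{BASE_URL}/failed", headers=headers).json())

# Drop the app tables and truncate SQL_RUN_LOG via /sqls/drop_tables.sql
# (GET here is an assumption; the actual end-point may expect another method):
print(requests.get(f"{BASE_URL}/drop_tables", headers=headers).text)
```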