7. Install KaspaCoreSystem (Docker version)

Back

  1. Make sure you have already installed Docker and Docker Compose in the Defense Center environment; see https://docs.docker.com/install/ for the installation guide.
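
You can verify that both tools are available before continuing (version numbers will vary with your installation):

    $ docker --version
    $ docker-compose --version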

  2. Edit the docker-compose.yml file, replace the host IP with your own host IP, and then save it.
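
If you are not sure where the host IP appears, searching the file can help (this assumes your copy of docker-compose.yml uses 127.0.0.1 as the placeholder; adjust the pattern if yours differs):

    $ grep -n "127.0.0.1" docker-compose.yml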

  3. Pull the Docker images:

    $ docker-compose pull
    
  4. Clone the KaspaCoreSystem repository, then change into that directory:

    $ git clone https://github.com/mata-elang-pens/KaspaCoreSystem.git && cd KaspaCoreSystem
    
  5. Change the values in src/main/resources/application.conf to match your environment.
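
To spot the environment-specific entries quickly, a simple search works (a heuristic only; the exact keys depend on your copy of application.conf):

    $ grep -inE "host|port|ip" src/main/resources/application.conf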

  6. Also change the MongoDB IP from 127.0.0.1 to your MongoDB host IP in src/main/scala/me/mamotis/kaspacore/jobs/DataStream.scala.
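
One way to make that change from the command line (assuming 10.0.0.5 is your MongoDB host IP; sed -i.bak keeps a backup of the original file):

    $ sed -i.bak 's/127\.0\.0\.1/10.0.0.5/g' src/main/scala/me/mamotis/kaspacore/jobs/DataStream.scala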

  7. Run this command from the KaspaCoreSystem directory:

    $ sbt assembly
    
  8. Copy the target/scala-2.11/KaspaCore-assembly-0.1.jar file to the Defense Center and place it in the same directory as docker-compose.yml and the other files.
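
For example, with scp (the user name, host, and destination directory below are placeholders for your own Defense Center details):

    $ scp target/scala-2.11/KaspaCore-assembly-0.1.jar user@defense-center:/path/to/compose-dir/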

  9. Start the Docker services in detached mode using this command:

    $ docker-compose up -d
    
  10. Make sure that all services are running:

    $ docker-compose ps
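
Every service should be in the Up state. On recent docker-compose releases you can also list only the running services (the --filter option is not available on very old versions):

    $ docker-compose ps --services --filter "status=running"
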
  11. Open a web browser and navigate to http://your-server-ip:8080 to check whether the application is still running. If it is not, restart the spark-submit service and then check again:

    $ docker-compose start spark-submit
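
If the service keeps stopping, its logs usually show why:

    $ docker-compose logs spark-submit
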
  12. Next, we need to set up the scheduled batch jobs. First, create a directory to hold the files required by the batch jobs:

    $ sudo mkdir -p /etc/mataelang-spark

  13. Create a new file called spark.env:

    $ sudo nano /etc/mataelang-spark/spark.env

Add the following lines to the spark.env file (replace the SPARK_MASTER_HOST value with your server IP; note that Docker's --env-file format requires KEY=value pairs):

    SPARK_MASTER_HOST=yourip
    SPARK_MASTER_PORT=7077
    SPARK_TOTAL_EXECUTOR_CORES=1
    SPARK_CONF_FILE_PATH=/opt/spark.conf
    SPARK_SUBMIT_JAR=file:///opt/KaspaCore-assembly-0.1.jar

Next, create a new file called spark.conf:

    $ sudo nano /etc/mataelang-spark/spark.conf

and then add the following lines to the spark.conf file:

    spark.submit.deployMode=client
    spark.executor.cores=1
    spark.executor.memory=2g
  14. Copy the application file (KaspaCore-assembly-0.1.jar) to /etc/mataelang-spark/:

    $ sudo cp /path/to/KaspaCore-assembly-0.1.jar /etc/mataelang-spark/

  15. Add cron jobs for the KaspaCoreSystem batch jobs. Run the following command to open the crontab file:

    $ sudo crontab -e

After the text editor opens, add the following lines (the three schedules run the daily job at midnight, the monthly job on the first of each month, and the yearly job on January 1st):

    0 0 * * * docker run --rm --name spark-submit-daily --network host -v /etc/localtime:/etc/localtime -v /etc/timezone:/etc/timezone -v /etc/mataelang-spark/spark.conf:/opt/spark.conf -v /etc/mataelang-spark/KaspaCore-assembly-0.1.jar:/opt/KaspaCore-assembly-0.1.jar --env-file /etc/mataelang-spark/spark.env -e SPARK_SUBMIT_CLASS=me.mamotis.kaspacore.jobs.DailyCount mfscy/me-spark-submit:latest
    0 0 1 * * docker run --rm --name spark-submit-monthly --network host -v /etc/localtime:/etc/localtime -v /etc/timezone:/etc/timezone -v /etc/mataelang-spark/spark.conf:/opt/spark.conf -v /etc/mataelang-spark/KaspaCore-assembly-0.1.jar:/opt/KaspaCore-assembly-0.1.jar --env-file /etc/mataelang-spark/spark.env -e SPARK_SUBMIT_CLASS=me.mamotis.kaspacore.jobs.MonthlyCount mfscy/me-spark-submit:latest
    0 0 1 1 * docker run --rm --name spark-submit-yearly --network host -v /etc/localtime:/etc/localtime -v /etc/timezone:/etc/timezone -v /etc/mataelang-spark/spark.conf:/opt/spark.conf -v /etc/mataelang-spark/KaspaCore-assembly-0.1.jar:/opt/KaspaCore-assembly-0.1.jar --env-file /etc/mataelang-spark/spark.env -e SPARK_SUBMIT_CLASS=me.mamotis.kaspacore.jobs.AnnuallyCount mfscy/me-spark-submit:latest
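
After saving the crontab, confirm that the entries are installed:

    $ sudo crontab -l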

Back