Big Data & Cloud Data Sync Tool

License

Summary

The cloud data sync tool helps during migration by moving partitioned and non-partitioned HDFS data to cloud buckets.

Features and Limitations

The cloud-datasync tool currently supports incrementally copying partitioned data and bulk-copying non-partitioned data to GCP or AWS cloud storage from any local Linux/macOS machine. The following features are planned for the tool:

  • Use the 'distcp' command to copy data efficiently from a local HDFS cluster to any cloud service.
  • Automatically generate an Azkaban job flow and schedule the data copy jobs on Azkaban servers. The Azkaban Python package could be used to implement this feature.
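As a rough sketch of the incremental partitioned copy idea, the loop below walks a range of daily partitions and emits one copy operation per partition. The paths, bucket name, and date range are illustrative placeholders, not the tool's actual interface:

```shell
#!/bin/sh
# Sketch: copy each daily HDFS partition to a cloud bucket, one day at a time.
# SRC_ROOT, DEST_ROOT, and the date range are hypothetical examples.
SRC_ROOT="hdfs:///warehouse/events"
DEST_ROOT="s3a://my-bucket/events"

start="2020-01-01"
days=3

for i in $(seq 0 $((days - 1))); do
  # GNU date (Linux) syntax; macOS date uses `-v+Nd` instead, which is
  # why the repo ships both dateUtilsLinux.sh and dateUtilsMac.sh.
  day=$(date -d "$start + $i day" +%Y-%m-%d)
  src="$SRC_ROOT/dt=$day"
  dest="$DEST_ROOT/dt=$day"
  # A real run would invoke a copy command here, e.g.
  #   hadoop distcp "$src" "$dest"
  # It is echoed so the sketch runs without a cluster.
  echo "copy $src -> $dest"
done
```

For the non-partitioned (bulk) case, the same role could be served by a single recursive copy of the whole directory rather than a per-partition loop.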

To request a new feature or report an issue, feel free to open an issue in the GitHub repository.

Directory Layout

.
├── LICENSE
├── README.md
├── doc
│   └── blob
│       └── cloud-data-sync.png
├── modules                                             --> module folder
│   ├── authentication.sh
│   ├── conf                                            --> configurations for the data sync tool
│   │   └── aws-conf.properties                         --> sample access and secret keys for AWS
│   ├── datePatternValidation.sh
│   ├── dateUtilsLinux.sh                               --> date utils for Linux machines
│   ├── dateUtilsMac.sh                                 --> date utils for Mac machines
│   ├── processBulk.sh                                  --> uploads bulk data
│   └── processPartitioned.sh                           --> uploads partitioned data incrementally
├── runDataSync.sh                                      --> main script that uses the modules
├── sample_AWS_command.sh                               --> sample commands to copy data to an AWS S3 bucket
├── sample_GCP_command.sh                               --> sample commands to copy data to GCP storage
└── test                                                --> test scripts for each module
    ├── authenticationTest.sh
    ├── datePatternValidationTest.sh
    ├── dateUtilsLinuxTest.sh
    └── dateUtilsMacTest.sh

License

MIT © Renien