ARTH-TASKS/hadoop_master-slave_model

My own super computer: a model of master-slave topology.

As a matter of fact, Facebook handles 105 petabytes of data every hour, which is roughly 100 million gigabytes. So how is this huge bulk of data managed? Is it simply because massive hard disks are available to store it? This problem is known as #BIGDATA. Big data is not a technology; it is a problem we have to overcome with technology.

## The solution is DISTRIBUTED STORAGE: DISCOVERY OF SUPER COMPUTERS.
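To make the master-slave idea concrete, below is a minimal sketch of how an HDFS cluster is typically wired together, assuming a Hadoop 2.x installation. The master IP `192.168.1.10`, port `9000`, and the directories `/nn` and `/dn` are placeholder assumptions for illustration, not values taken from this repo.

```xml
<!-- core-site.xml (on every node): tells each node where the
     master (NameNode) lives. IP and port are placeholders. -->
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://192.168.1.10:9000</value>
  </property>
</configuration>
```

```xml
<!-- hdfs-site.xml on the master: local directory where the
     NameNode stores the filesystem metadata. -->
<configuration>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>/nn</value>
  </property>
</configuration>
```

```xml
<!-- hdfs-site.xml on each slave: local directory the DataNode
     contributes to the shared storage pool. -->
<configuration>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>/dn</value>
  </property>
</configuration>
```

With the configs in place, the daemons can be started and the cluster verified:

```bash
# On the master: format the NameNode once, then start it
hdfs namenode -format
hadoop-daemon.sh start namenode

# On each slave: start the DataNode
hadoop-daemon.sh start datanode

# From any node: confirm the slaves have registered
hdfs dfsadmin -report
```

Once the DataNodes register, the configured capacity shown by `hdfs dfsadmin -report` is the sum of what every slave contributes: one logical storage pool built from many small disks, which is the whole point of the master-slave topology.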

About

Configuration of the Hadoop Distributed File System (HDFS).
