Big Data Projects in Spark using Locality sensitive hashing, Eigen Decomposition etc
dibya-pati/BigData

Big Data Using Apache Spark

Big Data assignments in Spark

  • Assignment 1:
  1. Uses the map and reduce programming model to compute the word count of a paragraph and the set difference of two datasets.
  2. Both jobs are implemented in Apache Spark.
  • Assignment 3: Blog Analysis: computes the total number of occurrences of each industry-related word across all files in the blog corpus. The industry names are initially extracted from the fourth field of each filename.
  • Assignment 2:

This assignment involves (geospatial) analysis of satellite imagery using Eigen decomposition of image features and locality sensitive hashing.
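As a rough illustration of Assignment 1's map/reduce word count and set difference, here is a minimal pure-Python sketch of the same pipeline (the function names and data are hypothetical, and plain Python stands in for Spark's flatMap/reduceByKey/subtract operations):

```python
from collections import Counter

def word_count(paragraph):
    """Map each word to (word, 1), then reduce by key to sum the counts,
    mirroring Spark's flatMap -> map -> reduceByKey pipeline."""
    pairs = [(w, 1) for w in paragraph.split()]  # map phase: (word, 1) pairs
    counts = Counter()
    for word, one in pairs:                      # reduce-by-key phase
        counts[word] += one
    return dict(counts)

def set_difference(a, b):
    """Elements of a that are not in b, analogous to RDD.subtract()."""
    return set(a) - set(b)

print(word_count("spark makes big data easy and spark is fast"))
print(set_difference([1, 2, 3, 4], [2, 4]))
```

In Spark itself, the reduce-by-key loop would instead run in parallel across partitions, but the shape of the computation is the same.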
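The blog analysis in Assignment 3 can be sketched as follows. This is a hypothetical stand-in, not the repository's code: it assumes dot-separated filenames with the industry label as the fourth field, and counts how often that industry word appears in each file's text:

```python
from collections import Counter

def industry_word_counts(files):
    """files: dict mapping filename -> file text.
    The industry label is taken from the fourth dot-separated field of the
    filename (an assumed naming scheme), and occurrences of that word are
    summed over all files."""
    totals = Counter()
    for name, text in files.items():
        industry = name.split(".")[3].lower()  # fourth field of the filename
        totals[industry] += sum(1 for w in text.lower().split() if w == industry)
    return dict(totals)

# Illustrative filenames and contents, not real corpus data
blogs = {
    "101.male.25.Technology.Leo.xml": "technology moves fast and technology pays",
    "102.female.31.Banking.Aries.xml": "banking hours are long",
}
print(industry_word_counts(blogs))  # → {'technology': 2, 'banking': 1}
```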
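For the Eigen-decomposition part of Assignment 2, a minimal sketch of extracting a dominant eigenvector from a feature matrix is power iteration (a simple stand-in for a full eigen decomposition; the matrix below is illustrative, not derived from actual imagery):

```python
def power_iteration(matrix, iters=200):
    """Approximate the dominant eigenvalue/eigenvector of a square matrix
    by repeatedly multiplying and renormalizing a starting vector."""
    n = len(matrix)
    v = [1.0] * n
    for _ in range(iters):
        w = [sum(matrix[i][j] * v[j] for j in range(n)) for i in range(n)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    # Rayleigh quotient v^T A v gives the corresponding eigenvalue
    eigenvalue = sum(v[i] * sum(matrix[i][j] * v[j] for j in range(n))
                     for i in range(n))
    return eigenvalue, v

# Symmetric 2x2 example with eigenvalues 3 and 1
val, vec = power_iteration([[2.0, 1.0], [1.0, 2.0]])
print(round(val, 4))  # → 3.0
```

In practice one would use a library routine (e.g. an LAPACK-backed eigensolver) on the image feature covariance matrix rather than hand-rolled iteration.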
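Locality sensitive hashing for such feature vectors is commonly done with random-hyperplane signatures: the sign of each dot product gives one bit, and vectors with small cosine distance tend to share bits. This is a generic sketch of that technique, not the repository's implementation, and the vectors are made up:

```python
import random

def lsh_signature(vector, hyperplanes):
    """One bit per hyperplane: 1 if the vector lies on the positive side,
    0 otherwise. Similar vectors tend to collide on many bits."""
    def dot(u, v):
        return sum(a * b for a, b in zip(u, v))
    return tuple(1 if dot(vector, h) >= 0 else 0 for h in hyperplanes)

random.seed(42)
dim, n_bits = 4, 8
planes = [[random.gauss(0, 1) for _ in range(dim)] for _ in range(n_bits)]

a = [1.0, 0.9, 0.1, 0.0]
b = [0.9, 1.0, 0.0, 0.1]    # close to a
c = [-1.0, 0.0, 0.9, -0.8]  # far from a

sig_a, sig_b, sig_c = (lsh_signature(v, planes) for v in (a, b, c))
print(sig_a, sig_b, sig_c)
```

Candidate near-duplicates are then found by bucketing vectors on their signatures instead of comparing every pair.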
