While CCSparkJob uses boto3 to read WARC/WAT/WET files, CCIndexSparkJob requires that the Spark installation include the libraries/jars needed to access data on S3 (s3://commoncrawl/). These are usually provided when Spark runs in a Hadoop cluster (e.g. EMR, Spark on YARN), but may be missing from a standalone Spark package, especially when running Spark locally (not in a Hadoop cluster). Also add information that Hadoop S3 FileSystem implementations require adapting the scheme part of the data URI (s3:// on EMR, s3a:// when using S3A).
(reported by @calee88 in #12)
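As a minimal sketch of what the documentation could show: one common way to make S3 data readable from a local (non-Hadoop-cluster) Spark installation is to pull in the hadoop-aws package at submit time and use the s3a:// scheme. The script name below is a hypothetical placeholder, and the hadoop-aws version is illustrative; it must match the Hadoop version your Spark build ships with.

```shell
# Add the S3A filesystem implementation to a local Spark installation.
# NOTE: the hadoop-aws version must match your Spark distribution's
# bundled Hadoop version; 3.3.4 is only an example.
spark-submit \
    --packages org.apache.hadoop:hadoop-aws:3.3.4 \
    my_cc_index_job.py \
    "s3a://commoncrawl/cc-index/table/cc-main/warc/"
```

On EMR, by contrast, the EMRFS connector is preinstalled and the plain s3:// scheme is used, so no extra packages are needed there.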