Running a Spark task wastes a lot of disk space when a dependency jar is very large, because for every task instance Spark creates a working directory on the worker machine and copies the dependent jar files into it.
So create a shared dictonary on HDFS and link to spark call would be a good way. or try other better way.