jvm profiler parameter ignored #63

Closed

sashasami03 opened this issue Dec 8, 2019 · 1 comment

@sashasami03
I am trying to use the Uber JVM Profiler to profile my Spark application (Spark 2.4, running on EMR 5.21).

Following is my cluster configuration:

    [
        {
            "classification": "spark-defaults",
            "properties": {
                "spark.executor.memory": "38300M",
                "spark.driver.memory": "38300M",
                "spark.yarn.scheduler.reporterThread.maxFailures": "5",
                "spark.driver.cores": "5",
                "spark.yarn.driver.memoryOverhead": "4255M",
                "spark.executor.heartbeatInterval": "60s",
                "spark.rdd.compress": "true",
                "spark.network.timeout": "800s",
                "spark.executor.cores": "5",
                "spark.memory.storageFraction": "0.27",
                "spark.speculation": "true",
                "spark.sql.shuffle.partitions": "200",
                "spark.shuffle.spill.compress": "true",
                "spark.shuffle.compress": "true",
                "spark.storage.level": "MEMORY_AND_DISK_SER",
                "spark.default.parallelism": "200",
                "spark.serializer": "org.apache.spark.serializer.KryoSerializer",
                "spark.memory.fraction": "0.80",
                "spark.executor.extraJavaOptions": "-XX:+UseG1GC -XX:InitiatingHeapOccupancyPercent=35 -XX:OnOutOfMemoryError='kill -9 %p'",
                "spark.executor.instances": "107",
                "spark.yarn.executor.memoryOverhead": "4255M",
                "spark.dynamicAllocation.enabled": "false",
                "spark.driver.extraJavaOptions": "-XX:+UseG1GC -XX:InitiatingHeapOccupancyPercent=35 -XX:OnOutOfMemoryError='kill -9 %p'"
            },
            "configurations": []
        },
        {
            "classification": "yarn-site",
            "properties": {
                "yarn.log-aggregation-enable": "true",
                "yarn.nodemanager.pmem-check-enabled": "false",
                "yarn.nodemanager.vmem-check-enabled": "false"
            },
            "configurations": []
        },
        {
            "classification": "spark",
            "properties": {
                "maximizeResourceAllocation": "true",
                "spark.sql.broadcastTimeout": "-1"
            },
            "configurations": []
        },
        {
            "classification": "emrfs-site",
            "properties": {
                "fs.s3.threadpool.size": "50",
                "fs.s3.maxConnections": "5000"
            },
            "configurations": []
        },
        {
            "classification": "core-site",
            "properties": {
                "fs.s3.threadpool.size": "50",
                "fs.s3.maxConnections": "5000"
            },
            "configurations": []
        }
    ]
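
One detail worth flagging in this configuration (a general Spark behavior, not something confirmed in this thread): a `--conf spark.executor.extraJavaOptions=...` passed to spark-submit, as in the command further below, replaces rather than appends to the value set here via the spark-defaults classification, so the GC flags above are dropped once the agent is attached on the command line. A minimal sketch of carrying both sets of flags in the classification itself, assuming the jar sits at /tmp/jvm-profiler-1.0.0.jar on every node (the path the bootstrap script below creates):

        {
            "classification": "spark-defaults",
            "properties": {
                "spark.executor.extraJavaOptions": "-XX:+UseG1GC -XX:InitiatingHeapOccupancyPercent=35 -XX:OnOutOfMemoryError='kill -9 %p' -javaagent:/tmp/jvm-profiler-1.0.0.jar=reporter=com.uber.profiling.reporters.ConsoleOutputReporter,metricInterval=5000"
            },
            "configurations": []
        }

The same consideration applies to spark.driver.extraJavaOptions.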

The profiler jar is stored in S3 (mybucket/profilers/jvm-profiler-1.0.0.jar). While bootstrapping my core and master nodes, I run the following bootstrap script:

     sudo mkdir -p /tmp
     aws s3 cp s3://mybucket/profilers/jvm-profiler-1.0.0.jar /tmp/
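
Before looking at the submit command, a quick sanity check (a suggestion, not a step from the original report) is to confirm the bootstrap action actually placed the jar on the nodes:

     # run on a master/core node; should list the jar copied by the bootstrap script
     ls -l /tmp/jvm-profiler-1.0.0.jar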

I submit my EMR step as follows:

       spark-submit --deploy-mode cluster --master=yarn ......(other parameters)......... \
         --conf spark.jars=/tmp/jvm-profiler-1.0.0.jar \
         --conf spark.driver.extraJavaOptions=-javaagent:jvm-profiler-1.0.0.jar=reporter=com.uber.profiling.reporters.ConsoleOutputReporter,metricInterval=5000 \
         --conf spark.executor.extraJavaOptions=-javaagent:jvm-profiler-1.0.0.jar=reporter=com.uber.profiling.reporters.ConsoleOutputReporter,metricInterval=5000

But I am unable to see any profiling-related output in the logs (I checked both the stdout and stderr logs for all containers). Is the parameter being ignored? Am I missing something? Is there anything else I could check to see why this parameter is being ignored?
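
If the jar is present on the nodes, one plausible culprit (an assumption to test, not a confirmed fix from this thread) is the agent path: -javaagent:jvm-profiler-1.0.0.jar is relative, so the JVM only finds the jar if it has been localized into the YARN container's working directory. Pointing the agent at the absolute path the bootstrap script creates sidesteps that entirely:

       # /tmp/jvm-profiler-1.0.0.jar exists on every node via the bootstrap action,
       # so the absolute path resolves for both the driver and the executor JVMs
       spark-submit --deploy-mode cluster --master=yarn ......(other parameters)......... \
         --conf spark.driver.extraJavaOptions=-javaagent:/tmp/jvm-profiler-1.0.0.jar=reporter=com.uber.profiling.reporters.ConsoleOutputReporter,metricInterval=5000 \
         --conf spark.executor.extraJavaOptions=-javaagent:/tmp/jvm-profiler-1.0.0.jar=reporter=com.uber.profiling.reporters.ConsoleOutputReporter,metricInterval=5000

With the absolute path, `--conf spark.jars=...` is no longer needed for the agent itself.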
