cannot submit to aurora, it seems to hang. #3502
Comments
@dttlgotv
Does it work? Please refer to this in my aurora file.
|
About the command `hdfs dfs -get /heron/dist/heron-core.tar.gz heron-core.tar.gz && tar zxf heron-core.tar.gz`: I tried it on three cluster machines and it works well. My heron.aurora is below:

heron_core_release_uri = '{{CORE_PACKAGE_URI}}'
# --- processes ---
fetch_heron_system = Process(
fetch_user_package = Process(

Result, mesos stderr: |
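For anyone debugging the same step: the fetch process does two things, download and unpack. The unpack half can be exercised in isolation with a locally built tarball standing in for the file that `hdfs dfs -get` would download (a sketch; the archive contents here are hypothetical, only the mechanics are shown):

```python
import os
import tarfile
import tempfile

# Build a stand-in heron-core.tar.gz locally, instead of fetching the
# real one with `hdfs dfs -get` (which needs a working Hadoop client).
workdir = tempfile.mkdtemp()
bin_dir = os.path.join(workdir, "heron-core", "bin")
os.makedirs(bin_dir)
with open(os.path.join(bin_dir, "heron-executor"), "w") as f:
    f.write("#!/bin/sh\n")
archive = os.path.join(workdir, "heron-core.tar.gz")
with tarfile.open(archive, "w:gz") as tar:
    tar.add(os.path.join(workdir, "heron-core"), arcname="heron-core")

# Unpack into a fresh directory, as the sandbox step would.
sandbox = tempfile.mkdtemp()
with tarfile.open(archive) as tar:
    tar.extractall(sandbox)
unpacked = os.path.join(sandbox, "heron-core", "bin", "heron-executor")
print("unpack works:", os.path.exists(unpacked))
```

If this succeeds but the Aurora task still fails, the problem is more likely the download half (hdfs client missing in the executor's environment) than the tar step.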
I used your reference:

#heron_core_release_uri = '{{CORE_PACKAGE_URI}}'
# --- processes ---
fetch_heron_system = Process(

Result: the mesos and aurora stderr cannot be seen; perhaps the task is not being scheduled. But on the command line this error can be seen:

Error loading configuration: name 'textwrap' is not defined
[2020-03-29 11:15:47 +0000] [ERROR]: Failed to launch topology 'Test3Topology' |
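Side note on that message: `name 'textwrap' is not defined` is an ordinary Python NameError. A .aurora file is evaluated as Python code, so referencing a module that was never imported into the evaluation namespace fails exactly this way. A minimal illustration (not Aurora's actual loader, just the same failure mode):

```python
# A .aurora config is executed as Python. Referencing a module that is
# not imported in the evaluation namespace raises NameError, which the
# client reports as "Error loading configuration: name '...' is not defined".
config_src = "wrapped = textwrap.dedent('  hello ')"
msg = ""
try:
    exec(config_src, {})  # empty namespace: textwrap is not imported
except NameError as exc:
    msg = str(exc)
print(msg)
```

So the error suggests the heron.aurora file (or something it pulls in) uses a name that is missing from the config's namespace, rather than a scheduler-side problem.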
I just gave two results (your reference and mine); please check my comments. Thanks a lot |
or Remove
|
I missed this line. Aurora error:
mesos error: |
Some of my config files.

upload.yaml:
heron.uploader.hdfs.config.directory: "/usr/local/hadoop/etc/hadoop"
# heron.uploader.hdfs.topologies.directory.uri: hdfs://heron/topologies/${CLUSTER}
heron.uploader.hdfs.topologies.directory.uri: "/heron/topologies/${CLUSTER}"

client.yaml:
# location of the core package
heron.package.core.uri: "/heron/dist/heron-core.tar.gz"
# Whether role/env is required to submit a topology. Default value is False.
heron.config.is.role.required: True |
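For readers unfamiliar with the `${CLUSTER}` token in the uploader URI: it is a placeholder that gets filled with the cluster name at submit time, so the topology package lands under a cluster-specific path. The effect can be illustrated with Python's `string.Template` (an illustration only; this is not Heron's actual substitution code):

```python
from string import Template

# Hypothetical illustration of expanding the ${CLUSTER} placeholder in
# heron.uploader.hdfs.topologies.directory.uri for a cluster named "aurora".
uri_template = Template("/heron/topologies/${CLUSTER}")
result = uri_template.substitute(CLUSTER="aurora")
print(result)
```

This matches the path seen in the debug log later in the thread (`/heron/topologies/aurora/Test3Topology-...`).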
to
|
Sorry, I cannot understand what you mean. |
Do you mean using hdfs to get ..... ? |
The last modification: my heron.aurora file is now as below, but the error can still be seen.

heron_core_release_uri = '{{CORE_PACKAGE_URI}}'
# --- processes ---
fetch_heron_system = Process(
fetch_user_package = Process(

Aurora error: mesos error: |
When the mesos-agent server executor works:
|
Download the hdfs://...../heron-cento.tgz package from hdfs to the container.
|
Please give me more information. I cannot get your idea.
Sent from my iPhone

> On Sun, Mar 29, 2020, 3:22 PM, Roger Pack wrote:
> Download the hdfs://...../heron-cento.tgz package from hdfs to the container.
> heron_core_release_uri = '{{CORE_PACKAGE_URI}}'
|
Make the hdfs command work on all master/agent (slave) machines. If you are using CentOS with CDH, you need to prepare the environment in advance so that hdfs works on your agents (slaves): `yum install hadoop-client hadoop-hdfs`. Heron should pre-set your environment and set it to custom settings. |
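A quick preflight for this advice: the fetch process can only run `hdfs dfs -get ...` if an hdfs client is on PATH in the environment the executor actually runs in. A small check like this (a sketch; note it checks PATH for the current user, which may differ from the user the Thermos executor runs as) can be run on each agent:

```python
import shutil

# Check whether the hdfs binary that the Aurora fetch process relies on
# is available on PATH.
hdfs_path = shutil.which("hdfs")
if hdfs_path:
    print("hdfs client found at", hdfs_path)
else:
    print("hdfs client missing; on CentOS/CDH install one with "
          "'yum install hadoop-client hadoop-hdfs'")
```

A command that works in an interactive shell can still fail inside the sandbox if PATH, JAVA_HOME, or the Hadoop configuration directory is not set for the executor's user.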
That part is already done; I have tried running the command on the command line on the slave machines, and it all works.
You can check my comments above (the third comment).
Sent from my iPhone

> On Sun, Mar 29, 2020, 4:38 PM, choi se wrote:
> Make the hdfs command work on all master/agent (slave).
> If you are using CentOS, CDH.
> You need to create an environment so that hdfs can work in advance on your agent (slave):
> `yum install hadoop-client hadoop-hdfs`
> Heron should pre-set your environment and set it to custom settings.
|
this ? |
Shall we talk over Zoom.us? |
Let me download Zoom, then I will contact you. Thanks a lot |
Check Slack DM!! |
https://zoom.com.cn/j/604255874 Can you use this link? I cannot find your name... |
@dttlgotv Check Slack DM!! |
Take a look at Slack's DM. |
[DEBUG] Using auth module: <apache.aurora.common.auth.auth_module.InsecureAuthModule object at 0x7f875d640510>
[INFO] Creating job Test3Topology
DEBUG] Full configuration: JobConfiguration(instanceCount=2, cronSchedule=None, cronCollisionPolicy=0, key=JobKey(environment=u'devel', role=u'gxh', name=u'Test3Topology'), taskConfig=TaskConfig(isService=True, contactEmail=None, taskLinks={}, tier=u'preemptible', mesosFetcherUris=None, executorConfig=ExecutorConfig(data='{"environment": "devel", "health_check_config": {"health_checker": {"http": {"expected_response_code": 200, "endpoint": "/health", "expected_response": "ok"}}, "min_consecutive_successes": 1, "initial_interval_secs": 30.0, "max_consecutive_failures": 2, "timeout_secs": 5.0, "interval_secs": 10.0}, "name": "Test3Topology", "service": true, "max_task_failures": 1, "cron_collision_policy": "KILL_EXISTING", "enable_hooks": false, "cluster": "aurora", "task": {"processes": [{"daemon": false, "name": "fetch_heron_system", "ephemeral": false, "max_failures": 1, "min_duration": 5, "cmdline": "hdfs dfs -get /heron/dist/heron-core.tar.gz heron-core.tar.gz && tar zxf heron-core.tar.gz", "final": false}, {"daemon": false, "name": "fetch_user_package", "ephemeral": false, "max_failures": 1, "min_duration": 5, "cmdline": "hdfs dfs -get /heron/topologies/aurora/Test3Topology-gxh-tag-0-922516477660846776.tar.gz topology.tar.gz && tar zxf topology.tar.gz", "final": false}, {"daemon": false, "name": "launch_heron_executor", "ephemeral": false, "max_failures": 1, "min_duration": 5, "cmdline": "./heron-core/bin/heron-executor --shard={{mesos.instance}} --master-port={{thermos.ports[port1]}} --tmaster-controller-port={{thermos.ports[port2]}} --tmaster-stats-port={{thermos.ports[port3]}} --shell-port={{thermos.ports[http]}} --metrics-manager-port={{thermos.ports[port4]}} --scheduler-port={{thermos.ports[scheduler]}} --metricscache-manager-master-port={{thermos.ports[metricscachemgr_masterport]}} --metricscache-manager-stats-port={{thermos.ports[metricscachemgr_statsport]}} --checkpoint-manager-port={{thermos.ports[ckptmgr_port]}} --topology-name=Test3Topology 
--topology-id=Test3Topology3dd4ac0f-b248-4dd9-a91d-6bc53dafb8c2 --topology-defn-file=Test3Topology.defn --state-manager-connection=127.0.0.1:2181 --state-manager-root=/heron --state-manager-config-file=./heron-conf/statemgr.yaml --tmaster-binary=./heron-core/bin/heron-tmaster --stmgr-binary=./heron-core/bin/heron-stmgr --metrics-manager-classpath=./heron-core/lib/metricsmgr/* --instance-jvm-opts=\"\" --classpath=heron-streamlet-examples.jar --heron-internals-config-file=./heron-conf/heron_internals.yaml --override-config-file=./heron-conf/override.yaml --component-ram-map=random-sentences-source:209715200 --component-jvm-opts=\"\" --pkg-type=jar --topology-binary-file=heron-streamlet-examples.jar --heron-java-home=/usr/lib/jvm/java-1.8.0-openjdk-amd64 --heron-shell-binary=./heron-core/bin/heron-shell --cluster=aurora --role=gxh --environment=devel --instance-classpath=./heron-core/lib/instance/* --metrics-sinks-config-file=./heron-conf/metrics_sinks.yaml --scheduler-classpath=./heron-core/lib/scheduler/:./heron-core/lib/packing/:./heron-core/lib/statemgr/* --python-instance-binary=./heron-core/bin/heron-python-instance --cpp-instance-binary=./heron-core/bin/heron-cpp-instance --metricscache-manager-classpath=./heron-core/lib/metricscachemgr/* --metricscache-manager-mode=disabled --is-stateful=false --checkpoint-manager-classpath=./heron-core/lib/ckptmgr/:./heron-core/lib/statefulstorage/: --stateful-config-file=./heron-conf/stateful.yaml --checkpoint-manager-ram=1073741824 --health-manager-mode=disabled --health-manager-classpath=./heron-core/lib/healthmgr/*", "final": false}, {"daemon": false, "name": "discover_profiler_port", "ephemeral": false, "max_failures": 1, "min_duration": 5, "cmdline": "echo {{thermos.ports[yourkit]}} > yourkit.port", "final": false}], "name": "setup_and_run", "finalization_wait": 30, "max_failures": 1, "max_concurrency": 0, "resources": {"gpu": 0, "disk": 13958643712, "ram": 2357198848, "cpu": 1.0}, "constraints": [{"order": 
["fetch_heron_system", "fetch_user_package", "launch_heron_executor", "discover_profiler_port"]}]}, "production": false, "role": "gxh", "tier": "preemptible", "announce": {"primary_port": "http", "portmap": {"health": "http"}}, "lifecycle": {"http": {"graceful_shutdown_endpoint": "/quitquitquit", "port": "health", "shutdown_endpoint": "/abortabortabort"}}, "priority": 0}', name='AuroraExecutor'), requestedPorts=set([u'port4', u'http', u'metricscachemgr_masterport', u'yourkit', u'metricscachemgr_statsport', u'scheduler', u'ckptmgr_port', u'port2', u'port3', u'port1']), maxTaskFailures=1, priority=0, ramMb=2248, job=JobKey(environment=u'devel', role=u'gxh', name=u'Test3Topology'), production=False, diskMb=13312, resources=frozenset([]), owner=Identity(user='root'), container=Container(docker=None, mesos=MesosContainer(image=None, volumes=None)), metadata=frozenset([]), numCpus=1.0, constraints=set([])), owner=Identity(user='root'))
[DEBUG] Querying instance statuses: None
[DEBUG] Response from scheduler: OK (message: )
[DEBUG] Querying instance statuses: None
[DEBUG] Response from scheduler: OK (message: )
[DEBUG] Querying instance statuses: None
[DEBUG] Response from scheduler: OK (message: )
[DEBUG] Querying instance statuses: None
[DEBUG] Response from scheduler: OK (message: )
[DEBUG] Querying instance statuses: None
[DEBUG] Response from scheduler: OK (message: )
[DEBUG] Querying instance statuses: None
[DEBUG] Response from scheduler: OK (message: )
[DEBUG] Querying instance statuses: None
[DEBUG] Response from scheduler: OK (message: )