This repository has been archived by the owner on Dec 13, 2023. It is now read-only.

Make orkes queue default #3435

Merged
merged 9 commits into from
Mar 14, 2023

Conversation

manan164
Contributor

Deprecate dyno queue.

  • Refactoring (no functional changes, no api changes)

@@ -16,6 +16,7 @@ springdoc.api-docs.path=/api-docs
loadSample=true

conductor.db.type=memory
conductor.queue.type=redis_standalone
Collaborator

Does this add a Redis standalone dependency for starting Conductor?

Contributor Author

Yes @aravindanr, this will add the dependency on Redis standalone.


@manan164 is there a chance you could update this doc to mention that we need to run Redis standalone before starting the server locally?
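For anyone reading along before the docs are updated: a quick way to get a standalone Redis running locally is via Docker. This is only a sketch; the container name and image tag are assumptions, not from this thread — any local Redis on port 6379 should do.

```shell
# Start a standalone Redis on the default port 6379 (container name is illustrative)
docker run -d --name conductor-redis -p 6379:6379 redis
```

Then point the server at it with `conductor.redis.hosts=localhost:6379:us-east-1c` (the rack value is arbitrary for a plain Redis).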


If I don't define the conductor.queue.type config, will it still use the orkes_queue?

Contributor Author

No @lijia-rengage, we need to set the queue type explicitly; otherwise it will use Dynomite.
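In other words, based on the diff in this PR, the property has to be present in application.properties to opt in:

```properties
# Use the Orkes redis_standalone queue implementation.
# If this property is omitted, Conductor falls back to dyno-queues (Dynomite).
conductor.queue.type=redis_standalone
```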

@missedone

missedone commented Mar 16, 2023

@manan164, I got the following error when starting Conductor on the latest main branch. How can I configure MeterRegistry?

[com/netflix/conductor/core/config/ConductorCoreConfiguration.class]: Unsatisfied dependency expressed through method 'getEventQueueProviders' parameter 0; nested exception is 
org.springframework.beans.factory.UnsatisfiedDependencyException: Error creating bean with name 'conductorEventQueueProvider' defined in URL [jar:file:/Users/xxx/workspace/platform/conductor/core/build/libs/conductor-core-3.14.0-SNAPSHOT.jar!/com/netflix/conductor/core/events/queue/ConductorEventQueueProvider.class]: Unsatisfied dependency expressed through constructor parameter 0; nested exception is 
org.springframework.beans.factory.UnsatisfiedDependencyException: Error creating bean with name 'getQueueDAOStandalone' defined in class path resource [io/orkes/conductor/queue/config/RedisQueueConfiguration.class]: Unsatisfied dependency expressed through method 'getQueueDAOStandalone' parameter 1; nested exception is 
org.springframework.beans.factory.NoSuchBeanDefinitionException: No qualifying bean of type 'io.micrometer.core.instrument.MeterRegistry' available: expected at least 1 bean which qualifies as autowire candidate. Dependency annotations: {}
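Not from this thread, but for anyone else hitting the same NoSuchBeanDefinitionException: a common Spring workaround is to register a fallback MeterRegistry bean yourself. This is only a sketch under the assumption that Micrometer is on the classpath; the class name is illustrative and not part of Conductor, and in this case the root cause turned out to be a stale IDE cache rather than missing configuration.

```java
import io.micrometer.core.instrument.MeterRegistry;
import io.micrometer.core.instrument.simple.SimpleMeterRegistry;
import org.springframework.boot.autoconfigure.condition.ConditionalOnMissingBean;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

// Hypothetical fallback so beans such as getQueueDAOStandalone can resolve a
// MeterRegistry even when no metrics exporter is configured.
@Configuration
public class FallbackMeterRegistryConfig {

    @Bean
    @ConditionalOnMissingBean(MeterRegistry.class)
    public MeterRegistry meterRegistry() {
        return new SimpleMeterRegistry();
    }
}
```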

the relevant snippet of application.properties:

conductor.queue.type=redis_standalone
conductor.redis.hosts=redis.xxx.amazonaws.com:6379:us-west-2b

@manan164
Contributor Author

Hi @missedone, thanks for reporting. Can you please share the properties used other than those mentioned above? I tried but was not able to reproduce it.

@missedone

# Servers.
conductor.grpc-server.enabled=false

# Database persistence type.
conductor.db.type=postgres
# Cache expiry for the task definitions in seconds
#conductor.postgres.taskDefCacheRefreshInterval=60

spring.datasource.url=jdbc:postgresql://aaa.rds.amazonaws.com:5432/db?currentSchema=public&ApplicationName=conductor
spring.datasource.username=xxx
spring.datasource.password=yyy

# Hikari pool sizes are -1 by default and prevent startup
spring.datasource.hikari.maximum-pool-size=10
spring.datasource.hikari.minimum-idle=2

# Elastic search instance indexing is enabled.
conductor.indexing.enabled=true

# Transport address to elasticsearch
conductor.elasticsearch.url=https://es.xxxx

# Name of the elasticsearch cluster
conductor.elasticsearch.indexName=conductor

conductor.queue.type=redis_standalone
#Redis configuration details.
#format is host:port:rack separated by semicolon
#Auth is supported. Password is taken from host[0]. format: host:port:rack:password
conductor.redis.hosts=redis.xxx.amazonaws.com:6379:us-west-2b

# Load sample kitchen sink workflow
loadSample=false
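As an aside, the conductor.redis.hosts format described in the comments above (host:port:rack entries separated by semicolons, with an optional :password segment) can be illustrated with a small parser. This is only a sketch of the documented format, not Conductor's actual configuration code.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative parser for the host:port:rack[:password] format; names here
// are hypothetical and not taken from the Conductor codebase.
public class RedisHostsParser {

    record RedisHost(String host, int port, String rack, String password) {}

    static List<RedisHost> parse(String value) {
        List<RedisHost> hosts = new ArrayList<>();
        for (String entry : value.split(";")) {
            String[] parts = entry.split(":");
            hosts.add(new RedisHost(
                    parts[0],
                    Integer.parseInt(parts[1]),
                    parts[2],
                    parts.length > 3 ? parts[3] : null)); // password is optional
        }
        return hosts;
    }

    public static void main(String[] args) {
        List<RedisHost> hosts = parse("redis.xxx.amazonaws.com:6379:us-west-2b");
        System.out.println(hosts.get(0).host() + " " + hosts.get(0).port());
        // prints: redis.xxx.amazonaws.com 6379
    }
}
```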

@missedone

I tried with the following properties as well, but it didn't work either:

management.endpoint.metrics.enabled=true
management.endpoints.web.exposure.include=*
management.metrics.export.simple.enabled=true

@manan164
Contributor Author

Hi @missedone, I tried with the above properties, but the server seems to work fine.
I see you are using Postgres and the queue, so I added implementation "com.netflix.conductor:conductor-postgres-persistence:3.13.3" in server/build.gradle and kept everything else the same. Can you please join the community? We can debug together.
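For reference, the dependency mentioned above, as it would appear in server/build.gradle:

```groovy
dependencies {
    // Added to enable the Postgres persistence module used in the setup above
    implementation "com.netflix.conductor:conductor-postgres-persistence:3.13.3"
}
```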

@missedone

Thanks @manan164 for the help. The issue might have been caused by a dirty cache in IntelliJ.
It works once I updated to Spring Boot 2.7.9, but I don't think the version matters, as it still works when I revert the Spring Boot version to 2.7.3.

@manan164
Contributor Author

Thanks @missedone, glad the issue is solved.

@lijia-rengage

lijia-rengage commented Apr 25, 2023

I got this error when I deployed dynomite and conductor-server on Kubernetes.
[error screenshot]
This is my config:

    # Servers.
    conductor.grpc-server.enabled=false

    # Database persistence type.
    conductor.db.type=dynomite

    # Dynomite Cluster details.
    # format is host:port:rack separated by semicolon
    conductor.redis.hosts=dynomite.conductor.svc.cluster.local:8102:us-east-1b

    # Dynomite cluster name
    conductor.redis.clusterName=dynomite.conductor.svc.cluster.local

    # Namespace for the keys stored in Dynomite/Redis
    conductor.redis.workflowNamespacePrefix=conductor

    # Namespace prefix for the dyno queues
    conductor.redis.queueNamespacePrefix=conductor_queues

    # No. of threads allocated to dyno-queues (optional)
    queues.dynomite.threads=10

    # By default with dynomite, we want the repairservice enabled
    conductor.app.workflowRepairServiceEnabled=true

    # Non-quorum port used to connect to local redis.  Used by dyno-queues.
    # When using redis directly, set this to the same port as redis server
    # For Dynomite, this is 22122 by default or the local redis-server port used by Dynomite.
    conductor.redis.queuesNonQuorumPort=22122

    # Elastic search instance indexing is enabled.
    conductor.indexing.enabled=true

    # Transport address to elasticsearch
    conductor.elasticsearch.url=http://elasticsearch.conductor.svc.cluster.local:9200

    # Name of the elasticsearch cluster
    conductor.elasticsearch.indexName=conductor
    #conductor.event-queues.amqp.queueType=classic
    #conductor.event-queues.amqp.sequentialMsgProcessing=true

    # Additional modules for metrics collection exposed via logger (optional)
    # conductor.metrics-logger.enabled=true
    # conductor.metrics-logger.reportPeriodSeconds=15

    # Additional modules for metrics collection exposed to Prometheus (optional)
    # conductor.metrics-prometheus.enabled=true
    # management.endpoints.web.exposure.include=prometheus

    # To enable Workflow/Task Summary Input/Output JSON Serialization, use the following:
    # conductor.app.summary-input-output-json-serialization.enabled=true

    # Load sample kitchen sink workflow
    loadSample=true

    conductor.elasticsearch.clusterHealthColor=yellow

Could you please give me some insights?
