[BUG] Problem registering governance engine / Asset consumer exceptions #6549

Closed
planetf1 opened this issue May 26, 2022 · 23 comments · Fixed by #6712
Assignees: planetf1
Labels: bug (Something isn't working), triage (New bug/issue which needs checking & assigning)

Comments

@planetf1 (Member)

Is there an existing issue for this?

  • I have searched the existing issues

Current Behavior

Noticed this when checking the notebooks for release.

This notebook is explicitly marked as not to be run; however, it looks to be an issue worth checking, possibly after some refactoring.

 
POST https://lab-core:9443/servers/cocoMDS2/open-metadata/access-services/governance-engine/users/erinoverview/governance-services/new/GovernanceActionService
{
    "class": "NewGovernanceServiceRequestBody",
    "qualifiedName": "ftp-governance-action-service",
    "displayName": "FTP Governance Action Service",
    "description": "Simulates FTP from an external party.",
    "connection": {
        "class": "Connection",
        "type": {
            "class": "ElementType",
            "elementTypeId": "114e9f8f-5ff3-4c32-bd37-a7eb42712253",
            "elementTypeName": "Connection",
            "elementTypeVersion": 1,
            "elementTypeDescription": "A set of properties to identify and configure a connector instance.",
            "elementOrigin": "CONFIGURATION"
        },
        "qualifiedName": "ftp-governance-action-service-implementation",
        "displayName": "FTP Governance Action Service Implementation Connection",
        "description": "Connection for governance service ftp-governance-action-service",
        "connectorType": {
            "class": "ConnectorType",
            "type": {
                "class": "ElementType",
                "elementTypeId": "954421eb-33a6-462d-a8ca-b5709a1bd0d4",
                "elementTypeName": "ConnectorType",
                "elementTypeVersion": 1,
                "elementTypeDescription": "A set of properties describing a type of connector.",
                "elementOrigin": "LOCAL_COHORT"
            },
            "guid": "1111f73d-e343-abcd-82cb-3918fed81da6",
            "qualifiedName": "ftp-governance-action-service-GovernanceServiceProvider",
            "displayName": "FTP Governance Action Service Governance Service Provider Implementation",
            "description": "Simulates FTP from an external party.",
            "connectorProviderClassName": "org.odpi.openmetadata.adapters.connectors.governanceactions.provisioning.MoveCopyFileGovernanceActionProvider"
        },
        "configurationProperties": {
            "noLineage": ""
        }
    }
}
 
Returns:
{
    "class": "GUIDResponse",
    "relatedHTTPCode": 200,
    "guid": "d594dc4d-59f6-44c8-8121-0dafdc5be544"
}
 
 
The guid for the ftp-governance-action-service governance service is: d594dc4d-59f6-44c8-8121-0dafdc5be544
 
POST https://lab-datalake:9443/servers/cocoMDS1/open-metadata/access-services/governance-engine/users/peterprofile/governance-engines/f45b9bb7-81d7-4304-a74f-b8162a3438e9/governance-services
{
    "class": "GovernanceServiceRegistrationRequestBody",
    "governanceServiceGUID": "d594dc4d-59f6-44c8-8121-0dafdc5be544",
    "requestType": "copy-file"
}
 
Returns:
{
    "class": "VoidResponse",
    "relatedHTTPCode": 500,
    "exceptionClassName": "org.odpi.openmetadata.frameworks.connectors.ffdc.PropertyServerException",
    "actionDescription": "registerGovernanceServiceWithEngine",
    "exceptionErrorMessage": "OMAG-REPOSITORY-HANDLER-500-001 An unexpected error org.odpi.openmetadata.repositoryservices.ffdc.exception.RepositoryErrorException was returned to reclassifyEntity(LatestChange) by the metadata server during registerGovernanceServiceWithEngine request for open metadata access service Governance Engine OMAS on server cocoMDS1; message was OMRS-REPOSITORY-400-056 The OMRS repository connector operation updateEntityClassification (EntityProxy) from the OMRS Enterprise Repository Services can not locate the home repository connector for instance LatestChange located in metadata collection 94645157-2a9d-4d32-9170-e6f2747a0ceb",
    "exceptionErrorMessageId": "OMAG-REPOSITORY-HANDLER-500-001",
    "exceptionErrorMessageParameters": [
        "OMRS-REPOSITORY-400-056 The OMRS repository connector operation updateEntityClassification (EntityProxy) from the OMRS Enterprise Repository Services can not locate the home repository connector for instance LatestChange located in metadata collection 94645157-2a9d-4d32-9170-e6f2747a0ceb",
        "registerGovernanceServiceWithEngine",
        "Governance Engine OMAS",
        "cocoMDS1",
        "org.odpi.openmetadata.repositoryservices.ffdc.exception.RepositoryErrorException",
        "reclassifyEntity(LatestChange)"
    ],
    "exceptionSystemAction": "The system is unable to process the request because of an internal error.",
    "exceptionUserAction": "Verify the sanity of the server.  This is probably a logic error.  If you can not work out what happened, ask the Egeria community for help."
}
 
OMAG-REPOSITORY-HANDLER-500-001 An unexpected error org.odpi.openmetadata.repositoryservices.ffdc.exception.RepositoryErrorException was returned to reclassifyEntity(LatestChange) by the metadata server during registerGovernanceServiceWithEngine request for open metadata access service Governance Engine OMAS on server cocoMDS1; message was OMRS-REPOSITORY-400-056 The OMRS repository connector operation updateEntityClassification (EntityProxy) from the OMRS Enterprise Repository Services can not locate the home repository connector for instance LatestChange located in metadata collection 94645157-2a9d-4d32-9170-e6f2747a0ceb
 * The system is unable to process the request because of an internal error.
 * Verify the sanity of the server.  This is probably a logic error.  If you can not work out what happened, ask the Egeria community for help.
Service registered as: copy-file

Expected Behavior

Governance engine registration passes.

Steps To Reproduce

Run the incomplete/unsupported notebook 'automated curation' in a coco pharma tutorial environment

Environment

- Egeria:3.8

Any Further Information?

No response

planetf1 added the bug (Something isn't working) and triage (New bug/issue which needs checking & assigning) labels May 26, 2022
planetf1 mentioned this issue May 27, 2022
@planetf1 (Member Author)

May be related to #6552.

This scenario works correctly with the local graph repository. It fails (tested twice) with the in-memory repository.

@planetf1 (Member Author) commented May 30, 2022

On a third try it passed (running step by step).

Ran again (run all) and it failed.

Similar to the CTS, this looks like it may be a timing-related issue.

@planetf1 (Member Author)

Here's a fragment of the log from an error:

Mon May 30 10:41:53 GMT 2022 cocoMDS1 Event OMRS-AUDIT-8009 The Open Metadata Repository Services (OMRS) has sent event of type Instance Event to the cohort topic cocoMDS1.openmetadata.repositoryservices.enterprise.cocoMDS1.OMRSTopic
Mon May 30 10:41:53 GMT 2022 cocoMDS4 Event OMRS-AUDIT-8009 The Open Metadata Repository Services (OMRS) has sent event of type Instance Event to the cohort topic cocoMDS4.openmetadata.repositoryservices.enterprise.cocoMDS4.OMRSTopic
Mon May 30 10:41:53 GMT 2022 cocoMDS4 Exception OMAG-REPOSITORY-HANDLER-0003 An unexpected error org.odpi.openmetadata.repositoryservices.ffdc.exception.RepositoryErrorException was returned to validateEntityGUID by the metadata server during entityOfInterest request for open metadata access service Asset Consumer OMAS on server cocoMDS4; message was OMRS-ENTERPRISE-REPOSITORY-503-001 There are no open metadata repositories available for access service Asset Consumer OMAS.
Mon May 30 10:41:53 GMT 2022 cocoMDS4 Exception OMAG-REPOSITORY-HANDLER-0003 Supplementary information: log record id b8e05a14-a609-42ce-9e52-ff6de78893e0 org.odpi.openmetadata.repositoryservices.ffdc.exception.RepositoryErrorException returned message of OMRS-ENTERPRISE-REPOSITORY-503-001 There are no open metadata repositories available for access service Asset Consumer OMAS. and stacktrace of 
OCFCheckedExceptionBase{reportedHTTPCode=503, reportingClassName='org.odpi.openmetadata.repositoryservices.enterprise.repositoryconnector.EnterpriseOMRSRepositoryConnector', reportingActionDescription='getEntityDetail', reportedErrorMessage='OMRS-ENTERPRISE-REPOSITORY-503-001 There are no open metadata repositories available for access service Asset Consumer OMAS.', reportedErrorMessageId='OMRS-ENTERPRISE-REPOSITORY-503-001', reportedErrorMessageParameters=[Asset Consumer OMAS], reportedSystemAction='The configuration for the server is set up so there is no local repository and no remote repositories connected through the open metadata repository cohorts.  This may because of one or more configuration errors.', reportedUserAction='Retry the request once the configuration is changed.', reportedCaughtException=null, reportedCaughtExceptionClassName='null', relatedProperties=null}
        at org.odpi.openmetadata.repositoryservices.enterprise.repositoryconnector.EnterpriseOMRSRepositoryConnector.getCohortConnectors(EnterpriseOMRSRepositoryConnector.java:480)
        at org.odpi.openmetadata.repositoryservices.enterprise.repositoryconnector.EnterpriseOMRSMetadataCollection.getEntityDetail(EnterpriseOMRSMetadataCollection.java:993)
        at org.odpi.openmetadata.accessservices.assetconsumer.outtopic.AssetConsumerOMRSTopicListener.entityOfInterest(AssetConsumerOMRSTopicListener.java:759)
        at org.odpi.openmetadata.accessservices.assetconsumer.outtopic.AssetConsumerOMRSTopicListener.processClassifiedEntityEvent(AssetConsumerOMRSTopicListener.java:162)
        at org.odpi.openmetadata.repositoryservices.connectors.omrstopic.OMRSTopicListenerBase.processInstanceEvent(OMRSTopicListenerBase.java:516)
        at org.odpi.openmetadata.repositoryservices.connectors.omrstopic.OMRSTopicListenerWrapper.processInstanceEvent(OMRSTopicListenerWrapper.java:165)
        at org.odpi.openmetadata.repositoryservices.connectors.omrstopic.OMRSTopicConnector.processOMRSEvent(OMRSTopicConnector.java:576)
        at org.odpi.openmetadata.repositoryservices.connectors.omrstopic.OMRSTopicConnector.lambda$processEvent$0(OMRSTopicConnector.java:513)
Mon May 30 10:41:55 GMT 2022 cocoMDS1 Event OMRS-AUDIT-8009 The Open Metadata Repository Services (OMRS) has sent event of type Instance Event to the cohort topic cocoMDS1.openmetadata.repositoryservices.enterprise.cocoMDS1.OMRSTopic
Mon May 30 10:41:55 GMT 2022 cocoMDS1 Event OMRS-AUDIT-8009 The Open Metadata Repository Services (OMRS) has sent event of type Instance Event to the cohort topic egeria.omag.openmetadata.repositoryservices.cohort.cocoCohort.OMRSTopic.instances
Mon May 30 10:41:55 GMT 2022 cocoMDS1 Exception OMAG-REPOSITORY-HANDLER-0003 An unexpected error org.odpi.openmetadata.repositoryservices.ffdc.exception.RepositoryErrorException was returned to reclassifyEntity(LatestChange) by the metadata server during registerGovernanceServiceWithEngine request for open metadata access service Governance Engine OMAS on server cocoMDS1; message was OMRS-REPOSITORY-400-056 The OMRS repository connector operation updateEntityClassification (EntityProxy) from the OMRS Enterprise Repository Services can not locate the home repository connector for instance LatestChange located in metadata collection 80e6b25c-a00c-4461-9237-7ef01bc54b9b
Mon May 30 10:41:55 GMT 2022 cocoMDS1 Exception OMAG-REPOSITORY-HANDLER-0003 Supplementary information: log record id 08e12d70-44a1-4a9f-82aa-329a6b2c3570 org.odpi.openmetadata.repositoryservices.ffdc.exception.RepositoryErrorException returned message of OMRS-REPOSITORY-400-056 The OMRS repository connector operation updateEntityClassification (EntityProxy) from the OMRS Enterprise Repository Services can not locate the home repository connector for instance LatestChange located in metadata collection 80e6b25c-a00c-4461-9237-7ef01bc54b9b and stacktrace of 
OCFCheckedExceptionBase{reportedHTTPCode=400, reportingClassName='org.odpi.openmetadata.repositoryservices.enterprise.repositoryconnector.EnterpriseOMRSRepositoryConnector', reportingActionDescription='updateEntityClassification (EntityProxy)', reportedErrorMessage='OMRS-REPOSITORY-400-056 The OMRS repository connector operation updateEntityClassification (EntityProxy) from the OMRS Enterprise Repository Services can not locate the home repository connector for instance LatestChange located in metadata collection 80e6b25c-a00c-4461-9237-7ef01bc54b9b', reportedErrorMessageId='OMRS-REPOSITORY-400-056', reportedErrorMessageParameters=[updateEntityClassification (EntityProxy), LatestChange, 80e6b25c-a00c-4461-9237-7ef01bc54b9b], reportedSystemAction='The system is unable to proceed with processing this request.', reportedUserAction='This error suggests there is a logic error in either this repository, or the home repository for the instance.  Raise a Github issue in order to get this fixed.', reportedCaughtException=null, reportedCaughtExceptionClassName='null', relatedProperties=null}
        at org.odpi.openmetadata.repositoryservices.enterprise.repositoryconnector.EnterpriseOMRSRepositoryConnector.getHomeConnector(EnterpriseOMRSRepositoryConnector.java:277)
        at org.odpi.openmetadata.repositoryservices.enterprise.repositoryconnector.EnterpriseOMRSRepositoryConnector.getHomeMetadataCollection(EnterpriseOMRSRepositoryConnector.java:179)
        at org.odpi.openmetadata.repositoryservices.enterprise.repositoryconnector.EnterpriseOMRSMetadataCollection.updateEntityClassification(EnterpriseOMRSMetadataCollection.java:3628)
Mon May 30 10:41:55 GMT 2022 cocoMDS1 Event OMRS-AUDIT-8006 Processing incoming event of type NewRelationshipEvent for instance 67388631-076e-490c-9e65-8c09355222aa from: OMRSEventOriginator{metadataCollectionId='80e6b25c-a00c-4461-9237-7ef01bc54b9b', serverName='cocoMDS2', serverType='Metadata Access Store', organizationName='Coco Pharmaceuticals'}
Mon May 30 10:41:55 GMT 2022 cocoMDS1 Event OMRS-AUDIT-8009 The Open Metadata Repository Services (OMRS) has sent event of type Instance Event to the cohort topic cocoMDS1.openmetadata.repositoryservices.enterprise.cocoMDS1.OMRSTopic
Mon May 30 10:41:55 GMT 2022 cocoMDS4 Event OMRS-AUDIT-8009 The Open Metadata Repository Services (OMRS) has sent event of type Instance Event to the cohort topic cocoMDS4.openmetadata.repositoryservices.enterprise.cocoMDS4.OMRSTopic
Mon May 30 10:41:56 GMT 2022 cocoMDS4 Event OMRS-AUDIT-8009 The Open Metadata Repository Services (OMRS) has sent event of type Instance Event to the cohort topic cocoMDS4.openmetadata.repositoryservices.enterprise.cocoMDS4.OMRSTopic
Mon May 30 10:41:56 GMT 2022 cocoMDS1 Event OMRS-AUDIT-8006 Processing incoming event of type ReclassifiedEntityEvent for instance 5ca61ce9-ecaf-43e9-9c6c-bf4d19668ef6 from: OMRSEventOriginator{metadataCollectionId='80e6b25c-a00c-4461-9237-7ef01bc54b9b', serverName='cocoMDS2', serverType='Metadata Access Store', organizationName='Coco Pharmaceuticals'}
Mon May 30 10:41:56 GMT 2022 cocoMDS1 Event OMRS-AUDIT-8009 The Open Metadata Repository Services (OMRS) has sent event of type Instance Event to the cohort topic cocoMDS1.openmetadata.repositoryservices.enterprise.cocoMDS1.OMRSTopic
Mon May 30 10:41:56 GMT 2022 cocoMDS4 Exception OMAG-REPOSITORY-HANDLER-0003 An unexpected error org.odpi.openmetadata.repositoryservices.ffdc.exception.RepositoryErrorException was returned to validateEntityGUID by the metadata server during entityOfInterest request for open metadata access service Asset Consumer OMAS on server cocoMDS4; message was OMRS-ENTERPRISE-REPOSITORY-503-001 There are no open metadata repositories available for access service Asset Consumer OMAS.
Mon May 30 10:41:56 GMT 2022 cocoMDS4 Exception OMAG-REPOSITORY-HANDLER-0003 Supplementary information: log record id f21e7929-427a-4d11-ac74-469910f5c452 org.odpi.openmetadata.repositoryservices.ffdc.exception.RepositoryErrorException returned message of OMRS-ENTERPRISE-REPOSITORY-503-001 There are no open metadata repositories available for access service Asset Consumer OMAS. and stacktrace of 
OCFCheckedExceptionBase{reportedHTTPCode=503, reportingClassName='org.odpi.openmetadata.repositoryservices.enterprise.repositoryconnector.EnterpriseOMRSRepositoryConnector', reportingActionDescription='getEntityDetail', reportedErrorMessage='OMRS-ENTERPRISE-REPOSITORY-503-001 There are no open metadata repositories available for access service Asset Consumer OMAS.', reportedErrorMessageId='OMRS-ENTERPRISE-REPOSITORY-503-001', reportedErrorMessageParameters=[Asset Consumer OMAS], reportedSystemAction='The configuration for the server is set up so there is no local repository and no remote repositories connected through the open metadata repository cohorts.  This may because of one or more configuration errors.', reportedUserAction='Retry the request once the configuration is changed.', reportedCaughtException=null, reportedCaughtExceptionClassName='null', relatedProperties=null}
        at org.odpi.openmetadata.repositoryservices.enterprise.repositoryconnector.EnterpriseOMRSRepositoryConnector.getCohortConnectors(EnterpriseOMRSRepositoryConnector.java:480)
        at org.odpi.openmetadata.repositoryservices.enterprise.repositoryconnector.EnterpriseOMRSMetadataCollection.getEntityDetail(EnterpriseOMRSMetadataCollection.java:993)
        at org.odpi.openmetadata.accessservices.assetconsumer.outtopic.AssetConsumerOMRSTopicListener.entityOfInterest(AssetConsumerOMRSTopicListener.java:759)
        at org.odpi.openmetadata.accessservices.assetconsumer.outtopic.AssetConsumerOMRSTopicListener.processReclassifiedEntityEvent(AssetConsumerOMRSTopicListener.java:396)
        at org.odpi.openmetadata.repositoryservices.connectors.omrstopic.OMRSTopicListenerBase.processInstanceEvent(OMRSTopicListenerBase.java:540)
        at org.odpi.openmetadata.repositoryservices.connectors.omrstopic.OMRSTopicListenerWrapper.processInstanceEvent(OMRSTopicListenerWrapper.java:165)
        at org.odpi.openmetadata.repositoryservices.connectors.omrstopic.OMRSTopicConnector.processOMRSEvent(OMRSTopicConnector.java:576)
        at org.odpi.openmetadata.repositoryservices.connectors.omrstopic.OMRSTopicConnector.lambda$processEvent$0(OMRSTopicConnector.java:513)
Mon May 30 10:41:57 GMT 2022 cocoMDS4 Event OMRS-AUDIT-8009 The Open Metadata Repository Services (OMRS) has sent event of type Instance Event to the cohort topic cocoMDS4.openmetadata.repositoryservices.enterprise.cocoMDS4.OMRSTopic
Mon May 30 10:42:26 GMT 2022 cocoMDS1 Event OMRS-AUDIT-8006 Processing incoming event of type NewEntityEvent for instance db02c05d-5244-4cd8-bcec-60101133c2a6 from: OMRSEventOriginator{metadataCollectionId='80e6b25c-a00c-4461-9237-7ef01bc54b9b', serverName='cocoMDS2', serverType='Metadata Access Store', organizationName='Coco Pharmaceuticals'}
Mon May 30 10:42:26 GMT 2022 cocoMDS1 Event OMRS-AUDIT-8009 The Open Metadata Repository Services (OMRS) has sent event of type Instance Event to the cohort topic cocoMDS1.openmetadata.repositoryservices.enterprise.cocoMDS1.OMRSTopic

@planetf1 (Member Author) commented May 30, 2022

Also note that Asset Consumer OMAS is reporting continual errors (more at https://gist.github.com/c069ada7a9c9e3767322a3a8bb9512a3).

Full logs at https://1drv.ms/u/s!ApVqcIDT57-fmskFDq72ALtm-VVKDA?e=4DolJ3

However, this also occurs with the data catalog notebook.

Mon May 30 11:24:00 GMT 2022 cocoMDS4 Exception OMAG-REPOSITORY-HANDLER-0003 An unexpected error org.odpi.openmetadata.repositoryservices.ffdc.exception.RepositoryErrorException was returned to validateEntityGUID by the metadata server during entityOfInterest request for open metadata access service Asset Consumer OMAS on server cocoMDS4; message was OMRS-ENTERPRISE-REPOSITORY-503-001 There are no open metadata repositories available for access service Asset Consumer OMAS.
Mon May 30 11:24:00 GMT 2022 cocoMDS4 Exception OMAG-REPOSITORY-HANDLER-0003 Supplementary information: log record id 43afdfaf-b8cf-4a2a-8d81-92cfb23e7d89 org.odpi.openmetadata.repositoryservices.ffdc.exception.RepositoryErrorException returned message of OMRS-ENTERPRISE-REPOSITORY-503-001 There are no open metadata repositories available for access service Asset Consumer OMAS. and stacktrace of 
OCFCheckedExceptionBase{reportedHTTPCode=503, reportingClassName='org.odpi.openmetadata.repositoryservices.enterprise.repositoryconnector.EnterpriseOMRSRepositoryConnector', reportingActionDescription='getEntityDetail', reportedErrorMessage='OMRS-ENTERPRISE-REPOSITORY-503-001 There are no open metadata repositories available for access service Asset Consumer OMAS.', reportedErrorMessageId='OMRS-ENTERPRISE-REPOSITORY-503-001', reportedErrorMessageParameters=[Asset Consumer OMAS], reportedSystemAction='The configuration for the server is set up so there is no local repository and no remote repositories connected through the open metadata repository cohorts.  This may because of one or more configuration errors.', reportedUserAction='Retry the request once the configuration is changed.', reportedCaughtException=null, reportedCaughtExceptionClassName='null', relatedProperties=null}
        at org.odpi.openmetadata.repositoryservices.enterprise.repositoryconnector.EnterpriseOMRSRepositoryConnector.getCohortConnectors(EnterpriseOMRSRepositoryConnector.java:480)
        at org.odpi.openmetadata.repositoryservices.enterprise.repositoryconnector.EnterpriseOMRSMetadataCollection.getEntityDetail(EnterpriseOMRSMetadataCollection.java:993)
        at org.odpi.openmetadata.commonservices.repositoryhandler.RepositoryHandler.validateEntityGUID(RepositoryHandler.java:383)
        at org.odpi.openmetadata.commonservices.repositoryhandler.RepositoryHandler.getEntityByGUID(RepositoryHandler.java:3516)
        at org.odpi.openmetadata.commonservices.generichandlers.OpenMetadataAPIGenericHandler.getEntityFromRepository(OpenMetadataAPIGenericHandler.java:9401)
        at org.odpi.openmetadata.accessservices.assetconsumer.outtopic.AssetConsumerOMRSTopicListener.entityOfInterest(AssetConsumerOMRSTopicListener.java:759)

Whilst cocoMDS4 does not have a local repository, it does have connected remote repositories.

@planetf1 (Member Author)

This seems to work in 3.10, but even in 3.8/3.9 it was intermittent.
Still seeing the unexpected errors from Asset Consumer OMAS.

planetf1 mentioned this issue Jul 1, 2022
planetf1 changed the title from "[BUG] Problem registering governance engine" to "[BUG] Problem registering governance engine / Asset consumer exceptions" Jul 1, 2022
@planetf1 (Member Author) commented Jul 1, 2022

Querying remote members for cocoMDS4:

{
    "class": "CohortMembershipListResponse",
    "relatedHTTPCode": 200,
    "offset": 0,
    "pageSize": 0
}

This also shows up in DINO with an empty cohort list:
(screenshot: DINO showing an empty cohort list for cocoMDS4, 2022-07-01 13:55)

These are all consistent with the exception seen in the log from Asset Consumer OMAS (which is enabled on cocoMDS4): it has no chance of retrieving anything.

cocoMDS5 (proxy) has the same issue.

Compare with cocoMDS2, which reports correctly: https://gist.github.com/f6534a9c706684c64b40b6af0fba7859

This means cocoMDS4 cannot serve data lake users (its role).

planetf1 self-assigned this Jul 1, 2022
@planetf1 (Member Author) commented Jul 1, 2022

The cohort membership issue may be related to #6353 -- this was a timing issue which affected CTS.

@planetf1 (Member Author) commented Jul 1, 2022

Repeated the test with jars locally: the issue does not occur.
Repeated on Rancher Desktop: the issue does occur.
Repeated in cloud: the issue does occur.
Added a delay between server starts: still occurs.
Restarted cocoMDS4 when in the 'bad' state: it comes back (no config change) with full cohort membership correct.

This supports this being a repeat of the issue mentioned above: timing differences on cloud (faster) vs local (slow mac). Additionally, Kafka/ZooKeeper differ: Strimzi vs Homebrew.

cocoMDS4 is nearly the last server to start (OLS and MDSx after it), and the last member of cocoCohort; perhaps an issue with the reconfiguration. Will look at the Kafka message sequence.

@planetf1 (Member Author) commented Jul 5, 2022

This same issue occurs with the coco labs on k8s with 3.10.

For example, if we look at the cocoMDS1 registrations:

➜  cohort cat datalake.log core.log dev.log | grep ' Cohort ' | grep ' cocoCohort ' | grep cocoMDS1  | sort
Tue Jul 05 13:23:42 GMT 2022 cocoMDS1 Startup OMRS-AUDIT-0030 Registering the Cohort to Enterprise event consumer with the cocoCohort cohort inbound event manager
Tue Jul 05 13:23:43 GMT 2022 cocoMDS1 Cohort OMRS-AUDIT-0060 Registering with open metadata repository cohort cocoCohort using metadata collection id 94c206b6-15c7-456f-a200-f2304a95f24b
Tue Jul 05 13:23:45 GMT 2022 cocoMDS2 Cohort OMRS-AUDIT-0106 Refreshing registration with open metadata repository cohort cocoCohort using metadata collection id f217c2b7-9596-43ba-924e-ea397aedee52 at the request of server cocoMDS1
Tue Jul 05 13:23:45 GMT 2022 cocoMDS2 Cohort OMRS-AUDIT-0110 A new registration request has been received for cohort cocoCohort from server cocoMDS1 that hosts metadata collection 94c206b6-15c7-456f-a200-f2304a95f24b
Tue Jul 05 13:23:45 GMT 2022 cocoMDS3 Cohort OMRS-AUDIT-0106 Refreshing registration with open metadata repository cohort cocoCohort using metadata collection id 4b99053b-a129-4160-9273-5b92cc0a83c3 at the request of server cocoMDS1
Tue Jul 05 13:23:45 GMT 2022 cocoMDS3 Cohort OMRS-AUDIT-0110 A new registration request has been received for cohort cocoCohort from server cocoMDS1 that hosts metadata collection 94c206b6-15c7-456f-a200-f2304a95f24b
Tue Jul 05 13:23:45 GMT 2022 cocoMDS5 Cohort OMRS-AUDIT-0106 Refreshing registration with open metadata repository cohort cocoCohort using metadata collection id 6169a82f-858d-4779-8d7d-a4c8bb39d7ea at the request of server cocoMDS1
Tue Jul 05 13:23:45 GMT 2022 cocoMDS5 Cohort OMRS-AUDIT-0110 A new registration request has been received for cohort cocoCohort from server cocoMDS1 that hosts metadata collection 94c206b6-15c7-456f-a200-f2304a95f24b
Tue Jul 05 13:23:45 GMT 2022 cocoMDS6 Cohort OMRS-AUDIT-0106 Refreshing registration with open metadata repository cohort cocoCohort using metadata collection id 31fa09cf-6408-4de0-b356-b44cab9c5f27 at the request of server cocoMDS1
Tue Jul 05 13:23:45 GMT 2022 cocoMDS6 Cohort OMRS-AUDIT-0110 A new registration request has been received for cohort cocoCohort from server cocoMDS1 that hosts metadata collection 94c206b6-15c7-456f-a200-f2304a95f24b
Tue Jul 05 13:23:51 GMT 2022 cocoMDS1 Cohort OMRS-AUDIT-0112 A re-registration request has been received for cohort cocoCohort from server cocoMDS3 that hosts metadata collection 4b99053b-a129-4160-9273-5b92cc0a83c3
Tue Jul 05 13:23:51 GMT 2022 cocoMDS1 Cohort OMRS-AUDIT-0112 A re-registration request has been received for cohort cocoCohort from server cocoMDS6 that hosts metadata collection 31fa09cf-6408-4de0-b356-b44cab9c5f27
Tue Jul 05 13:23:52 GMT 2022 cocoMDS1 Cohort OMRS-AUDIT-0112 A re-registration request has been received for cohort cocoCohort from server cocoMDS2 that hosts metadata collection f217c2b7-9596-43ba-924e-ea397aedee52

In this case cocoMDS1 starts up, registers, refreshes its registration, and the other servers in the cohort respond.

Interestingly, cocoMDS5 (proxy) does not seem to send out a re-registration request.

Looking then at activity around cocoMDS4:

cat datalake.log core.log dev.log | grep ' Cohort ' | grep ' cocoCohort ' | grep cocoMDS4  | sort
Tue Jul 05 13:23:46 GMT 2022 cocoMDS4 Startup OMRS-AUDIT-0030 Registering the Cohort to Enterprise event consumer with the cocoCohort cohort inbound event manager
Tue Jul 05 13:23:48 GMT 2022 cocoMDS5 Cohort OMRS-AUDIT-0106 Refreshing registration with open metadata repository cohort cocoCohort using metadata collection id 6169a82f-858d-4779-8d7d-a4c8bb39d7ea at the request of server cocoMDS4
Tue Jul 05 13:23:50 GMT 2022 cocoMDS3 Cohort OMRS-AUDIT-0106 Refreshing registration with open metadata repository cohort cocoCohort using metadata collection id 4b99053b-a129-4160-9273-5b92cc0a83c3 at the request of server cocoMDS4
Tue Jul 05 13:23:51 GMT 2022 cocoMDS2 Cohort OMRS-AUDIT-0106 Refreshing registration with open metadata repository cohort cocoCohort using metadata collection id f217c2b7-9596-43ba-924e-ea397aedee52 at the request of server cocoMDS4
Tue Jul 05 13:23:51 GMT 2022 cocoMDS6 Cohort OMRS-AUDIT-0106 Refreshing registration with open metadata repository cohort cocoCohort using metadata collection id 31fa09cf-6408-4de0-b356-b44cab9c5f27 at the request of server cocoMDS4

Quite different: there are no audit messages relating to the re-registration requests from elsewhere being received, though it is correct that cocoMDS4 does not try to register with the cohort, since it doesn't have a local metadata collection.

It does still send a registration refresh (as can be seen from the other servers responding to it).

@planetf1 (Member Author) commented Jul 5, 2022

It appears that cocoMDS4 is not responding to registration events in this case (NEW_MEMBER_IN_COHORT / processRegistrationEvent).

@planetf1 (Member Author) commented Jul 6, 2022

Checking kafka offsets:

sh-4.4$ ./kafka-consumer-groups.sh --bootstrap-server localhost:9092 --describe  --all-groups | grep cocoCohort | grep regist
0d193643-fb93-460f-a24b-53a949c6f118 egeria.omag.openmetadata.repositoryservices.cohort.cocoCohort.OMRSTopic.registration 0          25              25              0               consumer-0d193643-fb93-460f-a24b-53a949c6f118-55-7621c98a-e708-477b-b86e-17400d811db0 /172.17.15.18   consumer-0d193643-fb93-460f-a24b-53a949c6f118-55
2393ba8a-b941-4691-aad2-40ff23338ecf egeria.omag.openmetadata.repositoryservices.cohort.cocoCohort.OMRSTopic.registration 0          25              25              0               consumer-2393ba8a-b941-4691-aad2-40ff23338ecf-37-f5a003d1-7982-4613-9305-659eceb84825 /172.17.15.18   consumer-2393ba8a-b941-4691-aad2-40ff23338ecf-37
2da14d01-7588-453b-8a40-adf514d41b79 egeria.omag.openmetadata.repositoryservices.cohort.cocoCohort.OMRSTopic.registration 0          25              25              0               consumer-2da14d01-7588-453b-8a40-adf514d41b79-61-748fcf03-8eea-4ec9-a1bd-d474fabc8838 /172.17.15.18   consumer-2da14d01-7588-453b-8a40-adf514d41b79-61
495a2abb-5cf1-4a1c-9ae4-520131d517c6 egeria.omag.openmetadata.repositoryservices.cohort.cocoCohort.OMRSTopic.registration 0          25              25              0               consumer-495a2abb-5cf1-4a1c-9ae4-520131d517c6-25-2a7a5946-6830-4d17-939a-6526b8726123 /172.17.15.93   consumer-495a2abb-5cf1-4a1c-9ae4-520131d517c6-25
755d4080-0c54-46d7-80e9-06ff6b92a76d egeria.omag.openmetadata.repositoryservices.cohort.cocoCohort.OMRSTopic.registration 0          25              25              0               consumer-755d4080-0c54-46d7-80e9-06ff6b92a76d-1-fb711259-8fc3-40aa-aa9a-aa6069076ec6 /172.17.15.18   consumer-755d4080-0c54-46d7-80e9-06ff6b92a76d-1
b82030b2-db30-4654-b55b-4908ac0d53ed egeria.omag.openmetadata.repositoryservices.cohort.cocoCohort.OMRSTopic.registration 0          25              25              0               consumer-b82030b2-db30-4654-b55b-4908ac0d53ed-1-fe5c93f5-0e33-48b1-8a73-8d1b3014eee2 /172.17.15.93   consumer-b82030b2-db30-4654-b55b-4908ac0d53ed-1

As can be seen above, all consumer groups have 0 lag, so the clients are up to date with all messages.

One possible scenario could be that in cocoMDS4 we:

  • start the producer
  • send out the refresh request
  • other coco servers respond quickly (k8s, plenty of resource/nodes) -> let's say the offset goes to 25
  • start the consumer, with the offset defaulting to the current offset (25) (note: this is the point at which we actually become a Kafka consumer, not the point at which our connector thread is started)
  • see no refresh events, since the messages are 'older' than the current offset
  • cocoMDS4 remains unaware of the other servers UNTIL it is restarted, or some other cohort event occurs

This is less likely to occur with metadata servers, since more events are sent and received.

This also explains why it is only seen in some situations, and why random delays can affect the behaviour as different servers become synchronized.

Finally, once a cohort is established and the topics are all set up, offset management from then on means we don't hit this issue.
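
As a minimal sketch of the suspected race (plain Kafka client API; the broker address is a placeholder, and this is not Egeria's actual connector wiring): a consumer with a brand-new group id and Kafka's default auto.offset.reset=latest starts at the end of the partition, so anything produced before it joins the group is never delivered.

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import java.time.Duration;
import java.util.List;
import java.util.Properties;
import java.util.UUID;

public class LateConsumerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder
        // A fresh group id, so no committed offset exists for this partition yet
        props.put(ConsumerConfig.GROUP_ID_CONFIG, UUID.randomUUID().toString());
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringDeserializer");
        // Kafka's default: with no committed offset, start at the END of the partition.
        // Any re-registration events already on the topic (offsets below 25 in the
        // trace above) are therefore never delivered to this consumer.
        props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "latest");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of(
                "egeria.omag.openmetadata.repositoryservices.cohort.cocoCohort.OMRSTopic.registration"));
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(5));
            // If the other members replied before this consumer joined the group,
            // this prints 0 even though the topic holds their re-registrations.
            System.out.println("records seen: " + records.count());
        }
    }
}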

The possible fixes would include:

  • start the consumer earlier (or ->)
  • send the refresh event later
  • try to query the offset early on, and use it to set the offset later
  • start from a different offset - but where? The beginning could be a month ago! Subtracting a random number is very ad hoc. Or start scrolling backwards? Complex. There is KafkaConsumer.offsetsForTimes, which can search by timestamp (sketched below)
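
As a hedged illustration of that last option, this sketch uses KafkaConsumer.offsetsForTimes() to rewind each assigned partition to the first message at or after the connector's start time. Where startupTimeMs gets captured is an assumption; the rebalance listener is simply the standard place to seek before the first fetch.

import org.apache.kafka.clients.consumer.ConsumerRebalanceListener;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.consumer.OffsetAndTimestamp;
import org.apache.kafka.common.TopicPartition;
import java.util.Collection;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

class SeekToStartupTime {
    static void subscribeFromStartup(KafkaConsumer<String, String> consumer,
                                     String topic, long startupTimeMs) {
        consumer.subscribe(List.of(topic), new ConsumerRebalanceListener() {
            @Override
            public void onPartitionsRevoked(Collection<TopicPartition> partitions) { }

            @Override
            public void onPartitionsAssigned(Collection<TopicPartition> partitions) {
                Map<TopicPartition, Long> query = new HashMap<>();
                partitions.forEach(tp -> query.put(tp, startupTimeMs));
                // For each partition, returns the earliest offset whose timestamp is
                // >= startupTimeMs (null if nothing that recent exists yet).
                Map<TopicPartition, OffsetAndTimestamp> offsets = consumer.offsetsForTimes(query);
                offsets.forEach((tp, ot) -> {
                    if (ot != null) {
                        consumer.seek(tp, ot.offset()); // replay anything sent since startup
                    }
                });
            }
        });
    }
}

This would bound the replay window to the server's own lifetime rather than the full retention period, at the cost of one extra broker round trip during the rebalance.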

@planetf1 (Member Author) commented Jul 6, 2022

Another possible scenario: we are receiving the messages OK, but not managing the registry store correctly.

Looking at the logs, we see the cohort registry store apparently being created multiple times:

➜  cohort cat datalake.log core.log dev.log | grep 'Cohort' | grep 'Creating new cohort registry store' | grep cocoCohort
Tue Jul 05 13:23:42 GMT 2022 cocoMDS1 Cohort OCF-FILE-REGISTRY-STORE-CONNECTOR-0115 Creating new cohort registry store ./data/servers/cocoMDS1/cohorts/cocoCohort.registrystore
Tue Jul 05 13:23:43 GMT 2022 cocoMDS1 Cohort OCF-FILE-REGISTRY-STORE-CONNECTOR-0115 Creating new cohort registry store ./data/servers/cocoMDS1/cohorts/cocoCohort.registrystore
Tue Jul 05 13:23:43 GMT 2022 cocoMDS1 Cohort OCF-FILE-REGISTRY-STORE-CONNECTOR-0115 Creating new cohort registry store ./data/servers/cocoMDS1/cohorts/cocoCohort.registrystore
Tue Jul 05 13:23:46 GMT 2022 cocoMDS4 Cohort OCF-FILE-REGISTRY-STORE-CONNECTOR-0115 Creating new cohort registry store ./data/servers/cocoMDS4/cohorts/cocoCohort.registrystore
Tue Jul 05 13:23:47 GMT 2022 cocoMDS4 Cohort OCF-FILE-REGISTRY-STORE-CONNECTOR-0115 Creating new cohort registry store ./data/servers/cocoMDS4/cohorts/cocoCohort.registrystore
Tue Jul 05 13:23:47 GMT 2022 cocoMDS4 Cohort OCF-FILE-REGISTRY-STORE-CONNECTOR-0115 Creating new cohort registry store ./data/servers/cocoMDS4/cohorts/cocoCohort.registrystore
Tue Jul 05 13:27:15 GMT 2022 cocoMDS4 Cohort OCF-FILE-REGISTRY-STORE-CONNECTOR-0115 Creating new cohort registry store ./data/servers/cocoMDS4/cohorts/cocoCohort.registrystore
Tue Jul 05 13:23:13 GMT 2022 cocoMDS2 Cohort OCF-FILE-REGISTRY-STORE-CONNECTOR-0115 Creating new cohort registry store ./data/servers/cocoMDS2/cohorts/cocoCohort.registrystore
Tue Jul 05 13:23:14 GMT 2022 cocoMDS2 Cohort OCF-FILE-REGISTRY-STORE-CONNECTOR-0115 Creating new cohort registry store ./data/servers/cocoMDS2/cohorts/cocoCohort.registrystore
Tue Jul 05 13:23:14 GMT 2022 cocoMDS2 Cohort OCF-FILE-REGISTRY-STORE-CONNECTOR-0115 Creating new cohort registry store ./data/servers/cocoMDS2/cohorts/cocoCohort.registrystore
Tue Jul 05 13:23:21 GMT 2022 cocoMDS3 Cohort OCF-FILE-REGISTRY-STORE-CONNECTOR-0115 Creating new cohort registry store ./data/servers/cocoMDS3/cohorts/cocoCohort.registrystore
Tue Jul 05 13:23:21 GMT 2022 cocoMDS3 Cohort OCF-FILE-REGISTRY-STORE-CONNECTOR-0115 Creating new cohort registry store ./data/servers/cocoMDS3/cohorts/cocoCohort.registrystore
Tue Jul 05 13:23:21 GMT 2022 cocoMDS3 Cohort OCF-FILE-REGISTRY-STORE-CONNECTOR-0115 Creating new cohort registry store ./data/servers/cocoMDS3/cohorts/cocoCohort.registrystore
Tue Jul 05 13:23:26 GMT 2022 cocoMDS5 Cohort OCF-FILE-REGISTRY-STORE-CONNECTOR-0115 Creating new cohort registry store ./data/servers/cocoMDS5/cohorts/cocoCohort.registrystore
Tue Jul 05 13:23:27 GMT 2022 cocoMDS5 Cohort OCF-FILE-REGISTRY-STORE-CONNECTOR-0115 Creating new cohort registry store ./data/servers/cocoMDS5/cohorts/cocoCohort.registrystore
Tue Jul 05 13:23:27 GMT 2022 cocoMDS5 Cohort OCF-FILE-REGISTRY-STORE-CONNECTOR-0115 Creating new cohort registry store ./data/servers/cocoMDS5/cohorts/cocoCohort.registrystore
Tue Jul 05 13:23:27 GMT 2022 cocoMDS6 Cohort OCF-FILE-REGISTRY-STORE-CONNECTOR-0115 Creating new cohort registry store ./data/servers/cocoMDS6/cohorts/cocoCohort.registrystore
Tue Jul 05 13:23:27 GMT 2022 cocoMDS6 Cohort OCF-FILE-REGISTRY-STORE-CONNECTOR-0115 Creating new cohort registry store ./data/servers/cocoMDS6/cohorts/cocoCohort.registrystore
Tue Jul 05 13:23:27 GMT 2022 cocoMDS6 Cohort OCF-FILE-REGISTRY-STORE-CONNECTOR-0115 Creating new cohort registry store ./data/servers/cocoMDS6/cohorts/cocoCohort.registrystore

@planetf1 (Member Author) commented Jul 6, 2022

In fact the audit event is misleading. The audit log message is raised when we fail to read from the file, but the store is NOT created at that point. It appears that by 13:27:15 we still don't have a valid store saved; this is many minutes after any registration events would have been seen, and is confirmed by exceptions in the debug log.

Writes of the registry store can be seen in the debug logs as FileBasedRegistryStoreConnector: 'Writing cohort registry store'. This is seen for the other coco servers as the cohort configuration is learnt, but at no time is the store written for cocoMDS4; we just see continual read failures (hence the empty membership reported).

This could be cause or effect: either we have omitted to write, or the root cause remains that we are not receiving/processing the registration events.

@planetf1 (Member Author) commented Jul 6, 2022

Confirmation that cocoMDS4 sees its own refresh event, so the listener is definitely active:

Jul 5 15:16:41  lab-odpi-egeria-lab-datalake-0  egeria DEBUG  1 --- [RSTopicListener] o.o.o.r.e.OMRSEventListener              : Processing registry event: OMRSRegistryEvent{registryEventType=OMRSRegistryEventType{ordinal=2, name='RefreshRegistrationRequest', description='Requests that the other servers in the cohort send re-registration events.'}, registrationTimestamp=null, metadataCollectionName=null, remoteConnection=null, errorCode=OMRSRegistryEventErrorCode{ordinal=0, name='No Error', description='There has been no error detected and so the error code is not in use.', encoding=null}, eventTimestamp=Tue Jul 05 14:16:40 GMT 2022, eventDirection=OMRSEventDirection{ordinal=1, name='Inbound Event ', description='Event from a remote member of the open metadata repository cluster.'}, eventCategory=OMRSEventCategory{ordinal=1, name='Registry Event', description='Event used to manage the membership of the metadata repository cohort'}, eventOriginator=OMRSEventOriginator{metadataCollectionId='null', serverName='cocoMDS4', serverType='Metadata Access Point', organizationName='Coco Pharmaceuticals'}, genericErrorCode=null, errorMessage='null', targetMetadataCollectionId='null', targetRemoteConnection=null, targetTypeDefSummary=null, targetAttributeTypeDef=null, targetInstanceGUID='null', otherOrigin=null, otherMetadataCollectionId='null', otherTypeDefSummary=null, otherTypeDef=null, otherAttributeTypeDef=null, otherInstanceGUID='null'}

And that cocoMDS4 sees refreshes from other cohort members (here is cocoMDS3):

Jul 5 15:16:43  lab-odpi-egeria-lab-datalake-0  egeria DEBUG  1 --- [RSTopicListener] o.o.o.r.e.OMRSEventListener              : Processing registry event: OMRSRegistryEvent{registryEventType=OMRSRegistryEventType{ordinal=3, name='ReRegistrationEvent', description='Refreshes the other servers in the cohort with the local server's configuration.'}, registrationTimestamp=Tue Jul 05 14:16:00 GMT 2022, metadataCollectionName=Research Catalog, remoteConnection=Connection{displayName='null', description='null', connectorType=ConnectorType{displayName='REST Cohort Member Client Connector', description='Cohort member client connector that provides access to open metadata located in a remote repository via REST calls.', supportedAssetTypeName='null', expectedDataFormat='null', connectorProviderClassName='org.odpi.openmetadata.adapters.repositoryservices.rest.repositoryconnector.OMRSRESTRepositoryConnectorProvider', connectorFrameworkName='null', connectorInterfaceLanguage='null', connectorInterfaces=null, targetTechnologySource='null', targetTechnologyName='null', targetTechnologyInterfaces=null, targetTechnologyVersions=null, recognizedAdditionalProperties=null, recognizedConfigurationProperties=null, qualifiedName='Egeria:OMRSRepositoryConnector:CohortMemberClient:REST', additionalProperties=null, meanings=null, securityTags=null, searchKeywords=null, latestChange='null', latestChangeDetails=null, confidentialityGovernanceClassification=null, confidenceGovernanceClassification=null, criticalityGovernanceClassification=null, retentionGovernanceClassification=null, type=ElementType{elementTypeId='954421eb-33a6-462d-a8ca-b5709a1bd0d4', elementTypeName='ConnectorType', elementSuperTypeNames=null, elementTypeVersion=1, elementTypeDescription='A set of properties describing a type of connector.', elementSourceServer='null', elementOrigin=ElementOrigin{originCode=1, originName='Local to cohort', originDescription='The element is being maintained within one of the local cohort members. The metadata collection id is for one of the repositories in the cohort. This metadata collection id identifies the home repository for this element. 
'}, elementMetadataCollectionId='null', elementMetadataCollectionName='null', elementLicense='null', status=null, elementCreatedBy='null', elementUpdatedBy='null', elementMaintainedBy=null, elementCreateTime=null, elementUpdateTime=null, elementVersion=0, mappingProperties=null, headerVersion=0}, GUID='75ea56d1-656c-43fb-bc0c-9d35c5553b9e', URL='null', classifications=null, extendedProperties=null, headerVersion=0}, endpoint=Endpoint{displayName='null', description='null', address='https://lab-core:9443/servers/cocoMDS3', protocol='null', encryptionMethod='null', qualifiedName='null', additionalProperties=null, type=null, guid='null', url='null', classifications=null}, userId='null', encryptedPassword='null', clearPassword='null', configurationProperties=null, securedProperties=null, assetSummary='null', qualifiedName='null', additionalProperties=null, meanings=null, securityTags=null, searchKeywords=null, latestChange='null', latestChangeDetails=null, confidentialityGovernanceClassification=null, confidenceGovernanceClassification=null, criticalityGovernanceClassification=null, retentionGovernanceClassification=null, type=null, guid='null', url='null', classifications=null, extendedProperties=null, qualifiedName='null', additionalProperties=null, meanings=null, securityTags=null, searchKeywords=null, latestChange='null', latestChangeDetails=null, confidentialityGovernanceClassification=null, confidenceGovernanceClassification=null, criticalityGovernanceClassification=null, retentionGovernanceClassification=null, type=null, GUID='null', URL='null', classifications=null, extendedProperties=null, headerVersion=0}, errorCode=OMRSRegistryEventErrorCode{ordinal=0, name='No Error', description='There has been no error detected and so the error code is not in use.', encoding=null}, eventTimestamp=Tue Jul 05 14:16:42 GMT 2022, eventDirection=OMRSEventDirection{ordinal=1, name='Inbound Event ', description='Event from a remote member of the open metadata repository cluster.'}, eventCategory=OMRSEventCategory{ordinal=1, name='Registry Event', description='Event used to manage the membership of the metadata repository cohort'}, eventOriginator=OMRSEventOriginator{metadataCollectionId='58375858-3666-4ca1-a5f5-50f3fa5048bc', serverName='cocoMDS3', serverType='Metadata Access Store', organizationName='Coco Pharmaceuticals'}, genericErrorCode=null, errorMessage='null', targetMetadataCollectionId='null', targetRemoteConnection=null, targetTypeDefSummary=null, targetAttributeTypeDef=null, targetInstanceGUID='null', otherOrigin=null, otherMetadataCollectionId='null', otherTypeDefSummary=null, otherTypeDef=null, otherAttributeTypeDef=null, otherInstanceGUID='null'}

@mandy-chessell (Contributor)

Thanks for this excellent analysis. I am thinking that now we have a separate topic for the registration events, we can create more traffic on it without affecting the instance events. I was thinking that members of the cohort could send refresh registration requests at regular intervals which would ensure this timing window is eliminated and we have the basis of a heartbeat mechanism.
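
A minimal sketch of the shape such a heartbeat could take; sendRefreshRegistrationRequest is a hypothetical stand-in for whatever code currently sends the one-off refresh at server start, not an existing Egeria API.

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class CohortHeartbeatSketch {
    private final ScheduledExecutorService scheduler =
            Executors.newSingleThreadScheduledExecutor();

    // Hypothetical hook: the code that currently sends the one-off
    // RefreshRegistrationRequest when the server starts.
    private final Runnable sendRefreshRegistrationRequest;

    public CohortHeartbeatSketch(Runnable sendRefreshRegistrationRequest) {
        this.sendRefreshRegistrationRequest = sendRefreshRegistrationRequest;
    }

    // Re-send the refresh request at a fixed interval so a member that joined
    // the registration topic too late eventually sees everyone's registration,
    // and so the cohort gains a basic heartbeat.
    public void start(long intervalSeconds) {
        scheduler.scheduleAtFixedRate(sendRefreshRegistrationRequest,
                intervalSeconds, intervalSeconds, TimeUnit.SECONDS);
    }

    public void stop() {
        scheduler.shutdownNow();
    }
}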

@planetf1 (Member Author) commented Jul 6, 2022

The log message above is slightly misleading:

  • the audit log entry is reported if we get an IOException from the attempt to open the cohort registry file
  • the actual store is created by creating a new instance of CohortMembership, which should then create the file if the file connector is configured
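
A schematic sketch of the pattern described in these two bullets (class and method names are illustrative, not Egeria's actual connector code): the audit entry fires on the failed read, while the file itself would only appear on a later write.

import java.io.File;
import java.io.IOException;
import java.nio.file.Files;

class RegistryStoreReadSketch {

    static class CohortMembership { /* parsed registration records would live here */ }

    CohortMembership retrieveCohortMembership(File storeFile) {
        try {
            byte[] bytes = Files.readAllBytes(storeFile.toPath());
            return parse(bytes); // normal path: store already on disk
        } catch (IOException fileMissingOrUnreadable) {
            // The "Creating new cohort registry store" audit entry is raised here,
            // on the failed READ -- but nothing is written to disk at this point.
            auditLog("OCF-FILE-REGISTRY-STORE-CONNECTOR-0115 Creating new cohort registry store " + storeFile);
            return new CohortMembership(); // empty, in-memory only
        }
    }

    private CohortMembership parse(byte[] bytes) { return new CohortMembership(); } // stub
    private void auditLog(String message) { System.err.println(message); }          // stub
}

So repeated occurrences of the audit message indicate repeated failed reads, not repeated creations; the file only materialises when something later triggers a write.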

@planetf1 (Member Author) commented Jul 6, 2022

@mandy-chessell A heartbeat could be a useful backstop, but I think we have a timing window here; I just can't quite pin it down yet. There's a lot of log data.

@planetf1 (Member Author) commented Jul 7, 2022

Moved cocoMDS4 over to the 'factory' platform for testing (local change) to allow clearer debugging, since many (non-audit) log entries don't include the thread etc., making it tricky to track/debug the issue. The problem still occurs.

Confirmed that the RE_REGISTRATION event is being sent/received by the other servers around 10:29:30:

2022-07-07 10:29:29.789 DEBUG 1 --- [pool-3-thread-1] o.a.e.t.k.KafkaOpenMetadataEventProducer : Sending message {0}{"class":"OMRSEventV1","protocolVersionId":"OMRS V1.0","timestamp":1657189768934,"originator":{"metadataCollectionId":"43067357-62a7-4d66-a67f-99bca36241c6","serverName":"cocoMDS1","serverType":"Metadata Access Store","organizationName":"Coco Pharmaceuticals"},"eventCategory":"REGISTRY","registryEventSection":{"registryEventType":"RE_REGISTRATION_EVENT","registrationTimestamp":1657189747367,"metadataCollectionName":"Data Lake Catalog","remoteConnection":{"class":"Connection","headerVersion":0,"connectorType":{"class":"ConnectorType","headerVersion":0,"type":{"class":"ElementType","headerVersion":0,"elementOrigin":"LOCAL_COHORT","elementVersion":0,"elementTypeId":"954421eb-33a6-462d-a8ca-b5709a1bd0d4","elementTypeName":"ConnectorType","elementTypeVersion":1,"elementTypeDescription":"A set of properties describing a type of connector."},"guid":"75ea56d1-656c-43fb-bc0c-9d35c5553b9e","qualifiedName":"Egeria:OMRSRepositoryConnector:CohortMemberClient:REST","displayName":"REST Cohort Member Client Connector","description":"Cohort member client connector that provides access to open metadata located in a remote repository via REST calls.","connectorProviderClassName":"org.odpi.openmetadata.adapters.repositoryservices.rest.repositoryconnector.OMRSRESTRepositoryConnectorProvider"},"endpoint":{"class":"Endpoint","headerVersion":0,"address":"https://lab-datalake:9443/servers/cocoMDS1"}}}}
2022-07-07 10:29:30.787 DEBUG 1 --- [pool-2-thread-1] o.a.e.t.k.KafkaOpenMetadataEventConsumer : Received message: {"class":"OMRSEventV1","protocolVersionId":"OMRS V1.0","timestamp":1657189768826,"originator":{"metadataCollectionId":"87166124-a402-4b5a-a056-04c5134f0afd","serverName":"cocoMDS6","serverType":"Metadata Access Store","organizationName":"Coco Pharmaceuticals"},"eventCategory":"REGISTRY","registryEventSection":{"registryEventType":"RE_REGISTRATION_EVENT","registrationTimestamp":1657189717657,"metadataCollectionName":"Manufacturing Catalog","remoteConnection":{"class":"Connection","headerVersion":0,"connectorType":{"class":"ConnectorType","headerVersion":0,"type":{"class":"ElementType","headerVersion":0,"elementOrigin":"LOCAL_COHORT","elementVersion":0,"elementTypeId":"954421eb-33a6-462d-a8ca-b5709a1bd0d4","elementTypeName":"ConnectorType","elementTypeVersion":1,"elementTypeDescription":"A set of properties describing a type of connector."},"guid":"75ea56d1-656c-43fb-bc0c-9d35c5553b9e","qualifiedName":"Egeria:OMRSRepositoryConnector:CohortMemberClient:REST","displayName":"REST Cohort Member Client Connector","description":"Cohort member client connector that provides access to open metadata located in a remote repository via REST calls.","connectorProviderClassName":"org.odpi.openmetadata.adapters.repositoryservices.rest.repositoryconnector.OMRSRESTRepositoryConnectorProvider"},"endpoint":{"class":"Endpoint","headerVersion":0,"address":"https://lab-core:9443/servers/cocoMDS6"}}}}

And cocoMDS4 completes joining the consumer group well before this:

2022-07-07 10:29:20.773 DEBUG 1 --- [pool-8-thread-1] org.apache.kafka.clients.NetworkClient   : [Consumer clientId=consumer-8f032c6b-034e-47de-bb34-ae827c59edb0-7, groupId=8f032c6b-034e-47de-bb34-ae827c59edb0] Received JOIN_GROUP response from node 2147483647 for request with header RequestHeader(apiKey=JOIN_GROUP, apiVersion=7, clientId=consumer-8f032c6b-034e-47de-bb34-ae827c59edb0-7, correlationId=5): JoinGroupResponseData(throttleTimeMs=0, errorCode=0

(This occurs for multiple groups up until 10:29:28, for the different topics we consume, but the point is that all are prior to the above, so the offset should be correct, i.e. prior to the refresh.)

Yet, although the listener appears to start OK for cocoMDS4, no incoming events are reported:

➜  try5 cat datalake.log | grep "OMRSEventListener" | cut -c1-132
2022-07-07 10:29:05.428 DEBUG 1 --- [nio-9443-exec-8] o.o.o.r.e.OMRSEventListener              : Initialize OMRS Event Listener
2022-07-07 10:29:28.906 DEBUG 1 --- [RSTopicListener] o.o.o.r.e.OMRSEventListener              : Processing registry event: OMRSRegi
2022-07-07 10:29:29.893 DEBUG 1 --- [RSTopicListener] o.o.o.r.e.OMRSEventListener              : Processing registry event: OMRSRegi
2022-07-07 10:29:30.882 DEBUG 1 --- [RSTopicListener] o.o.o.r.e.OMRSEventListener              : Processing registry event: OMRSRegi
2022-07-07 10:29:31.846 DEBUG 1 --- [RSTopicListener] o.o.o.r.e.OMRSEventListener              : Processing registry event: OMRSRegi
2022-07-07 10:29:32.155 DEBUG 1 --- [RSTopicListener] o.o.o.r.e.OMRSEventListener              : Processing registry event: OMRSRegi
➜  try5 cat factory.log | grep "OMRSEventListener" | cut -c1-132 
2022-07-07 10:29:26.563 DEBUG 1 --- [nio-9443-exec-5] o.o.o.r.e.OMRSEventListener              : Initialize OMRS Event Listener
➜  try5 

@planetf1 (Member Author) commented Jul 7, 2022

Further debugging shows the listener is working as expected (aside: the default poll is 1s, which seems short, but is configurable).

As per the initial hunch, there is a Kafka API call to establish the start point for reading messages.

Here's some activity from cocoMDS4:

2022-07-07 12:33:48.609 DEBUG 1 --- [pool-2-thread-1] o.a.k.c.c.internals.ConsumerCoordinator  : [Consumer clientId=consumer-25aa73b7-cda5-49e4-a658-156ea40b8d2c-1, groupId=25aa73b7-cda5-49e4-a658-156ea40b8d2c] Fetching committed offsets for partitions: [egeria.omag.openmetadata.repositoryservices.cohort.cocoCohort.OMRSTopic.registration-0]

2022-07-07 12:33:48.618  INFO 1 --- [pool-2-thread-1] o.a.k.c.c.internals.ConsumerCoordinator  : [Consumer clientId=consumer-25aa73b7-cda5-49e4-a658-156ea40b8d2c-1, groupId=25aa73b7-cda5-49e4-a658-156ea40b8d2c] Found no committed offset for partition egeria.omag.openmetadata.repositoryservices.cohort.cocoCohort.OMRSTopic.registration-0

2022-07-07 12:33:48.634 DEBUG 1 --- [pool-2-thread-1] org.apache.kafka.clients.NetworkClient   : [Consumer clientId=consumer-25aa73b7-cda5-49e4-a658-156ea40b8d2c-1, groupId=25aa73b7-cda5-49e4-a658-156ea40b8d2c] Received LIST_OFFSETS response from node 0 for request with header RequestHeader(apiKey=LIST_OFFSETS, apiVersion=7, clientId=consumer-25aa73b7-cda5-49e4-a658-156ea40b8d2c-1, correlationId=10): ListOffsetsResponseData(throttleTimeMs=0, topics=[ListOffsetsTopicResponse(name='egeria.omag.openmetadata.repositoryservices.cohort.cocoCohort.OMRSTopic.registration', partitions=[ListOffsetsPartitionResponse(partitionIndex=0, errorCode=0, oldStyleOffsets=[], timestamp=-1, offset=25, leaderEpoch=0)])])
2022-07-07 12:33:48.635 DEBUG 1 --- [pool-2-thread-1] o.a.k.c.consumer.internals.Fetcher       : [Consumer clientId=consumer-25aa73b7-cda5-49e4-a658-156ea40b8d2c-1, groupId=25aa73b7-cda5-49e4-a658-156ea40b8d2c] Handling ListOffsetResponse response for egeria.omag.openmetadata.repositoryservices.cohort.cocoCohort.OMRSTopic.registration-0. Fetched offset 25, timestamp -1
2022-07-07 12:33:48.639 DEBUG 1 --- [pool-2-thread-1] org.apache.kafka.clients.Metadata        : [Consumer clientId=consumer-25aa73b7-cda5-49e4-a658-156ea40b8d2c-1, groupId=25aa73b7-cda5-49e4-a658-156ea40b8d2c] Not replacing existing epoch 0 with new epoch 0 for partition egeria.omag.openmetadata.repositoryservices.cohort.cocoCohort.OMRSTopic.registration-0
2022-07-07 12:33:48.639  INFO 1 --- [pool-2-thread-1] o.a.k.c.c.internals.SubscriptionState    : [Consumer clientId=consumer-25aa73b7-cda5-49e4-a658-156ea40b8d2c-1, groupId=25aa73b7-cda5-49e4-a658-156ea40b8d2c] Resetting offset for partition egeria.omag.openmetadata.repositoryservices.cohort.cocoCohort.OMRSTopic.registration-0 to position FetchPosition{offset=25, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[lab-strimzi-kafka-0.lab-strimzi-kafka-brokers.lab.svc:9092 (id: 0 rack: null)], epoch=0}}.
2022-07-07 12:33:48.646 DEBUG 1 --- [pool-2-thread-1] o.a.k.c.consumer.internals.Fetcher       : [Consumer clientId=consumer-25aa73b7-cda5-49e4-a658-156ea40b8d2c-1, groupId=25aa73b7-cda5-49e4-a658-156ea40b8d2c] Added READ_UNCOMMITTED fetch request for partition egeria.omag.openmetadata.repositoryservices.cohort.cocoCohort.OMRSTopic.registration-0 at position FetchPosition{offset=25, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[lab-strimzi-kafka-0.lab-strimzi-kafka-brokers.lab.svc:9092 (id: 0 rack: null)], epoch=0}} to node lab-strimzi-kafka-0.lab-strimzi-kafka-brokers.lab.svc:9092 (id: 0 rack: null)

So here we:

  • find there is no saved offset (expected: a new consumer group for this partition)
  • check the current offset and determine it is 25 (THIS IS AFTER THE REFRESH EVENTS!!)
  • set the offset and start reading from there
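
A small diagnostic sketch matching that sequence: after the group join completes, compare where polling will start (position) with the partition end offset. With no committed offset and a 'latest' reset the two coincide, meaning every earlier event, including the re-registration replies, is skipped.

import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;
import java.time.Duration;
import java.util.Map;

class StartPositionCheck {
    static void logStartPositions(KafkaConsumer<String, String> consumer) {
        // Poll until the group join completes and partitions are assigned
        // (records returned by these polls would need handling in real code).
        while (consumer.assignment().isEmpty()) {
            consumer.poll(Duration.ofMillis(100));
        }
        Map<TopicPartition, Long> ends = consumer.endOffsets(consumer.assignment());
        for (TopicPartition tp : consumer.assignment()) {
            long position = consumer.position(tp); // where fetching will begin (25 in the trace)
            long end = ends.get(tp);
            if (position >= end && end > 0) {
                System.out.printf(
                    "%s: starting at %d with end offset %d -- earlier events will never be replayed%n",
                    tp, position, end);
            }
        }
    }
}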

Where we detect the offsets and then do receive messages, we see behaviour more like:

2022-07-07 12:33:28.404 DEBUG 1 --- [pool-2-thread-1] o.a.k.c.c.internals.ConsumerCoordinator  : [Consumer clientId=consumer-a16a8b62-31db-4f80-ba90-88312dac01b6-1, groupId=a16a8b62-31db-4f80-ba90-88312dac01b6] Fetching committed offsets for partitions: [egeria.omag.openmetadata.repositoryservices.cohort.cocoCohort.OMRSTopic.registration-0]

2022-07-07 12:33:28.414  INFO 1 --- [pool-2-thread-1] o.a.k.c.c.internals.ConsumerCoordinator  : [Consumer clientId=consumer-a16a8b62-31db-4f80-ba90-88312dac01b6-1, groupId=a16a8b62-31db-4f80-ba90-88312dac01b6] Found no committed offset for partition egeria.omag.openmetadata.repositoryservices.cohort.cocoCohort.OMRSTopic.registration-0

2022-07-07 12:33:28.451 DEBUG 1 --- [pool-2-thread-1] org.apache.kafka.clients.NetworkClient   : [Consumer clientId=consumer-a16a8b62-31db-4f80-ba90-88312dac01b6-1, groupId=a16a8b62-31db-4f80-ba90-88312dac01b6] Received LIST_OFFSETS response from node 0 for request with header RequestHeader(apiKey=LIST_OFFSETS, apiVersion=7, clientId=consumer-a16a8b62-31db-4f80-ba90-88312dac01b6-1, correlationId=10): ListOffsetsResponseData(throttleTimeMs=0, topics=[ListOffsetsTopicResponse(name='egeria.omag.openmetadata.repositoryservices.cohort.cocoCohort.OMRSTopic.registration', partitions=[ListOffsetsPartitionResponse(partitionIndex=0, errorCode=0, oldStyleOffsets=[], timestamp=-1, offset=18, leaderEpoch=0)])])
2022-07-07 12:33:28.476 DEBUG 1 --- [pool-2-thread-1] o.a.k.c.consumer.internals.Fetcher       : [Consumer clientId=consumer-a16a8b62-31db-4f80-ba90-88312dac01b6-1, groupId=a16a8b62-31db-4f80-ba90-88312dac01b6] Handling ListOffsetResponse response for egeria.omag.openmetadata.repositoryservices.cohort.cocoCohort.OMRSTopic.registration-0. Fetched offset 18, timestamp -1
2022-07-07 12:33:28.497 DEBUG 1 --- [pool-2-thread-1] org.apache.kafka.clients.Metadata        : [Consumer clientId=consumer-a16a8b62-31db-4f80-ba90-88312dac01b6-1, groupId=a16a8b62-31db-4f80-ba90-88312dac01b6] Not replacing existing epoch 0 with new epoch 0 for partition egeria.omag.openmetadata.repositoryservices.cohort.cocoCohort.OMRSTopic.registration-0
2022-07-07 12:33:28.498  INFO 1 --- [pool-2-thread-1] o.a.k.c.c.internals.SubscriptionState    : [Consumer clientId=consumer-a16a8b62-31db-4f80-ba90-88312dac01b6-1, groupId=a16a8b62-31db-4f80-ba90-88312dac01b6] Resetting offset for partition egeria.omag.openmetadata.repositoryservices.cohort.cocoCohort.OMRSTopic.registration-0 to position FetchPosition{offset=18, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[lab-strimzi-kafka-0.lab-strimzi-kafka-brokers.lab.svc:9092 (id: 0 rack: null)], epoch=0}}.
2022-07-07 12:33:28.550 DEBUG 1 --- [pool-2-thread-1] o.a.k.c.consumer.internals.Fetcher       : [Consumer clientId=consumer-a16a8b62-31db-4f80-ba90-88312dac01b6-1, groupId=a16a8b62-31db-4f80-ba90-88312dac01b6] Added READ_UNCOMMITTED fetch request for partition egeria.omag.openmetadata.repositoryservices.cohort.cocoCohort.OMRSTopic.registration-0 at position FetchPosition{offset=18, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[lab-strimzi-kafka-0.lab-strimzi-kafka-brokers.lab.svc:9092 (id: 0 rack: null)], epoch=0}} to node lab-strimzi-kafka-0.lab-strimzi-kafka-brokers.lab.svc:9092 (id: 0 rack: null)
2022-07-07 12:33:28.785 DEBUG 1 --- [pool-2-thread-1] o.a.k.c.consumer.internals.Fetcher       : [Consumer clientId=consumer-a16a8b62-31db-4f80-ba90-88312dac01b6-1, groupId=a16a8b62-31db-4f80-ba90-88312dac01b6] Fetch READ_UNCOMMITTED at offset 18 for partition egeria.omag.openmetadata.repositoryservices.cohort.cocoCohort.OMRSTopic.registration-0 returned fetch data PartitionData(partitionIndex=0, errorCode=0, highWatermark=19, lastStableOffset=19, logStartOffset=0, divergingEpoch=EpochEndOffset(epoch=-1, endOffset=-1), currentLeader=LeaderIdAndEpoch(leaderId=-1, leaderEpoch=-1), snapshotId=SnapshotId(endOffset=-1, epoch=-1), abortedTransactions=null, preferredReadReplica=-1, records=MemoryRecords(size=1506, buffer=java.nio.HeapByteBuffer[pos=0 lim=1506 cap=1509]))
2022-07-07 12:33:28.787 DEBUG 1 --- [pool-2-thread-1] o.a.k.c.consumer.internals.Fetcher       : [Consumer clientId=consumer-a16a8b62-31db-4f80-ba90-88312dac01b6-1, groupId=a16a8b62-31db-4f80-ba90-88312dac01b6] Added READ_UNCOMMITTED fetch request for partition egeria.omag.openmetadata.repositoryservices.cohort.cocoCohort.OMRSTopic.registration-0 at position FetchPosition{offset=19, offsetEpoch=Optional[0], currentLeader=LeaderAndEpoch{leader=Optional[lab-strimzi-kafka-0.lab-strimzi-kafka-brokers.lab.svc:9092 (id: 0 rack: null)], epoch=0}} to node lab-strimzi-kafka-0.lab-strimzi-kafka-brokers.lab.svc:9092 (id: 0 rack: null)
2022-07-07 12:33:28.787 DEBUG 1 --- [pool-2-thread-1] o.a.kafka.clients.FetchSessionHandler    : [Consumer clientId=consumer-a16a8b62-31db-4f80-ba90-88312dac01b6-1, groupId=a16a8b62-31db-4f80-ba90-88312dac01b6] Built incremental fetch (sessionId=1192376798, epoch=1) for node 0. Added 0 partition(s), altered 1 partition(s), removed 0 partition(s), replaced 0 partition(s) out of 1 partition(s)

This coincides with our own debugging of incoming messages.

When we compare the times for cocoMDS4, which settles on offset 25, we see that our application thinks the listener is initialized around:

2022-07-07 12:33:41.677 DEBUG 1 --- [nio-9443-exec-3] o.o.o.r.e.OMRSEventListener              : Initialize OMRS Event Listener

So what we're seeing here is a timing window. Whilst we think the listener is active immediately, it is not actually ready until many seconds later -- since it's only then that the offset is figured out and set. A subtle Kafka behaviour, noting we also have multiple threads at work here.

This is consistent with my earlier theory, but provides hard evidence that the issue is the starting offset.

Setting the offset based on the time we 'decided' to start the Kafka topic connector would be better (and it doesn't matter if this takes effect 30, 60 or 120s later). But this should only be done if there isn't already a committed offset. If there is, we should just continue where we left off, as the server may have been restarted (or another replica may be running). A sketch of this decision follows.
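Illustratively (the method name ensureStartPosition, the parameter startTime and the surrounding wiring are my own, not the actual implementation -- only the KafkaConsumer API calls are real):

    import java.util.Collections;
    import java.util.Map;

    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.clients.consumer.OffsetAndMetadata;
    import org.apache.kafka.clients.consumer.OffsetAndTimestamp;
    import org.apache.kafka.common.TopicPartition;

    // Illustrative only: choose the start position for one partition.
    void ensureStartPosition(KafkaConsumer<String, String> consumer,
                             TopicPartition partition,
                             long startTime)   // epoch millis when we decided to start
    {
        Map<TopicPartition, OffsetAndMetadata> committed =
                consumer.committed(Collections.singleton(partition));
        if (committed.get(partition) == null)
        {
            // No committed offset for this group: rewind to the first message
            // at or after startTime, rather than trusting 'latest'.
            OffsetAndTimestamp target =
                    consumer.offsetsForTimes(Map.of(partition, startTime)).get(partition);
            if (target != null)
            {
                consumer.seek(partition, target.offset());
            }
        }
        // Otherwise a committed offset exists, so we carry on from where we
        // left off (server restart, or another replica in the same group).
    }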

Another alternative would be to work out at what point in our Kafka interaction we can be sure the offset is set, and ensure no events are sent out until that point - but this would need extra synchronization logic (we have separate threads for the producer/consumer etc.) and would likely be more complex.

@planetf1

planetf1 commented Jul 7, 2022

To review the actual messages we can use:

./kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic egeria.omag.openmetadata.repositoryservices.cohort.cocoCohort.OMRSTopic.registration --property print.timestamp=true --from-beginning

See https://gist.github.com/2ddfb6b242741ea5b6973336c53b3097 for all registration events

Looking at the REFRESH_REGISTRATION_REQUEST (coming from our producer), we see it is:

CreateTime:1657197223490        {"class":"OMRSEventV1","protocolVersionId":"OMRS V1.0","timestamp":1657197222508,"originator":{"serverName":"cocoMDS4","serverType":"Metadata Access Point","organizationName":"Coco Pharmaceuticals"},"eventCategory":"REGISTRY","registryEventSection":{"registryEventType":"REFRESH_REGISTRATION_REQUEST"}}

That time equates to Thursday, 7 July 2022 13:33:43.490 [GMT+01:00], as per https://www.epochconverter.com/timezones?q=1657197223490
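The same conversion can be checked with a couple of lines of Java (a throwaway sketch, using the CreateTime value from the event above):

    import java.time.Instant;
    import java.time.ZoneId;

    public class EpochCheck
    {
        public static void main(String[] args)
        {
            // CreateTime of the REFRESH_REGISTRATION_REQUEST event
            System.out.println(Instant.ofEpochMilli(1657197223490L)
                                      .atZone(ZoneId.of("Europe/London")));
            // prints 2022-07-07T13:33:43.490+01:00[Europe/London]
        }
    }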

The RESPONSES from the other servers in the cohort begin shortly after this, i.e.:

CreateTime:1657197224005        {"class":"OMRSEventV1","protocolVersionId":"OMRS V1.0","timestamp":1657197223763,"originator":{"metadataCollectionId":"2f901cba-0f52-4d04-9f4e-69586f5f8d06","serverName":"cocoMDS6","serverType":"Metadata Access Store","organizationName":"Coco Pharmaceuticals"},"eventCategory":"REGISTRY","registryEventSection":{"registryEventType":"RE_REGISTRATION_EVENT","registrationTimestamp":1657197173016,"metadataCollectionName":"Manufacturing Catalog","remoteConnection":{"class":"Connection","headerVersion":0,"connectorType":{"class":"ConnectorType","headerVersion":0,"type":{"class":"ElementType","headerVersion":0,"elementOrigin":"LOCAL_COHORT","elementVersion":0,"elementTypeId":"954421eb-33a6-462d-a8ca-b5709a1bd0d4","elementTypeName":"ConnectorType","elementTypeVersion":1,"elementTypeDescription":"A set of properties describing a type of connector."},"guid":"75ea56d1-656c-43fb-bc0c-9d35c5553b9e","qualifiedName":"Egeria:OMRSRepositoryConnector:CohortMemberClient:REST","displayName":"REST Cohort Member Client Connector","description":"Cohort member client connector that provides access to open metadata located in a remote repository via REST calls.","connectorProviderClassName":"org.odpi.openmetadata.adapters.repositoryservices.rest.repositoryconnector.OMRSRESTRepositoryConnectorProvider"},"endpoint":{"class":"Endpoint","headerVersion":0,"address":"https://lab-core:9443/servers/cocoMDS6"}}}}

This is Thursday, 7 July 2022 13:33:43.763 GMT+01:00 - but since Kafka set our listener to start at an offset that was only current as of 12:33:48, it's clear why we never see these events.

@planetf1

planetf1 commented Jul 8, 2022

Currently testing a fix...

@planetf1

planetf1 commented Jul 8, 2022

On testing, I found that when I tried to query the allocated partitions (so I could validate/set the offset), the set was empty.

   KafkaConsumer<String, String> consumer;
   .....
   // the set of partitions currently assigned to this consumer -
   // empty at this point, because no poll() has happened yet
   Set<TopicPartition> partitions = consumer.assignment();

The reason is that automatic partition assignment does not occur until after the first poll() on a topic. Prior to Kafka 2.4, a poll(0) could be used to force a metadata refresh without consuming any events; however this is not possible in later versions of Kafka.
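This can be demonstrated with a trivial check (a sketch; it assumes consumer is already constructed with a group id, and topicName is a hypothetical placeholder):

   consumer.subscribe(Collections.singletonList(topicName));
   System.out.println(consumer.assignment());  // [] - nothing assigned yet
   consumer.poll(Duration.ofSeconds(5));       // joins the group; assignment arrives once the rebalance completes
   System.out.println(consumer.assignment());  // now lists the assigned partitions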

The two options are therefore to either:

  • do the poll(), save the records, adjust the offset, and skip over any already-processed records
  • use ConsumerRebalanceListener.onPartitionsAssigned(), which is guaranteed to be called (a) after partition assignment and (b) before any messages are read - see the sketch below
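A minimal sketch of the second option, wiring the ensureStartPosition() logic from the earlier comment into the callback (the class name and wiring are hypothetical; the actual change is in the linked PR):

    import java.util.Collection;
    import java.util.List;

    import org.apache.kafka.clients.consumer.ConsumerRebalanceListener;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.common.TopicPartition;

    class StartTimeSeekListener implements ConsumerRebalanceListener
    {
        private final KafkaConsumer<String, String> consumer;
        private final long                          startTime;

        StartTimeSeekListener(KafkaConsumer<String, String> consumer, long startTime)
        {
            this.consumer  = consumer;
            this.startTime = startTime;
        }

        @Override
        public void onPartitionsRevoked(Collection<TopicPartition> partitions)
        {
            // nothing to do in this sketch
        }

        @Override
        public void onPartitionsAssigned(Collection<TopicPartition> partitions)
        {
            // runs after assignment and before any records are handed to the
            // application, so it is safe to adjust the fetch position here
            for (TopicPartition partition : partitions)
            {
                ensureStartPosition(consumer, partition, startTime);
            }
        }
    }

The listener is registered at subscribe time, so the callback fires during the first poll():

    consumer.subscribe(List.of(topicName), new StartTimeSeekListener(consumer, startTime));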

@planetf1

planetf1 commented Jul 8, 2022

The updated fix now seems to work - raising a PR.

i.e. for the same scenario we now rewind back to the desired start time - and find those missing registration events. The result is that cohort membership is now correct for cocoMDS4:

2022-07-08 12:12:23.901  INFO 1 --- [pool-2-thread-1] o.a.e.t.k.KafkaOpenMetadataEventConsumer : Received initial rebalance event
2022-07-08 12:12:23.937  INFO 1 --- [pool-2-thread-1] o.a.e.t.k.KafkaOpenMetadataEventConsumer : Seeking to 19 for partition 0 and topic egeria.omag.openmetadata.repositoryservices.cohort.cocoCohort.OMRSTopic.registration as current offset 25 is too late
2022-07-08 12:12:23.973 DEBUG 1 --- [pool-2-thread-1] o.a.e.t.k.KafkaOpenMetadataEventConsumer : Found records: 6
2022-07-08 12:12:23.973 DEBUG 1 --- [pool-2-thread-1] o.a.e.t.k.KafkaOpenMetadataEventConsumer : Received message: {"class":"OMRSEventV1","protocolVersionId":"OMRS V1.0","timestamp":1657282337878,"originator":{"serverName":"cocoMDS4","serverType":"Metadata Access Point","organizationName":"Coco Pharmaceuticals"},"eventCategory":"REGISTRY","registryEventSection":{"registryEventType":"REFRESH_REGISTRATION_REQUEST"}}

planetf1 added a commit to planetf1/egeria that referenced this issue Jul 8, 2022
Signed-off-by: Nigel Jones <nigel.l.jones+git@gmail.com>
planetf1 added a commit to planetf1/egeria that referenced this issue Jul 8, 2022
Signed-off-by: Nigel Jones <nigel.l.jones+git@gmail.com>
planetf1 added a commit to planetf1/egeria that referenced this issue Jul 11, 2022
Signed-off-by: Nigel Jones <nigel.l.jones+git@gmail.com>
planetf1 added a commit to planetf1/egeria that referenced this issue Jul 11, 2022
…ts on topic ->null

Signed-off-by: Nigel Jones <nigel.l.jones+git@gmail.com>
planetf1 added a commit to planetf1/egeria that referenced this issue Jul 11, 2022
Signed-off-by: Nigel Jones <nigel.l.jones+git@gmail.com>
planetf1 added a commit to planetf1/egeria that referenced this issue Jul 11, 2022
Signed-off-by: Nigel Jones <nigel.l.jones+git@gmail.com>
planetf1 added a commit to planetf1/egeria that referenced this issue Jul 11, 2022
Signed-off-by: Nigel Jones <nigel.l.jones+git@gmail.com>
planetf1 added a commit to planetf1/egeria that referenced this issue Jul 11, 2022
Signed-off-by: Nigel Jones <nigel.l.jones+git@gmail.com>
planetf1 added a commit to planetf1/egeria that referenced this issue Jul 11, 2022
…ts on topic ->null

Signed-off-by: Nigel Jones <nigel.l.jones+git@gmail.com>
planetf1 added a commit to planetf1/egeria that referenced this issue Jul 11, 2022
Signed-off-by: Nigel Jones <nigel.l.jones+git@gmail.com>
planetf1 added a commit to planetf1/egeria that referenced this issue Jul 11, 2022
Signed-off-by: Nigel Jones <nigel.l.jones+git@gmail.com>
planetf1 added a commit that referenced this issue Jul 11, 2022
#6549 Fix kafka consumer initial seek position