Storage-MQ listen to DatasourceImportSuccess Event #331

Open
georg-schwarz opened this issue Apr 22, 2021 · 2 comments

@georg-schwarz
Member

Datasource imports should be persisted even without any pipeline execution.

@f3l1x98
Contributor

f3l1x98 commented Nov 11, 2021

@georg-schwarz I am not sure if I understand this correctly, but isn't the data already saved in the adapter?

DataImport.MetaData executeImport(Long id, RuntimeParameters runtimeParameters)
    throws DatasourceNotFoundException, ImporterParameterException, InterpreterParameterException, IOException {
  Datasource datasource = getDatasource(id);
  // Start with a FAILED placeholder DataImport; it is reused in the error path below.
  DataImport dataImport = new DataImport(datasource, "", ValidationMetaData.HealthStatus.FAILED);
  Validator validator = new JsonSchemaValidator();
  try {
    AdapterConfig adapterConfig = datasource.toAdapterConfig(runtimeParameters);
    DataImportResponse executionResult = adapter.executeJob(adapterConfig);
    String responseData = executionResult.getData();
    dataImport = new DataImport(datasource, responseData);
    dataImport.setValidationMetaData(validator.validate(dataImport));
    // The imported data is persisted in the adapter's own database ...
    DataImport savedDataImport = dataImportRepository.save(dataImport);
    // ... and an import-success event carrying the data is published via AMQP.
    amqpPublisher.publishImportSuccess(id, savedDataImport.getData());
    return savedDataImport.getMetaData();
  } catch (ImporterParameterException | InterpreterParameterException | IOException e) {
    dataImport.setErrorMessages(new String[] { e.getMessage() });
    handleImportFailed(datasource, dataImport, e);
    throw e;
  }
}

@georg-schwarz
Member Author

Yes it is, but the adapter does not offer query functionality on the data. What if the user does not need a pipeline?

Right now the Storage service is not there yet, but the idea is that everything in the storage-db will be queryable via GraphQL or a RESTful HTTP API.

If we want that, there are two options:
(1) Create a default pipeline for every datasource, so that it is persisted in the storage db
(2) The storage-mq also supports storing data from the datasource (adapter) service

This ticket describes the second option. But I guess it is not high priority right now, and we should first discuss whether we really want this to be implemented.
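
As a minimal sketch of option (2), a consumer in storage-mq could subscribe to the import-success event the adapter already publishes and write the payload into storage-db. This assumes a Spring AMQP consumer; the queue name, the DatasourceImportSuccessEvent payload, and the StorageRepository interface are illustrative assumptions and would have to match whatever the adapter's amqpPublisher actually sends, not the project's existing API:

import org.springframework.amqp.rabbit.annotation.RabbitListener;
import org.springframework.stereotype.Component;

@Component
public class DatasourceImportSuccessListener {

  // Hypothetical persistence abstraction backed by storage-db.
  public interface StorageRepository {
    void save(Long datasourceId, String data);
  }

  // Assumed event payload, mirroring publishImportSuccess(id, data) in the adapter snippet above.
  public record DatasourceImportSuccessEvent(Long datasourceId, String data) {}

  private final StorageRepository storageRepository;

  public DatasourceImportSuccessListener(StorageRepository storageRepository) {
    this.storageRepository = storageRepository;
  }

  // The queue name is an assumption for illustration; the real exchange/routing-key layout
  // would need to match the adapter's AmqpPublisher configuration.
  @RabbitListener(queues = "storage-mq.datasource-import-success")
  public void onDatasourceImportSuccess(DatasourceImportSuccessEvent event) {
    // Persist the raw import so it becomes queryable via storage-mq without any pipeline run.
    storageRepository.save(event.datasourceId(), event.data());
  }
}

With something like this in place, a datasource import would land in storage-db directly, and the default-pipeline workaround from option (1) would not be needed.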
