
Drive reindex using client side checkpointing for improved throughput #2195

Closed
punktilious opened this issue Apr 2, 2021 · 12 comments
Labels
enhancement New feature or request P2 Priority 2 - Should Have showcase Used to Identify End-of-Sprint Demos


@punktilious
Collaborator

punktilious commented Apr 2, 2021

Is your feature request related to a problem? Please describe.
The current $reindex operation uses the reindex_tstamp column in LOGICAL_RESOURCES for selecting which resources to process for a particular thread. This selection process involves updating the column and using database-specific techniques to avoid concurrency issues from the resulting row-locks.

In PostgreSQL, this update leaves "tombstone" markers in the blocks which only get cleaned when the table is next vacuumed. If vacuuming is not aggressive enough, $reindex slows significantly due to the extra index blocks being scanned every time the request processor attempts to acquire a new resource to process.

Describe the solution you'd like
Although using the reindex_tstamp simplifies the client needed to drive the reindex operation (it can be as simple as a shell script running curl in a loop), better throughput could be achieved by avoiding the update statement and instead tracking progress (checkpointing) with a more sophisticated client implementation.

  1. Implement a new API endpoint to provide a list of logical_resource_ids in increasing order (similar to the whole system history operation, but ignoring multiple versions). Each request would include an "afterLogicalResourceId=xxx" parameter, and _count can be used to limit the number of resources selected each time. Only the logical_resource_id values would need to be returned, not the resources.
  2. The client can package up a list of logical resources and submit to a thread pool for reindexing.
  3. Support new reindex parameters allowing the client to specify which logical_resource_ids (or possibly resourceType/logicalId). The current parameter block allows only a single resource to be specified.
  4. The client keeps track of the max contiguous logical_resource_id which has been successfully processed (non-trivial, but implementable with care).
  5. Client can checkpoint this value by writing a file locally so it can be restarted from a known point if necessary.
  6. As soon as the batch of logical_resource_ids is submitted to the thread pool, the main thread can make another request for the next batch.
  7. The client can ask for a large number of ids in one go and break them into smaller batches for processing in parallel.
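The loop described above can be sketched as follows (a minimal Python illustration, not the eventual implementation; `fetch_page` and `reindex_batch` stand in for the proposed endpoint calls, whose names and parameters were still being designed at this point). The interesting part is step 4: batches complete out of order, so the client buffers completed batches and only advances the checkpoint when the gap closes.

```python
import concurrent.futures

class BatchCheckpoint:
    """Step 4: track the max id through which ALL earlier batches are done."""
    def __init__(self):
        self._done = {}         # batch sequence number -> max id in that batch
        self._next_seq = 0      # lowest batch number not yet checkpointed
        self.checkpoint = None  # safe restart point (step 5: persist this)

    def complete(self, seq, max_id):
        self._done[seq] = max_id
        # Advance only while every earlier batch has also completed
        while self._next_seq in self._done:
            self.checkpoint = self._done.pop(self._next_seq)
            self._next_seq += 1

def run_reindex(fetch_page, reindex_batch, page_size=500, batch_size=50, workers=8):
    """Steps 1-7: page ids after a cursor, fan batches out to a thread pool,
    and checkpoint the max id through which all earlier batches finished.
    fetch_page(after_id, count) -> ordered list of logical_resource_ids;
    reindex_batch(ids) performs one $reindex call (both are hypothetical)."""
    tracker = BatchCheckpoint()
    cursor, seq, futures = 0, 0, {}
    with concurrent.futures.ThreadPoolExecutor(max_workers=workers) as pool:
        while True:
            ids = fetch_page(after_id=cursor, count=page_size)  # step 1
            if not ids:
                break
            cursor = ids[-1]  # main thread immediately asks for more (step 6)
            for i in range(0, len(ids), batch_size):  # step 7: split the page
                batch = ids[i:i + batch_size]
                futures[pool.submit(reindex_batch, batch)] = (seq, batch[-1])
                seq += 1
        for fut in concurrent.futures.as_completed(futures):
            fut.result()  # surface any reindex failure
            tracker.complete(*futures[fut])  # steps 4-5
    return tracker.checkpoint
```

A real client would additionally bound the number of outstanding batches and persist `tracker.checkpoint` to a local file each time it advances, so a restart can resume from `afterLogicalResourceId=<checkpoint>`.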

Check that the resulting throughput is greater than that of the current reindex operation (which can be kept), and that throughput doesn't degrade over time due to delayed vacuuming of the logical_resources table.

@prb112 prb112 added the enhancement New feature or request label Apr 5, 2021
@lmsurpre lmsurpre added the P2 Priority 2 - Should Have label May 24, 2021
@lmsurpre lmsurpre added this to the Sprint 2021-08 milestone Jun 1, 2021
@tbieste tbieste self-assigned this Jun 1, 2021
@tbieste
Contributor

tbieste commented Jun 2, 2021

Sub-task 1:
-- Get resource IDs (in order), so a client can call the $reindex operation passing in the resource type+Id of resources to reindex.

New Operation: $list-index, or $index, or $reindex-list, etc.
-- Similar to whole system history, but only gets the current version of resources

Input parameters:
-- _afterLogicalResourceId (optional): similar to _afterHistoryId for history API
-- _count (optional): similar to _count for history API

Output:
Bundle with the following bundle entry fields filled in:
-- fullUrl (resource type+id)
-- request.method (POST/PUT/DELETE) <--- So client can skip deleted resources???
-- request.uri (resource type+id)
-- response.status (200, 201)
-- lastModified (reindex_tstamp???) <--- So client can skip resources reindexed after reindex_tstamp???

Note:
At this point, client can use the returned list of resource type+id, and filter out deleted resources and resources reindexed after a desired tstamp.
Client can build lists of resource type+id, and in other threads, call $reindex with that list.

Sub-task 2:
-- Update existing $reindex operation with ability to pass in a list of logical resource IDs.

--Add a new "resourceLogicalIds" parameter that takes resource types+ids as a comma-delimited string. If both "resourceLogicalIds" and "resourceLogicalId" parameters are specified, then either error.
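The parameter handling proposed above might look roughly like this (a Python sketch for illustration; the server itself is Java, and the exact validation behavior was still open — the parameter names come from the comment, everything else is assumption):

```python
def parse_resource_logical_ids(params):
    """Sketch: "resourceLogicalIds" is a comma-delimited list of
    resourceType/logicalId values; specifying it together with the
    single-valued "resourceLogicalId" is an error."""
    if "resourceLogicalIds" in params and "resourceLogicalId" in params:
        raise ValueError("resourceLogicalIds and resourceLogicalId are mutually exclusive")
    if "resourceLogicalIds" in params:
        values = [v.strip() for v in params["resourceLogicalIds"].split(",") if v.strip()]
    elif "resourceLogicalId" in params:
        values = [params["resourceLogicalId"].strip()]
    else:
        return []
    for v in values:
        rtype, _, logical_id = v.partition("/")
        if not rtype or not logical_id:
            raise ValueError(f"expected resourceType/logicalId, got: {v}")
    return values
```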

Sub-task 3:
-- Drive testing of this function. Add tests for the operations, and perhaps update fhir-bucket as well.

@prb112
Contributor

prb112 commented Jun 2, 2021

Any thought of adding a type parameter instead of adding another endpoint? also prefixing _afterLogicalResourceId (no need to prefix)

@lmsurpre
Member

lmsurpre commented Jun 3, 2021

Any thought of adding a type parameter instead of adding another endpoint? also prefixing _afterLogicalResourceId (no need to prefix)

A similar thought I had on sub-task 1 is that the list of resources you want is basically equivalent to a whole-system search with _total=none, but with no resource contents in the output. I opened a related feature request at #2027 and so I'd vote to get that implemented and make our standard search with no parameters (and system-defined sort) blazing fast.

-- request.method (POST/PUT/DELETE) <--- So client can skip deleted resources???

seems like we just shouldn't list deleted resources in the output of this operation

@punktilious
Collaborator Author

-- request.method (POST/PUT/DELETE) <--- So client can skip deleted resources???

I don't think the operation should return deleted resources - they never need to be reindexed.

-- lastModified (reindex_tstamp???) <--- So client can skip resources reindexed after reindex_tstamp???

This mode of reindex won't be updating the reindex_tstamp, so this field isn't that useful.

Note that the call to actually perform the reindex should support a list of resources, not just a single resource. This will make the calls a little more efficient (perhaps 50 at a time, which can be processed inside a single transaction).
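As a rough illustration of that multi-resource call, the request body could carry the ids in a single parameter (the name `indexIds` is borrowed from later discussion in this thread; the exact Parameters shape here is an assumption, not the implemented API):

```python
def reindex_parameters(index_ids):
    """Hypothetical FHIR Parameters payload for a multi-resource $reindex
    call, batching e.g. 50 ids into one request/transaction."""
    return {
        "resourceType": "Parameters",
        "parameter": [{
            "name": "indexIds",
            "valueString": ",".join(str(i) for i in index_ids),
        }],
    }
```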

@tbieste
Contributor

tbieste commented Jun 3, 2021

-- lastModified (reindex_tstamp???) <--- So client can skip resources reindexed after reindex_tstamp???

This mode of reindex won't be updating the reindex_tstamp, so this field isn't that useful.

Note that the call to actually perform the reindex should support a list of resources, not just a single resource. This will make the calls a little more efficient (perhaps 50 at a time, which can be processed inside a single transaction).

Right, since $reindex would now accept a list of resources, I was thinking if the client knew the reindex tstamp of each resource, it could skip over resources that it knows have been reindexed after tstamp (the parameter passed into $reindex) when it builds the list of resources to pass into $reindex. I thought that would be useful.
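That client-side filtering could be as simple as the following sketch (the entry shape, with an `id` and a `tstamp` field, is assumed here, since the bundle fields were still being debated above):

```python
def filter_needing_reindex(entries, reindex_tstamp):
    """Keep only resources whose last reindex timestamp is older than the
    target tstamp, i.e. skip anything already reindexed after it."""
    return [e["id"] for e in entries if e["tstamp"] < reindex_tstamp]
```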

@tbieste
Contributor

tbieste commented Jun 3, 2021

Ah, ok. I now understand the distinction between logical_resource_ids and logical_id. I see a couple options:

  1. Use/enhance existing search API and use that to obtain the resource_ids in order, and have $reindex accept a list of resourcetype+rsrcid.
  2. Create a new endpoint to just get the logical_resource_ids for $reindex, and have $reindex accept a list of logical_resource_ids.

I think option 2 makes the most sense: a pair of endpoints ($reindex and $list-index) that are used together, especially in case there is special metadata, such as the logical_resource_ids, that doesn't fit well in the output Bundle from a normal search. Option 2 feels cleaner to me for this purpose.

tbieste added a commit that referenced this issue Jun 3, 2021
Signed-off-by: Troy Biesterfeld <tbieste@us.ibm.com>
tbieste added a commit that referenced this issue Jun 28, 2021
Issue #2195 - Enable client checkpoint driven reindex
tbieste added a commit that referenced this issue Jul 1, 2021
Issue #2195 - Have client-driven reindex exit if no work to do
@lmsurpre
Member

lmsurpre commented Jul 12, 2021

I re-opened #1822 for some ongoing pain with reindexing large databases, but otherwise this seems to be working.

Currently, the $reindex implementation is logging one message per resource:

2021-07-12 00:44:28.762 fhir-test-server INFO Reindexing FHIR Resource 'Observation/17484449db6-e4fca274-2e4e-41b9-b6ac-9bef39e41e0b'
2021-07-12 00:44:28.762 fhir-test-server INFO Reindexing FHIR Resource 'Organization/174840c1a81-4b321742-3a9e-49f0-926a-9b68dce26ecf'
2021-07-12 00:44:28.762 fhir-test-server INFO Reindexing FHIR Resource 'Observation/1747d28a560-7804219c-a9ee-456e-8a92-6d0516920cd8'
...

For large reindex jobs, this gets very verbose. For example, the fhir-bucket client only logs one message per request (and the default is around 50 resources in a single request). What we'd like is:

  1. reduce the log level of these messages to FINE
  2. ensure that we log the resource type + resource id of the resource whenever we fail to reindex one
  3. optionally, if simple, log a message or two at level INFO for each request. This could be just the list of indexIds that the client has requested to be indexed, or possibly an overview with the count of resources by resource type. For example:
    • Reindex requested for 1234,1236,1238,1240,1242,1244,1246,1248,1250,1252
    • Reindex completed (1 Patient, 1 Practitioner, 8 Observation)
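The per-request summary in suggestion 3 is cheap to produce; a sketch of the by-type rollup (function name and output order are illustrative):

```python
import collections

def summarize(resource_refs):
    """Collapse per-resource log lines into one per-request summary,
    e.g. "Reindex completed (2 Observation, 1 Patient)"."""
    counts = collections.Counter(ref.split("/")[0] for ref in resource_refs)
    parts = ", ".join(f"{n} {rtype}" for rtype, n in sorted(counts.items()))
    return f"Reindex completed ({parts})"
```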

@lmsurpre
Member

lmsurpre commented Jul 12, 2021

I think we should support $retrieve-index over either GET or POST, but I only seem to be able to invoke it via POST.

Even if we chose not to support it (which I think would be wrong), today it comes back with a 500 internal server error whereas it should be a 400 or 405.
GET [base]?afterIndexId=355501804&_count=50

{
    "resourceType": "OperationOutcome",
    "id": "ac-1e-b3-2c-69820fe4-66fd-465e-995a-69fc34722414",
    "issue": [
        {
            "severity": "fatal",
            "code": "exception",
            "details": {
                "text": "FHIROperationException: HTTP method not supported: GET"
            }
        }
    ]
}
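The intended behavior is a method-dispatch check that maps an unsupported verb to 405 before any processing, rather than letting it surface as a 500 (a sketch; the OperationOutcome shape here is illustrative, not the server's exact output):

```python
def invoke_operation(method, supported=("GET", "POST")):
    """Reject unsupported HTTP methods up front with 405 Method Not Allowed
    instead of a 500 internal server error."""
    if method.upper() not in supported:
        return 405, {
            "resourceType": "OperationOutcome",
            "issue": [{
                "severity": "error",
                "code": "not-supported",
                "details": {"text": f"HTTP method not supported: {method}"},
            }],
        }
    return 200, None  # normal operation processing would continue here
```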

@tbieste
Contributor

tbieste commented Jul 12, 2021

Added support for GET.

@lmsurpre
Member

lmsurpre commented Jul 13, 2021

the updated "Reindexing was completed" logic looks good:

2021-07-13 03:19:01.371 00000024    INFO x.ClientDrivenReindexOperation Waiting for 60 threads to complete before exiting
2021-07-13 03:19:06.372 00000024    INFO x.ClientDrivenReindexOperation Reindexing was completed
2021-07-13 03:20:04.224 00000023    INFO   com.ibm.fhir.bucket.app.Main Stopping all services

@lmsurpre
Member

I confirmed that $retrieve-index can now be invoked via GET.

@lmsurpre
Member

I also verified that, when the reindex operation fails, we now get a log message that more clearly indicates which resource failed. For example:

2021-07-13 00:08:00.451 fhir-test-server-6fc67f88-k4phd fhir-test-server SEVERE Non-retryable error while performing reindex of FHIR Resource 'Condition/1761a89e641-78345552-ba2c-4874-b9aa-3602c96355ec'
com.ibm.fhir.database.utils.api.DataAccessException: org.postgresql.util.PSQLException: ERROR: invalid byte sequence for encoding "UTF8": 0x00
	at com.ibm.fhir.database.utils.postgres.PostgresTranslator.translate(PostgresTranslator.java:104)
	at com.ibm.fhir.persistence.jdbc.postgres.PostgresResourceReferenceDAO.doCommonTokenValuesUpsert(PostgresResourceReferenceDAO.java:143)
	at com.ibm.fhir.persistence.jdbc.dao.impl.ResourceReferenceDAO.upsertCommonTokenValues(ResourceReferenceDAO.java:710)
	at com.ibm.fhir.persistence.jdbc.dao.impl.ResourceReferenceDAO.persist(ResourceReferenceDAO.java:786)
	at com.ibm.fhir.persistence.jdbc.dao.impl.ResourceReferenceDAO.addNormalizedValues(ResourceReferenceDAO.java:182)
	at com.ibm.fhir.persistence.jdbc.dao.impl.ParameterVisitorBatchDAO.close(ParameterVisitorBatchDAO.java:626)
	at com.ibm.fhir.persistence.jdbc.dao.ReindexResourceDAO.updateParameters(ReindexResourceDAO.java:354)
	at com.ibm.fhir.persistence.jdbc.impl.FHIRPersistenceJDBCImpl.updateParameters(FHIRPersistenceJDBCImpl.java:2643)
	at com.ibm.fhir.persistence.jdbc.impl.FHIRPersistenceJDBCImpl.reindex(FHIRPersistenceJDBCImpl.java:2572)
	at com.ibm.fhir.server.util.FHIRRestHelper.doReindex(FHIRRestHelper.java:3101)
	at com.ibm.fhir.operation.reindex.ReindexOperation.doInvoke(ReindexOperation.java:169)
	at com.ibm.fhir.server.operation.spi.AbstractOperation.invoke(AbstractOperation.java:66)
	at com.ibm.fhir.server.util.FHIRRestHelper.doInvoke(FHIRRestHelper.java:1165)
	at com.ibm.fhir.server.resources.Operation.invoke(Operation.java:126)
	at com.ibm.fhir.server.resources.Operation$Proxy$_$$_WeldClientProxy.invoke(Unknown Source)
	at jdk.internal.reflect.GeneratedMethodAccessor58.invoke(Unknown Source)
	at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.base/java.lang.reflect.Method.invoke(Method.java:566)
	at com.ibm.ws.jaxrs20.cdi.component.JaxRsFactoryImplicitBeanCDICustomizer.serviceInvoke(JaxRsFactoryImplicitBeanCDICustomizer.java:342)
	at com.ibm.ws.jaxrs20.server.LibertyJaxRsServerFactoryBean.performInvocation(LibertyJaxRsServerFactoryBean.java:641)
	at com.ibm.ws.jaxrs20.server.LibertyJaxRsInvoker.performInvocation(LibertyJaxRsInvoker.java:160)
	at org.apache.cxf.service.invoker.AbstractInvoker.invoke(AbstractInvoker.java:101)
	at com.ibm.ws.jaxrs20.server.LibertyJaxRsInvoker.invoke(LibertyJaxRsInvoker.java:273)
	at org.apache.cxf.jaxrs.JAXRSInvoker.invoke(JAXRSInvoker.java:213)
	at com.ibm.ws.jaxrs20.server.LibertyJaxRsInvoker.invoke(LibertyJaxRsInvoker.java:444)
	at org.apache.cxf.jaxrs.JAXRSInvoker.invoke(JAXRSInvoker.java:112)
	at org.apache.cxf.interceptor.ServiceInvokerInterceptor$1.run(ServiceInvokerInterceptor.java:59)
	at org.apache.cxf.interceptor.ServiceInvokerInterceptor.handleMessage(ServiceInvokerInterceptor.java:96)
	at org.apache.cxf.phase.PhaseInterceptorChain.doIntercept(PhaseInterceptorChain.java:308)
	at org.apache.cxf.transport.ChainInitiationObserver.onMessage(ChainInitiationObserver.java:123)
	at org.apache.cxf.transport.http.AbstractHTTPDestination.invoke(AbstractHTTPDestination.java:274)
	at com.ibm.ws.jaxrs20.endpoint.AbstractJaxRsWebEndpoint.invoke(AbstractJaxRsWebEndpoint.java:137)
	at com.ibm.websphere.jaxrs.server.IBMRestServlet.handleRequest(IBMRestServlet.java:146)
	at com.ibm.websphere.jaxrs.server.IBMRestServlet.doPost(IBMRestServlet.java:104)
	at javax.servlet.http.HttpServlet.service(HttpServlet.java:706)
	at com.ibm.websphere.jaxrs.server.IBMRestServlet.service(IBMRestServlet.java:96)
	at com.ibm.ws.webcontainer.servlet.ServletWrapper.service(ServletWrapper.java:1253)
	at com.ibm.ws.webcontainer.servlet.ServletWrapper.handleRequest(ServletWrapper.java:746)
	at com.ibm.ws.webcontainer.servlet.ServletWrapper.handleRequest(ServletWrapper.java:443)
	at com.ibm.ws.webcontainer.filter.WebAppFilterChain.invokeTarget(WebAppFilterChain.java:183)
	at com.ibm.ws.webcontainer.filter.WebAppFilterChain.doFilter(WebAppFilterChain.java:94)
	at com.ibm.fhir.server.filter.rest.FHIRRestServletFilter.doFilter(FHIRRestServletFilter.java:155)
	at javax.servlet.http.HttpFilter.doFilter(HttpFilter.java:127)
	at com.ibm.ws.webcontainer.filter.FilterInstanceWrapper.doFilter(FilterInstanceWrapper.java:201)
	at com.ibm.ws.webcontainer.filter.WebAppFilterChain.doFilter(WebAppFilterChain.java:91)
	at com.ibm.ws.security.jaspi.JaspiServletFilter.doFilter(JaspiServletFilter.java:56)
	at com.ibm.ws.webcontainer.filter.FilterInstanceWrapper.doFilter(FilterInstanceWrapper.java:201)
	at com.ibm.ws.webcontainer.filter.WebAppFilterChain.doFilter(WebAppFilterChain.java:91)
	at com.ibm.ws.webcontainer.filter.WebAppFilterManager.doFilter(WebAppFilterManager.java:1002)
	at com.ibm.ws.webcontainer.filter.WebAppFilterManager.invokeFilters(WebAppFilterManager.java:1140)
	at com.ibm.ws.webcontainer.webapp.WebApp.handleRequest(WebApp.java:5061)
	at com.ibm.ws.webcontainer.osgi.DynamicVirtualHost$2.handleRequest(DynamicVirtualHost.java:314)
	at com.ibm.ws.webcontainer.WebContainer.handleRequest(WebContainer.java:1007)
	at com.ibm.ws.webcontainer.osgi.DynamicVirtualHost$2.run(DynamicVirtualHost.java:279)
	at com.ibm.ws.http.dispatcher.internal.channel.HttpDispatcherLink$TaskWrapper.run(HttpDispatcherLink.java:1159)
	at com.ibm.ws.http.dispatcher.internal.channel.HttpDispatcherLink.wrapHandlerAndExecute(HttpDispatcherLink.java:428)
	at com.ibm.ws.http.dispatcher.internal.channel.HttpDispatcherLink.ready(HttpDispatcherLink.java:387)
	at com.ibm.ws.http.channel.internal.inbound.HttpInboundLink.handleDiscrimination(HttpInboundLink.java:566)
	at com.ibm.ws.http.channel.internal.inbound.HttpInboundLink.handleNewRequest(HttpInboundLink.java:500)
	at com.ibm.ws.http.channel.internal.inbound.HttpInboundLink.processRequest(HttpInboundLink.java:360)
	at com.ibm.ws.http.channel.internal.inbound.HttpInboundLink.ready(HttpInboundLink.java:327)
	at com.ibm.ws.channel.ssl.internal.SSLConnectionLink.determineNextChannel(SSLConnectionLink.java:1100)
	at com.ibm.ws.channel.ssl.internal.SSLConnectionLink$MyReadCompletedCallback.complete(SSLConnectionLink.java:675)
	at com.ibm.ws.channel.ssl.internal.SSLReadServiceContext$SSLReadCompletedCallback.complete(SSLReadServiceContext.java:1824)
	at com.ibm.ws.tcpchannel.internal.WorkQueueManager.requestComplete(WorkQueueManager.java:504)
	at com.ibm.ws.tcpchannel.internal.WorkQueueManager.attemptIO(WorkQueueManager.java:574)
	at com.ibm.ws.tcpchannel.internal.WorkQueueManager.workerRun(WorkQueueManager.java:958)
	at com.ibm.ws.tcpchannel.internal.WorkQueueManager$Worker.run(WorkQueueManager.java:1047)
	at com.ibm.ws.threading.internal.ExecutorServiceImpl$RunnableWrapper.run(ExecutorServiceImpl.java:238)
	at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
	at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
	at java.base/java.lang.Thread.run(Thread.java:836)
Caused by: org.postgresql.util.PSQLException: ERROR: invalid byte sequence for encoding "UTF8": 0x00
	at org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:2553)
	at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:2285)
	at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:323)
	at org.postgresql.jdbc.PgStatement.executeInternal(PgStatement.java:473)
	at org.postgresql.jdbc.PgStatement.execute(PgStatement.java:393)
	at org.postgresql.jdbc.PgPreparedStatement.executeWithFlags(PgPreparedStatement.java:164)
	at org.postgresql.jdbc.PgPreparedStatement.executeUpdate(PgPreparedStatement.java:130)
	at jdk.internal.reflect.GeneratedMethodAccessor42.invoke(Unknown Source)
	at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.base/java.lang.reflect.Method.invoke(Method.java:566)
	at org.postgresql.ds.PGPooledConnection$StatementHandler.invoke(PGPooledConnection.java:441)
	at com.sun.proxy.$Proxy117.executeUpdate(Unknown Source)
	at com.ibm.ws.rsadapter.jdbc.WSJdbcPreparedStatement.executeUpdate(WSJdbcPreparedStatement.java:520)
	at com.ibm.fhir.persistence.jdbc.postgres.PostgresResourceReferenceDAO.doCommonTokenValuesUpsert(PostgresResourceReferenceDAO.java:140)
	... 70 more
