
SWATCH-1622 Resolve incoming Event conflicts and amend differences #3172

Merged
merged 9 commits into main from mstead/SWATCH-1622 on May 7, 2024

Conversation

@mstead (Contributor) commented Mar 21, 2024

Jira issue: SWATCH-1622

Description

When processing incoming Events from either the topic or Admin API, we now check if there are any events that are in conflict, and resolve them when required.

A conflict occurs when there is an existing event that matches the EventKey (org_id, event_type, event_source, instance_id, timestamp) of the incoming event. In the case of a match, conflict resolution must occur.

Example Conflict/Resolution scenarios include:

  1. No conflicts
  • The incoming event is valid and no resolution is required.
  • An EventRecord is created from the incoming Event.
  2. Conflicting event with equal measurements
  • The incoming Event is ignored.
  • The existing EventRecord remains as is with no updates.
  3. Conflicting event with different measurements (see the sketch after this list)
  • An EventRecord is added with a negative accumulative measurement value (resetting the current total for the timestamp to 0).
  • The incoming EventRecord measurements reflect the values at this timestamp.
  • e.g. An incoming event has a measurement of 10 cores. An existing EventRecord matching the tuple has a measurement of 20 cores. An EventRecord would be created with a cores value of -20 and another EventRecord created to apply the new value of 10 cores.
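
The following is a rough sketch of that resolution rule, written in Python purely for illustration (the actual resolver operates on EventRecord/EventKey entities in Java, and the function and field names below are hypothetical):

# Illustration only: the conflict-resolution rule described above, using plain dicts.
def resolve_measurements(incoming, existing_totals):
    # incoming: {"uom": value} measurements of the incoming event for one EventKey.
    # existing_totals: current totals for the same EventKey (empty when there is no conflict).
    resolved = []
    for uom, value in incoming.items():
        current = existing_totals.get(uom)
        if current is None:
            # Scenario 1: no conflict; persist the incoming value as-is.
            resolved.append({"uom": uom, "value": value})
        elif current == value:
            # Scenario 2: equal measurement; nothing to persist.
            continue
        else:
            # Scenario 3: a deduction resets the current total to 0,
            # then a second record applies the incoming value.
            resolved.append({"uom": uom, "value": -current})
            resolved.append({"uom": uom, "value": value})
    return resolved

# Scenario 3 example: existing total of 20 cores, incoming value of 10 cores
# -> [{'uom': 'Cores', 'value': -20.0}, {'uom': 'Cores', 'value': 10.0}]
print(resolve_measurements({"Cores": 10.0}, {"Cores": 20.0}))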

NOTES:

  1. Removed the CleanupEvent

We no longer need to send/process cleanup events since the EventConflictResolver will address any existing events so that they are reflected when tallied.

How To Test

The following demonstrates how to test this functionality using smqe-tools. If you prefer a client tool such as curl instead, events can be added using the internal API: POST /v1/internal/rpc/tally/events (an illustrative request is sketched below).
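
For example, a hypothetical request using Python's requests library. The endpoint path, the x-rh-swatch-psk header, and the localhost:8000 base URL come from this description and the local.yaml below; the payload field names are inferred from the EventKey tuple and the event output shown later, so verify them against the API spec before relying on this sketch.

import datetime
import requests

# Assumed event shape; adjust event_type/event_source to valid values for your setup.
event = {
    "org_id": "myorg",
    "event_source": "prometheus",   # assumption
    "event_type": "snapshot",       # assumption
    "instance_id": "7a9d56a9-da02-434b-ae85-3a8b13322569",
    "timestamp": datetime.datetime.now(datetime.timezone.utc)
        .replace(minute=0, second=0, microsecond=0).isoformat(),
    "measurements": [{"uom": "Cores", "value": 10.0}],
}

resp = requests.post(
    "http://localhost:8000/v1/internal/rpc/tally/events",
    json=[event],                                # assumption: the endpoint accepts a list of events
    headers={"x-rh-swatch-psk": "placeholder"},  # matches swatch_psk in local.yaml below
)
print(resp.status_code, resp.text)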

Deploy the following components

  1. swatch-metrics
QUARKUS_MANAGEMENT_ENABLED=false SERVER_PORT=8882 PROM_URL=http://localhost:8101/api/v1 ENABLE_SYNCHRONOUS_OPERATIONS=true ./gradlew :swatch-metrics:quarkusDev
  2. swatch-tally
SPRING_PROFILES_ACTIVE=api,worker,kafka-queue DEV_MODE=true ./gradlew :bootRun
  3. swatch-contracts
QUARKUS_MANAGEMENT_ENABLED=false SERVER_PORT=8001 ./gradlew :swatch-contracts:quarkusDev

Set up smqe-tools

Follow the HOWTO steps to install smqe-tools.

Make sure that you set up the correct paths to the services you will be deploying:

cat <<EOF > local.yaml
local:
  swatch_conduit: "http://localhost:9000"
  swatch_tally: "http://localhost:8000"
  swatch_contracts: "http://localhost:9090"
  swatch_metrics: "http://localhost:8882"
  swatch_psk: "placeholder"
EOF

Activate the virtual env (ensure correct env path)

. .env/bin/activate

Set the proper dynaconf env

export ENV_FOR_DYNACONF=local

Enable some utility functions

Create the file event_amendment_utils.py in the smqe-tools checkout directory. This file provides a couple of utility functions wrapping the logic used in this test process.

cat <<EOF > event_amendment_utils.py
import datetime
import json
from datetime import timedelta
from smqe_tools import logging, create_payg_data, fetch_events_in_range, datetime_to_iso8601_format

# Use the internal API to fetch existing events and print them
def print_existing_event_measurements(org_id, start, end):
    event_resp = fetch_events_in_range(
        org_id, begin_time=datetime_to_iso8601_format(start.replace(minute=0, second=0, microsecond=0)),
        end_time=datetime_to_iso8601_format(end.replace(minute=0, second=0, microsecond=0)))
    
    events = json.loads(event_resp['detail'])
    logging.info(f"Found {len(events)} events")

    for event in sorted(events, key=lambda e: (e['timestamp'], e['record_date'])):
        measurement = event['measurements'][0]
        logging.info(f"PRODUCT_TAG: {event['product_tag']} TIMESTAMP: {event['timestamp']} RECORD_DATE: {event['record_date']} INSTANCE_ID: {event['instance_id']} UOM: {measurement['uom']} VALUE: {measurement['value']}")

# Create a new Event via the internal admin API, and print
# the event list if specified.
def add_event(org_id, account, product_id, start=None, print_events=False, **kwargs):
    current_hour_start = start
    if not current_hour_start:
        current_hour_start = datetime.datetime.now(datetime.timezone.utc).replace(minute=0, second=0, microsecond=0) - timedelta(hours=1)

    instance_id = create_payg_data(org_id, account, product_id=product_id, start_time=current_hour_start, **kwargs)

    if print_events:
        current_hour_end = current_hour_start + timedelta(hours=1)
        print_existing_event_measurements(org_id, current_hour_start, current_hour_end)

    return instance_id
EOF

Test Amendment logic

Launch the python interpreter

$ python
Python 3.10.11 (main, Apr  5 2023, 00:00:00) [GCC 12.2.1 20221121 (Red Hat 12.2.1-4)] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> 

Run the following commands.

  1. Load the add_event util.
# load the add_event util
from event_amendment_utils import add_event
  2. Import the initial event for the current hour, with an Instance-hours measurement (added by default) and a Cores value of 1.0. The add_event function imports an Event via the API and prints the current events after the import is complete.
uuid = add_event('myorg', 'myaccount', 'rosa', print_events=True, cores=1.0)

You should see that there were 2 events added for a single host instance.

INFO:root:Found 2 events
INFO:root:PRODUCT_TAG: ['rosa'] TIMESTAMP: 2024-03-19T17:00:00Z RECORD_DATE: 2024-03-19T18:11:43.903047488Z INSTANCE_ID: 7a9d56a9-da02-434b-ae85-3a8b13322569 UOM: Instance-hours VALUE: 1.0
INFO:root:PRODUCT_TAG: ['rosa'] TIMESTAMP: 2024-03-19T17:00:00Z RECORD_DATE: 2024-03-19T18:11:43.926189424Z INSTANCE_ID: 7a9d56a9-da02-434b-ae85-3a8b13322569 UOM: Cores VALUE: 1.0

  3. Repeat the add_event call, but this time pass the uuid as instance_uuid so that a new Cores value of 10.0 is applied to the host for the current hour.
add_event('myorg', 'myaccount', 'rosa', instance_uuid=uuid, print_events=True, cores=10.0)

You should see that the existing Instance-hours event was considered a duplicate and is not changed. However, the new Cores measurement triggers an amendment. This amendment results in one deduction event for the current total (-1.0) and a new event representing the incoming value (10.0), so the Cores tally for the hour becomes 1.0 - 1.0 + 10.0 = 10.0.

INFO:root:Found 4 events
INFO:root:PRODUCT_TAG: ['rosa'] TIMESTAMP: 2024-03-19T17:00:00Z RECORD_DATE: 2024-03-19T18:11:43.903047488Z INSTANCE_ID: 7a9d56a9-da02-434b-ae85-3a8b13322569 UOM: Instance-hours VALUE: 1.0
INFO:root:PRODUCT_TAG: ['rosa'] TIMESTAMP: 2024-03-19T17:00:00Z RECORD_DATE: 2024-03-19T18:11:43.926189424Z INSTANCE_ID: 7a9d56a9-da02-434b-ae85-3a8b13322569 UOM: Cores VALUE: 1.0
INFO:root:PRODUCT_TAG: ['rosa'] TIMESTAMP: 2024-03-19T17:00:00Z RECORD_DATE: 2024-03-19T18:23:23.050153932Z INSTANCE_ID: 7a9d56a9-da02-434b-ae85-3a8b13322569 UOM: Cores VALUE: -1.0
INFO:root:PRODUCT_TAG: ['rosa'] TIMESTAMP: 2024-03-19T17:00:00Z RECORD_DATE: 2024-03-19T18:23:23.058884603Z INSTANCE_ID: 7a9d56a9-da02-434b-ae85-3a8b13322569 UOM: Cores VALUE: 10.0
  4. Add events with the exact same values again. Doing this should result in NO change.
add_event('myorg', 'myaccount', 'rosa', instance_uuid=uuid, print_events=True, cores=10.0)
INFO:root:Found 4 events
INFO:root:PRODUCT_TAG: ['rosa'] TIMESTAMP: 2024-03-19T17:00:00Z RECORD_DATE: 2024-03-19T18:11:43.903047488Z INSTANCE_ID: 7a9d56a9-da02-434b-ae85-3a8b13322569 UOM: Instance-hours VALUE: 1.0
INFO:root:PRODUCT_TAG: ['rosa'] TIMESTAMP: 2024-03-19T17:00:00Z RECORD_DATE: 2024-03-19T18:11:43.926189424Z INSTANCE_ID: 7a9d56a9-da02-434b-ae85-3a8b13322569 UOM: Cores VALUE: 1.0
INFO:root:PRODUCT_TAG: ['rosa'] TIMESTAMP: 2024-03-19T17:00:00Z RECORD_DATE: 2024-03-19T18:23:23.050153932Z INSTANCE_ID: 7a9d56a9-da02-434b-ae85-3a8b13322569 UOM: Cores VALUE: -1.0
INFO:root:PRODUCT_TAG: ['rosa'] TIMESTAMP: 2024-03-19T17:00:00Z RECORD_DATE: 2024-03-19T18:23:23.058884603Z INSTANCE_ID: 7a9d56a9-da02-434b-ae85-3a8b13322569 UOM: Cores VALUE: 10.0
  5. Update the Cores value again, this time to 20.0.
add_event('myorg', 'myaccount', 'rosa', instance_uuid=uuid, print_events=True, cores=20.0)

You should see that the existing Instance-hours event was considered a duplicate and is not changed. However, the new Cores measurement triggers an amendment. This amendment results in another deduction event for the current value (-10.0) and a new event representing the incoming value (20.0), so the Cores tally for the hour becomes 1.0 - 1.0 + 10.0 - 10.0 + 20.0 = 20.0.

INFO:root:Found 6 events
INFO:root:PRODUCT_TAG: ['rosa'] TIMESTAMP: 2024-03-19T17:00:00Z RECORD_DATE: 2024-03-19T18:11:43.903047488Z INSTANCE_ID: 7a9d56a9-da02-434b-ae85-3a8b13322569 UOM: Instance-hours VALUE: 1.0
INFO:root:PRODUCT_TAG: ['rosa'] TIMESTAMP: 2024-03-19T17:00:00Z RECORD_DATE: 2024-03-19T18:11:43.926189424Z INSTANCE_ID: 7a9d56a9-da02-434b-ae85-3a8b13322569 UOM: Cores VALUE: 1.0
INFO:root:PRODUCT_TAG: ['rosa'] TIMESTAMP: 2024-03-19T17:00:00Z RECORD_DATE: 2024-03-19T18:23:23.050153932Z INSTANCE_ID: 7a9d56a9-da02-434b-ae85-3a8b13322569 UOM: Cores VALUE: -1.0
INFO:root:PRODUCT_TAG: ['rosa'] TIMESTAMP: 2024-03-19T17:00:00Z RECORD_DATE: 2024-03-19T18:23:23.058884603Z INSTANCE_ID: 7a9d56a9-da02-434b-ae85-3a8b13322569 UOM: Cores VALUE: 10.0
INFO:root:PRODUCT_TAG: ['rosa'] TIMESTAMP: 2024-03-19T17:00:00Z RECORD_DATE: 2024-03-19T18:33:00.597537566Z INSTANCE_ID: 7a9d56a9-da02-434b-ae85-3a8b13322569 UOM: Cores VALUE: -10.0
INFO:root:PRODUCT_TAG: ['rosa'] TIMESTAMP: 2024-03-19T17:00:00Z RECORD_DATE: 2024-03-19T18:33:00.597935474Z INSTANCE_ID: 7a9d56a9-da02-434b-ae85-3a8b13322569 UOM: Cores VALUE: 20.0
  6. Change the Cores value for the current hour back to 1.0.
add_event('myorg', 'myaccount', 'rosa', instance_uuid=uuid, print_events=True, cores=1.0)

You should see that the existing Instance-hours event was considered a duplicate and is not changed. However, the new Cores measurement triggers an amendment. This amendment results in another deduction event for the current value (-20.0) and a new event representing the incoming value (1.0), leaving the Cores tally for the hour at 1.0.

INFO:root:Found 8 events
INFO:root:PRODUCT_TAG: ['rosa'] TIMESTAMP: 2024-03-21T18:00:00Z RECORD_DATE: 2024-03-21T19:24:41.613157786Z INSTANCE_ID: 0d964b49-c1d1-4d99-a541-aa37ceb675ad UOM: Instance-hours VALUE: 1.0
INFO:root:PRODUCT_TAG: ['rosa'] TIMESTAMP: 2024-03-21T18:00:00Z RECORD_DATE: 2024-03-21T19:24:41.635678891Z INSTANCE_ID: 0d964b49-c1d1-4d99-a541-aa37ceb675ad UOM: Cores VALUE: 1.0
INFO:root:PRODUCT_TAG: ['rosa'] TIMESTAMP: 2024-03-21T18:00:00Z RECORD_DATE: 2024-03-21T19:25:03.96554977Z INSTANCE_ID: 0d964b49-c1d1-4d99-a541-aa37ceb675ad UOM: Cores VALUE: -1.0
INFO:root:PRODUCT_TAG: ['rosa'] TIMESTAMP: 2024-03-21T18:00:00Z RECORD_DATE: 2024-03-21T19:25:03.96742974Z INSTANCE_ID: 0d964b49-c1d1-4d99-a541-aa37ceb675ad UOM: Cores VALUE: 10.0
INFO:root:PRODUCT_TAG: ['rosa'] TIMESTAMP: 2024-03-21T18:00:00Z RECORD_DATE: 2024-03-21T19:25:39.085448494Z INSTANCE_ID: 0d964b49-c1d1-4d99-a541-aa37ceb675ad UOM: Cores VALUE: -10.0
INFO:root:PRODUCT_TAG: ['rosa'] TIMESTAMP: 2024-03-21T18:00:00Z RECORD_DATE: 2024-03-21T19:25:39.086414234Z INSTANCE_ID: 0d964b49-c1d1-4d99-a541-aa37ceb675ad UOM: Cores VALUE: 20.0
INFO:root:PRODUCT_TAG: ['rosa'] TIMESTAMP: 2024-03-21T18:00:00Z RECORD_DATE: 2024-03-21T19:26:40.932096469Z INSTANCE_ID: 0d964b49-c1d1-4d99-a541-aa37ceb675ad UOM: Cores VALUE: -20.0
INFO:root:PRODUCT_TAG: ['rosa'] TIMESTAMP: 2024-03-21T18:00:00Z RECORD_DATE: 2024-03-21T19:26:40.932973676Z INSTANCE_ID: 0d964b49-c1d1-4d99-a541-aa37ceb675ad UOM: Cores VALUE: 1.0

@mstead mstead added QE Pull request should be approved by QE before merge Dev Pull requests that need developer review labels Mar 21, 2024
@san7ket san7ket self-assigned this Mar 22, 2024
@Sgitario (Contributor) left a comment


I'm really happy to remove the clean up events, great work!

@@ -22,10 +22,16 @@

import static org.hibernate.jpa.HibernateHints.HINT_FETCH_SIZE;
Contributor:

The "deleteStaleEvents" method needs to be removed as well.

Contributor (Author):

DONE - next push.

* @param keys the {@link EventKey} to match on.
* @return a list of conflicting events
*/
default List<EventRecord> findConflictingEvents(List<EventKey> keys) {
Contributor:

My main concern here is not about allocating huge data in memory, since I don't expect too many conflicting events, but about the performance and the number of memory allocations.
With the solution in place, we're doing:
1.- List<Event> eventsToResolve: keep in memory the batch of events we receive from the topic or the admin API (let's say we process 500 events)
2.- Map<EventKey, List<EventRecord>> allConflicting: run a single query to find all the events by event key (I don't expect too much data, but in the worst scenario we might need to allocate unlimited events by event key, which is a risk).
3.- List<EventRecord> resolvedEvents: the actual events to be persisted.
4.- loop over the eventsToResolve list and for every item:
4.1.- create the EventKey class again
4.2.- check whether the event key exists in allConflicting
4.2.1.- if it does not exist, just add the event into resolvedEvents
4.2.2.- if it exists, Map<String, Double> deductions: aggregate the metrics with the same event key
4.2.2.1.- if there are no deductions, just add the event into resolvedEvents
4.2.2.2.- if there are deductions, add the new event with the deduction for the metric into resolvedEvents
4.2.2.3.- if the event has other metrics with no deductions, add new events only for these metrics into resolvedEvents

Note that with steps 1, 2, and 3 we're keeping almost the same data duplicated in memory, plus the performance penalty of creating the event key.

I suggest changing the above as follows:

1.- as suggested by another comment, we can directly use Map<EventKey, List<Event>> eventsToResolve as input
2.- instead of using Map<EventKey, List<EventRecord>> allConflicting, which can keep unlimited data in memory, modify the query to return Map<EventKey, Map<String, Double>> eventsByDeduction. Note that I tried this and it worked fine with something like:

select org_id, event_type, event_source, instance_id, timestamp, measurements->>'uom' as uom, sum(cast(measurements->>'value' as double precision)) as total_value
from (
select org_id, event_type, event_source, instance_id, timestamp, jsonb_array_elements(data->'measurements') as measurements
from events
where (org_id, event_type, event_source, instance_id, timestamp) in (%s)
) a
group by org_id, event_type, event_source, instance_id, timestamp, measurements->>'uom'

This way, the logic within step 4 is much simpler and more performant since we don't need to perform any aggregation (the database does it for us).
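
Sketched illustratively in Python for brevity (the actual proposal concerns the Java resolver; the names here are hypothetical), step 4 would then reduce to a lookup in the pre-aggregated totals rather than an in-memory aggregation:

# totals_by_key is what the aggregating query above would return:
# {event_key: {uom: summed_value}}
def resolve_batch(events_to_resolve, totals_by_key):
    resolved = []
    for key, incoming_events in events_to_resolve.items():
        totals = totals_by_key.get(key, {})
        for event in incoming_events:
            for m in event["measurements"]:
                current = totals.get(m["uom"])
                if current == m["value"]:
                    continue  # duplicate measurement, nothing to amend
                if current is not None:
                    # deduction resets the stored total for this uom to 0
                    resolved.append({**event, "measurements": [{"uom": m["uom"], "value": -current}]})
                # persist the incoming value
                resolved.append({**event, "measurements": [{"uom": m["uom"], "value": m["value"]}]})
    return resolved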

Contributor (Author):

I chatted with Kevin about this suggestion a little while ago and we both agreed that at some point we may need to do a little more than just aggregation with conflicting events (i.e. event_source prioritization). This is a valid concern, though I'm not sure it would be very common to have many conflicting events. I DO think it would be nice to have this solution in our back pocket should we need it.

I'm happy to have a chat if you feel strongly about this change though.

Contributor:

+1

@Sgitario Sgitario self-assigned this Mar 22, 2024
@mstead (Contributor, Author) commented Mar 22, 2024

/retest

@Sgitario Sgitario added the needs-update Pull requests that need to be updated to address issues found by dev or QE reviewers label Apr 3, 2024
@mstead (Contributor, Author) commented Apr 22, 2024

@Sgitario I made the changes in multiple commits. I'm happy to squash them down when you finish your next round of reviewing.

@mstead mstead removed the needs-update Pull requests that need to be updated to address issues found by dev or QE reviewers label Apr 22, 2024
@mstead (Contributor, Author) commented Apr 22, 2024

/retest

@Sgitario (Contributor) left a comment


lgtm now.
Though, there are some failing tests that need fixing.

@Sgitario Sgitario added Dev/approved Pull requests that have been approved by all assigned developers needs-update Pull requests that need to be updated to address issues found by dev or QE reviewers and removed Dev Pull requests that need developer review labels Apr 23, 2024
@mstead (Contributor, Author) commented Apr 23, 2024

lgtm now. Though, there are some failing tests that need fixing.

Tests are clean locally. Trying to figure out why they are failing for the PR.

@mstead mstead requested a review from san7ket April 23, 2024 12:52
@san7ket (Contributor) commented Apr 23, 2024

The failures may be unrelated; I will take a look at those.

@mstead (Contributor, Author) commented Apr 23, 2024

/retest

3 similar comments
@san7ket (Contributor) commented Apr 24, 2024

/retest

@san7ket (Contributor) commented Apr 24, 2024

/retest

@san7ket (Contributor) commented Apr 25, 2024

/retest

@mstead mstead removed the needs-update Pull requests that need to be updated to address issues found by dev or QE reviewers label Apr 25, 2024
@san7ket (Contributor) commented Apr 26, 2024

Can confirm that auto opt-in after triggering metering doesn't happen anymore.

app.rhsm_subscriptions.perform_metering("rosa")
send: b'POST /api/swatch-metrics/v1/internal/metering/OpenShift-metrics?orgId=3340851&endDate=2024-04-26T14:00:00Z&rangeInMinutes=240 HTTP/1.1\r\nHost: swatch-metrics-service:8000\r\nUser-Agent: python-requests/2.31.0\r\nAccept-Encoding: gzip, deflate\r\nAccept: */*\r\nConnection: keep-alive\r\nContent-Type: application/json\r\nOrigin: https://cloud.redhat.com\r\nx-rh-swatch-psk: placeholder\r\nContent-Length: 0\r\n\r\n'
reply: 'HTTP/1.1 204 No Content\r\n'

In [5]: app.rhsm_subscriptions.rest_client.default_api.get_opt_in_config()
send: b'GET /api/rhsm-subscriptions/v1/opt-in HTTP/1.1\r\nHost: swatch-api-service.ephemeral-83ouge.svc:8000\r\nAccept-Encoding: identity\r\nAccept: application/vnd.api+json\r\nUser-Agent: OpenAPI-Generator/1.0.0/python\r\nOrigin: http://swatch-api-service\r\nX-4Scale-Env: qa\r\nx-rh-identity:==\r\nContent-Type: application/json\r\n\r\n'
2024-04-26 12:59:54.390 [    INFO] [iqe.base.rest_client] REST: METHOD=GET, request_id=None, params=[]
Out[5]: {'data': {'opt_in_complete': False}, 'meta': {'org_id': '3340851'}}

@san7ket (Contributor) commented Apr 26, 2024

/retest

@san7ket (Contributor) left a comment


Pending the opt-in issue, everything else looks good. Updating the tests is in progress.

@mstead (Contributor, Author) commented Apr 30, 2024

Can confirm that auto opt-in after triggering metering doesn't happen anymore.

To add some context around why this opt-in isn't happening for this failing test...

When this test makes a metering request to the metering service, no new metrics are pulled from Prometheus, which results in no event messages being put on the event topic. Therefore, the Event processing logic on the worker service is not triggered, and opt-in is never triggered either.

This worked prior to these changes because a cleanup event would always be sent, regardless of whether or not metrics were returned by Prometheus, resulting in the opt-in logic getting hit on the worker service.

Unless we are able to get Prometheus to actually return metrics during the metering request, this test will fail. As I understand it, this is only possible in the stage environment and not in EE.

@mstead (Contributor, Author) commented Apr 30, 2024

@san7ket Please confirm based on last week's discussion:

  • The test will be updated to run in the stage environment only.
  • A test will be added to verify that Events with a negative measurement value cannot be added directly via the API.

I will add the logic to prevent negative measurement values via the API (it is already in place on the message ingestion side), and we will let the NEW test fail until this change goes in.

@san7ket (Contributor) commented Apr 30, 2024

/retest

1 similar comment
@Aurobinda55

/retest

@mstead (Contributor, Author) commented May 2, 2024

/retest

@san7ket (Contributor) commented May 3, 2024

/retest

@mstead (Contributor, Author) commented May 3, 2024

@san7ket The failure appears to be a test issue, but I need to confirm on Monday to make sure that it isn't a regression from my latest changes.

mstead added 9 commits May 6, 2024 10:30
When processing incoming Events from either the topic or Admin API,
we now check if there are any events that are in conflict, and resolve
them when required.

A conflict occurs when there is an existing event that matches the
EventKey (org_id, event_type, event_source, instance_id, timestamp)
of the incoming event. In the case of a match, conflict resolution
must occur.

Example Conflict/Resolution scenarios include:

1. No Conflicts
 - Incoming event is valid and no resolution required.
 - An EventRecord is created from the incoming Event.
2. Conflicting event with equal measurements
 - Incoming Event is ignored.
 - Existing EventRecord remains as is with no updates.
3. Conflicting event with different measurements
 - EventRecord added with a negative accumulative measurement value
   (resetting the current total for the timestamp to 0).
 - Incoming EventRecord measurements reflect the values at this
   timestamp.
  - e.g. An incoming event has a measurement of 10 cores. An existing EventRecord matching
         the tuple has a measurement of 20 cores. An EventRecord would be created with
         a cores value of -20 and another EventRecord created to apply the new value of 10
         cores.

NOTES:
1. Removed the CleanupEvent

We no longer need to send/process cleanup events since the EventConflictResolver will
address any existing events so that they are reflected when tallied.
When there is an error processing a batch of incoming events,
an attempt is made to process them one by one before failing
completely. Conflicts will need to be resolved in this case
as well.
Amendment type is null for initial events, and is
set to DEDUCTION when a deduction was required to
resolve the event conflicts.
This change serves two purposes:
1. Any incoming event must have an instance_id in order
   to be processed as a service instance event. Do not
   attempt to process the event in this case.
2. During the transition away from cleanup events,
   checking for the existence of the instance_id will
   keep the logs a little quieter when a cleanup event
   is picked up. Such events are skipped when detected.
Needed to activate the 'test-inventory' profile since the
worker profile requires the inventory database. Without this
profile the connection to the inventory DB was not being
configured to match the test container DB.
@san7ket san7ket added QE/approved Pull requests that have been approved by all assigned QEs and removed QE Pull request should be approved by QE before merge labels May 7, 2024
@mstead mstead merged commit eb4e558 into main May 7, 2024
5 checks passed
@mstead mstead deleted the mstead/SWATCH-1622 branch May 7, 2024 13:14