
Policy API: Add soft delete feature #108

Closed
jrschumacher opened this issue Jan 31, 2024 · 6 comments · Fixed by #191
Labels
adr Architecture Decision Records pertaining to OpenTDF comp:policy Policy Configuration ( attributes, subject mappings, resource mappings, kas registry) enhancement New feature or request

Comments

@jrschumacher
Member

jrschumacher commented Jan 31, 2024

ADR: Soft deletes should cascade from namespaces -> attribute definitions -> attribute values

Taken from comment below #108 (comment)

Background

In our Policy Config table schema, we have a Foreign Key (FK) relationship from namespaces to attribute definitions, and another FK relationship from attribute definitions to attribute values. We have decided that due to the scenario above in the description of this issue, we want to rely on soft-deletes to avoid accidental or malicious creations of attributes/values in the place of their deleted counterparts.

If we were relying on hard deletes, we would be given certain benefits by the relational FK constraint when deleting so that we could either:

  1. cascade a delete from an attribute definition to its values, OR
  2. prevent deleting an attribute unless its associated values had been deleted first

These benefits of our schema and chosen DB would prevent unintended side effects and require thoughtful behavior on the part of platform admins. However, now that we are restricting hard deletes to dangerous/special rpc's and specific "superadmin-esque" functionalities for known dangerous mutations by adding active/inactive state to these three tables, we need to decide the cascading nature of soft deletes with inactive state.
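For illustration, the two hard-delete behaviors above map directly onto standard FK clauses. A minimal sketch, using SQLite via Python's sqlite3 for portability (the table names are hypothetical; the platform's actual schema and Postgres DDL may differ):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite enforces FKs only when enabled

conn.executescript("""
CREATE TABLE attribute_definitions (id INTEGER PRIMARY KEY, name TEXT);
-- ON DELETE RESTRICT gives behavior 2: the parent cannot be deleted while
-- children exist. Swapping in ON DELETE CASCADE would give behavior 1 instead.
CREATE TABLE attribute_values (
    id INTEGER PRIMARY KEY,
    definition_id INTEGER NOT NULL
        REFERENCES attribute_definitions(id) ON DELETE RESTRICT,
    value TEXT
);
INSERT INTO attribute_definitions VALUES (1, 'Classification');
INSERT INTO attribute_values VALUES (1, 1, 'TopTopSecret');
""")

try:
    conn.execute("DELETE FROM attribute_definitions WHERE id = 1")
except sqlite3.IntegrityError as exc:
    print("hard delete blocked:", exc)  # the FK constraint protects the children
```

Soft deletes via an `active` flag forgo both of these built-in behaviors, which is exactly why the cascade question below has to be decided explicitly.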

Chosen Option: Rely on PostgreSQL triggers on UPDATEs to state to cascade down

Considered Options:

  1. Rely on PostgreSQL triggers on UPDATEs to state to cascade down
  2. Rely on the server's db layer to make DB queries that cascade the soft deletion down
  3. Allow INACTIVE namespaces with ACTIVE attribute definitions/values, and INACTIVE definitions with ACTIVE namespaces and values

Option 1: Rely on PostgreSQL triggers on UPDATEs to state to cascade down

Postgres triggers let us, as the platform maintainers, define the cascade behavior. Keeping the functionality within Postgres rather than in the server has additional benefits.

  • 🟩 Good, because cascading behavior of inactive state makes the most sense when the user intention is to delete (which is still going to be a relatively dangerous mutation)
  • 🟩 Good, because a single in-database cascade will be more efficient than multiple server round-trip queries
  • 🟩 Good, because we are indexing on the state column in the three tables for speed of lookup/update
  • 🟩 Good, because it has already been proven out with an integration test for repeatability in this branch
  • 🟩 Good, because this does not block any superadmin/dangerous/special deletion capability and will be fully distinct from any cascade/constraint handling there
  • 🟨 Neutral, because triggers are a Postgres feature, but we haven't made any firm decisions yet about what other SQL databases/versions we'll support or if we'll require customers to use the latest PostgreSQL
  • 🟥 Bad, because it's a less well-known feature of Postgres
  • 🟥 Bad, because the trigger will ALWAYS cascade the INACTIVE UPDATE down the tree; we lose the FK-style protection against a one-off deletion if that's what the user really intended, so we'll need to make it clear to them what their change will do.
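A minimal sketch of the trigger approach. The work described in this issue targets PostgreSQL (PL/pgSQL trigger functions shipped in a migration); to keep this runnable here it uses SQLite's trigger syntax instead, and all table/trigger names are hypothetical:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA recursive_triggers = ON")  # ensure one trigger's UPDATE fires the next
conn.executescript("""
CREATE TABLE namespaces (id INTEGER PRIMARY KEY, name TEXT, active INTEGER DEFAULT 1);
CREATE TABLE attribute_definitions (
    id INTEGER PRIMARY KEY, namespace_id INTEGER, name TEXT, active INTEGER DEFAULT 1);
CREATE TABLE attribute_values (
    id INTEGER PRIMARY KEY, definition_id INTEGER, value TEXT, active INTEGER DEFAULT 1);

-- Deactivating a namespace deactivates its attribute definitions...
CREATE TRIGGER cascade_ns_deactivation
AFTER UPDATE OF active ON namespaces WHEN NEW.active = 0
BEGIN
    UPDATE attribute_definitions SET active = 0 WHERE namespace_id = NEW.id;
END;

-- ...and deactivating a definition deactivates its values.
CREATE TRIGGER cascade_def_deactivation
AFTER UPDATE OF active ON attribute_definitions WHEN NEW.active = 0
BEGIN
    UPDATE attribute_values SET active = 0 WHERE definition_id = NEW.id;
END;

INSERT INTO namespaces VALUES (1, 'demo.com', 1);
INSERT INTO attribute_definitions VALUES (1, 1, 'Classification', 1);
INSERT INTO attribute_values VALUES (1, 1, 'TopTopSecret', 1);
""")

# One soft delete at the top cascades all the way down, in the database.
conn.execute("UPDATE namespaces SET active = 0 WHERE id = 1")
```

Note there is deliberately no trigger in the other direction: deactivating a value never bubbles up to its definition or namespace.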

Option 2: Rely on the server's db layer to make DB queries that cascade the soft deletion down

The same as option 1, but with the cascading logic put into server-driven queries and not Postgres triggers.

  • 🟩 Good, because it does not tie us to any Postgres-specific feature and can be reused across SQL db's
  • 🟩 Good, because of all the other good benefits of option 1
  • 🟥 Bad, because performance: anything being soft deleted will mean multiple round trips
  • 🟥 Bad, because more room for bugs: anything being soft deleted will mean multiple queries
  • 🟥 Bad, because we can more easily end up in a bad state where the server fails or a secondary/tertiary query fails but the first succeeded
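For contrast, a sketch of the Option 2 shape: the server's db layer issues the cascade itself, wrapped in one transaction so a mid-cascade failure rolls everything back. Names are hypothetical and SQLite stands in for Postgres:

```python
import sqlite3

def soft_delete_namespace(conn: sqlite3.Connection, namespace_id: int) -> None:
    # Three statements instead of one trigger-driven UPDATE: this is the extra
    # round-trip cost (and bug surface) called out above. The transaction
    # guards against the partial-failure state described in the last point.
    with conn:  # commit on success, rollback on any exception
        conn.execute(
            "UPDATE attribute_values SET active = 0 WHERE definition_id IN ("
            "  SELECT id FROM attribute_definitions WHERE namespace_id = ?)",
            (namespace_id,),
        )
        conn.execute(
            "UPDATE attribute_definitions SET active = 0 WHERE namespace_id = ?",
            (namespace_id,),
        )
        conn.execute("UPDATE namespaces SET active = 0 WHERE id = ?", (namespace_id,))

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE namespaces (id INTEGER PRIMARY KEY, active INTEGER DEFAULT 1);
CREATE TABLE attribute_definitions (id INTEGER PRIMARY KEY, namespace_id INTEGER, active INTEGER DEFAULT 1);
CREATE TABLE attribute_values (id INTEGER PRIMARY KEY, definition_id INTEGER, active INTEGER DEFAULT 1);
INSERT INTO namespaces (id) VALUES (1);
INSERT INTO attribute_definitions (id, namespace_id) VALUES (1, 1);
INSERT INTO attribute_values (id, definition_id) VALUES (1, 1);
""")
soft_delete_namespace(conn, 1)
```

The cascade order (children first, parent last) is a choice, not a requirement, since everything is in one transaction anyway.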

Option 3: Allow INACTIVE namespaces with ACTIVE attribute definitions/values, and INACTIVE definitions with ACTIVE namespaces and values

  • 🟩 Good, because it gives maximum control to the user
  • 🟥 Bad, because that maximized control is actually more confusing
  • 🟥 Bad, because it is most likely to cause a bad state where access is denied for an unknown reason
  • 🟥 Bad, because it is unintuitive from an engineering/maintenance perspective

As a platform maintainer, I want to make sure that data which is deleted is soft-deleted so that I can prevent dangerous side effects and restore accidental deletes.

There are situations where the side effect of a delete could result in a data leak when two admins are maintaining the platform. Example:

  • Admin A adds attribute demo.com/attr/Classification/value/TopTopSecret
    • Creates subject mapping with Deep Secret Spy
  • User A creates TDF SecretSpy-SecretSantas-MailingList.csv.tdf with demo.com/attr/Classification/value/TopTopSecret
  • Admin A deletes attribute demo.com/attr/Classification/value/TopTopSecret
  • Admin B adds attribute demo.com/attr/Classification/value/TopTopSecret
    • and creates subject mapping with Top Secret Toy Inventor of Tops
  • User B with Top Secret Toy Inventor of Tops subject attribute accesses SecretSpy-SecretSantas-MailingList.csv.tdf

The soft-delete feature will prevent the recreation of the attribute with the same name on the same namespace.
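The mechanism is simply that the soft-deleted row never leaves the table, so an ordinary uniqueness constraint rejects the re-creation. A sketch under a hypothetical single-table schema (the platform's real schema splits namespaces/definitions/values, but the collision works the same way):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE attribute_values (
    id INTEGER PRIMARY KEY,
    fqn TEXT UNIQUE,          -- e.g. demo.com/attr/Classification/value/TopTopSecret
    active INTEGER DEFAULT 1
);
INSERT INTO attribute_values (fqn) VALUES ('demo.com/attr/Classification/value/TopTopSecret');
""")

# Admin A "deletes" the value: the row is only deactivated, not removed.
conn.execute("UPDATE attribute_values SET active = 0 WHERE fqn LIKE '%TopTopSecret'")

# Admin B's re-creation attempt collides with the still-present row,
# so the old TDF can never be silently remapped to a new subject mapping.
try:
    conn.execute(
        "INSERT INTO attribute_values (fqn) VALUES "
        "('demo.com/attr/Classification/value/TopTopSecret')"
    )
except sqlite3.IntegrityError as exc:
    print("re-creation blocked:", exc)
```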

Acceptance Criteria

@strantalis strantalis added comp:attributes comp:policy Policy Configuration ( attributes, subject mappings, resource mappings, kas registry) enhancement New feature or request labels Feb 1, 2024
@strantalis
Member

relates to #96

@jakedoublev
Contributor

jakedoublev commented Feb 13, 2024

ADR: Soft deletes should cascade from namespaces -> attribute definitions -> attribute values

@jakedoublev jakedoublev added the adr Architecture Decision Records pertaining to OpenTDF label Feb 13, 2024
@jrschumacher
Member Author

jrschumacher commented Feb 13, 2024

@jakedoublev would you do some research into whether this would be supported in other DBs? If not, how would we go about supporting it?

Could we utilize this approach for Postgres and then, for future DBs, fall back to Option 3 or implement Option 2 in a driver approach? It seems like we could say "Postgres is the most performant DB we support, but we also support X, Y, and Z with some performance impact during these operations."

Lastly, consider the estimated frequency of usage:

  • Read - VERY HIGH
  • Write - HIGH
  • Update - LOW - MEDIUM
  • Delete - VERY LOW - LOW

@biscoe916
Member

Thanks for putting this together @jakedoublev.

To be honest, I'm not sure if performance is a realistic concern here. It seems most of the time this action will be run as a one off. Are there use-cases I'm not considering where the multiple queries to complete a soft delete will be problematic?

With that said, I'm in favor of option 1, with the caveat that if in the future we decide to support databases other than Postgres, we switch to option 2 for all configurations so that we don't have 2 solutions to the same problem.

@jakedoublev
Contributor

would you do some research into whether this would be supported in other DBs? @jrschumacher

It turns out support for SQL triggers is wider than I anticipated. There are some differences in syntax, and there may be a little variation in Postgres from cloud to cloud, but some semblance of SQL triggers exists across all of these.

DB support for triggers:

  • MySQL: docs
  • Oracle: docs
  • IBM Db2: docs
  • SQLite: docs

Could we utilize this approach for Postgres and then in future DBs we fall back to Option 3 or implement Option 2 in a driver approach? @jrschumacher

I think this is now the second time we've considered doubling down on Postgres's capabilities (see the metadata discussion here). I personally feel these are both small things to refactor if/when a need arises to support multiple DBs. To @biscoe916's point, avoiding 2 solutions to the same problem at the time we support multiple DBs will likely mean moving anything beyond basic SQL into the server anyway for the clearest path to broadest relational DB support.

To be honest, I'm not sure if performance is a realistic concern here. It seems most of the time this action will be run as a one off. Are there use-cases I'm not considering where the multiple queries to complete a soft delete will be problematic? @biscoe916

I think you're right and performance is indeed not a concern because of the infrequency of these deletions. It's something I felt/feel is always worth calling out, but realistically you are correct that there should be no felt impact by an end user.

With that said, I'm in favor of option 1, with the caveat that if in the future we decide to support databases other than Postgres, we switch to option 2 for all configurations so that we don't have 2 solutions to the same problem. @biscoe916

Thanks for the feedback! This makes sense and I will consider it the path forward.

jrschumacher added a commit that referenced this issue Feb 17, 2024
…efinitions, attribute values #96 #108 (#191)

This work encompasses the following (including multiple breaking changes):
1. Tables `namespaces`, `attribute_definitions`, `attribute_values` updated via migration to add `active` boolean state
2. cascading deactivation from namespace -> attr -> values (in DB implementation via SQL trigger function provided in migration up/down)
3. integration tests:
    - cascading behavior ns -> attr -> val
    - integration tests proving no deactivation bubbling up behavior val -> attr -> ns
4. protos:
    - updated to provide a state enum back on all 3 resources (with helpful comments about defaults)
    - new example grpcurl requests/responses with these updates
    - all three LIST rpc's filterable by state as a common Message type (including an ANY enum option that returns both active TRUE and FALSE rows) and defaulting to ACTIVE if not specified
5. preservation of DB delete functionality/tests which will be exposed in newly separate rpc's

This PR does _not_ include:
1. Unsafe RPCs for actual deletion: #115
2. Unsafe RPCs for dangerous mutations (same issue)
3. Prevention of update mutation of INACTIVE namespaces/attributes/attributeValues
4. Prevention of creation of new attributes/values on a prior created then deactivated namespace
5. Provision of parent or child state in GET responses beyond that of the resource requested (i.e. namespace & value state are not given in a GET for an attribute, even though the attribute's state is given)

---------

Co-authored-by: Ryan Schumacher <jschumacher@virtru.com>
@dmihalcik-virtru
Member

Two more disadvantages to this approach

  • Google Cloud Spanner does not yet support Triggers
  • Deleting a parent or grandparent object will cause changes to the rows for the children and grandchildren. This means that if I toggle visibility of a namespace object, all corresponding attributes and instance values will be left in a 'deleted' state. If I'd already had some marked as 'deleted', it will be difficult to sort through and undelete only the recently deleted items

github-merge-queue bot pushed a commit that referenced this issue Apr 22, 2024
🤖 I have created a release *beep* *boop*
---


## [0.1.0](sdk-v0.1.0...sdk/v0.1.0) (2024-04-22)


### Features

* add structured schema policy config
([#51](#51))
([8a6b876](8a6b876))
* **auth:** add authorization via casbin
([#417](#417))
([292f2bd](292f2bd))
* in-process service to service communication
([#311](#311))
([ec5eb76](ec5eb76))
* **kas:** support HSM and standard crypto
([#497](#497))
([f0cbe03](f0cbe03))
* key access server assignments
([#111](#111))
([a48d686](a48d686)),
closes [#117](#117)
* key access server registry impl
([#66](#66))
([cf6b3c6](cf6b3c6))
* **namespaces CRUD:** protos, generated SDK, db interactivity for
namespaces table ([#54](#54))
([b3f32b1](b3f32b1))
* **PLAT-3112:** Initial consumption of ec_key_pair functions by nanotdf
([#586](#586))
([5e2cba0](5e2cba0))
* **policy:** add FQN pivot table
([#208](#208))
([abb734c](abb734c))
* **policy:** add soft-delete/deactivation to namespaces, attribute
definitions, attribute values
[#96](#96)
[#108](#108)
([#191](#191))
([02e92a6](02e92a6))
* **resourcemapping:** resource mapping implementation
([#83](#83))
([c144db1](c144db1))
* **sdk:** BACK-1966 get auth wired up to SDK using `Options`
([#271](#271))
([f1bacab](f1bacab))
* **sdk:** BACK-1966 implement fetching a DPoP token
([#45](#45))
([dbd3cf9](dbd3cf9))
* **sdk:** BACK-1966 make the unwrapper retrieve public keys as well
([#260](#260))
([7d051a1](7d051a1))
* **sdk:** BACK-1966 pull rewrap into auth config
([#252](#252))
([84017aa](84017aa))
* **sdk:** Include auth token in grpc
([#367](#367))
([75cb5cd](75cb5cd))
* **sdk:** normalize token exchange
([#546](#546))
([9059dff](9059dff))
* **sdk:** Pass dpop key through to `rewrap`
([#435](#435))
([2d283de](2d283de))
* **sdk:** read `expires_in` from token response and use it to refresh
access tokens ([#445](#445))
([8ecbe79](8ecbe79))
* **sdk:** sdk stub
([#10](#10))
([8dfca6a](8dfca6a))
* **sdk:** take a function so that callers can use this the way that
they want ([#340](#340))
([72059cb](72059cb))
* **subject-mappings:** refactor to meet db schema
([#59](#59))
([59a073b](59a073b))
* **tdf:** implement tdf3 encrypt and decrypt
([#73](#73))
([9d0e0a0](9d0e0a0))
* **tdf:** sdk interface changes
([#123](#123))
([2aa2422](2aa2422))
* **tdf:** sdk interface cleanup
([#201](#201))
([6f7d815](6f7d815))
* **tdf:** TDFOption varargs interface
([#235](#235))
([b3fb720](b3fb720))


### Bug Fixes

* **archive:** remove 10gb zip file test
([#373](#373))
([6548f55](6548f55))
* attribute missing rpc method for listing attribute values
([#69](#69))
([1b3a831](1b3a831))
* **attribute value:** fixes attribute value crud
([#86](#86))
([568df9c](568df9c))
* **issue 90:** remove duplicate attribute_id from attribute value
create/update, and consumes schema setup changes in namespaces that were
introduced for integration testing
([#100](#100))
([e0f6d07](e0f6d07))
* **issue-124:** SDK kas registry import name mismatch
([#125](#125))
([112638b](112638b)),
closes [#124](#124)
* **proto/acre:** fix resource encoding service typo
([#30](#30))
([fe709d2](fe709d2))
* remove padding when b64 encoding
([#437](#437))
([d40e94a](d40e94a))
* SDK Quickstart
([#628](#628))
([f27ab98](f27ab98))
* **sdk:** change unwrapper creation
([#346](#346))
([9206435](9206435))
* **sdk:** double bearer token in auth config
([#350](#350))
([1bf4699](1bf4699))
* **sdk:** fixes Manifests JSONs with OIDC
([#140](#140))
([a4b6937](a4b6937))
* **sdk:** handle err
([#548](#548))
([ebabb6c](ebabb6c))
* **sdk:** make KasInfo fields public
([#320](#320))
([9a70498](9a70498))
* **sdk:** shutdown conn
([#352](#352))
([3def038](3def038))
* **sdk:** temporarily move unwrapper creation into options func.
([#309](#309))
([b34c2fe](b34c2fe))
* **sdk:** use the dialoptions even with no client credentials
([#400](#400))
([a7f1908](a7f1908))
* **security:** add a new encryption keypair different from dpop keypair
([#461](#461))
([7deb51e](7deb51e))

---
This PR was generated with [Release
Please](https://github.com/googleapis/release-please). See
[documentation](https://github.com/googleapis/release-please#release-please).

Co-authored-by: opentdf-automation[bot] <149537512+opentdf-automation[bot]@users.noreply.github.com>