
Add SLO datasources #931

Merged
merged 5 commits into master from datasource-slo
Apr 30, 2021

Conversation

adrien-f
Contributor

Closes #911

Signed-off-by: Adrien FILLON adrien.fillon@manomano.com

@adrien-f
Contributor Author

adrien-f commented Feb 17, 2021

Hi 👋

A few notes:

  • Quite inspired by what was done for the monitor data source.
  • There's some duplication between the single DS and the multiple DS; let me know if you'd like a refactor of this.
  • I came across a bit of a refactor here (the tests were moved into their own package/folder); let me know if anything else was missing.
  • The exposed fields are basic, as discussed in the original issue.
  • I'm not sure which build/flags you used for tfplugindocs? I only committed the new files, but there were a few other changes and missing docs (like docs/resources/slo_correction.md).

Let me know what you think!

@adrien-f adrien-f force-pushed the datasource-slo branch 3 times, most recently from eccdf0c to 7eba9e9 on February 17, 2021 at 20:26
Contributor

@nmuesch nmuesch left a comment


Hey @adrien-f Thanks for the PR, and for bearing with us during the recent repo restructure; we're trying to organize things a bit. 🙂

We're currently running tfplugindocs with no flags, but only committing the relevant changes, as there are a couple of final pieces we're still working out there.

I haven't fully gone over the tests yet, but the core code looks good to me. I would like to discuss the single+multiple datasource approach, though. It would be nice if we could find a way to get this into a single data source, since the object really is the same; it's just a matter of how many get returned.

Perhaps we could have a flag in the config like multiple: true (with any name 😉) and differentiate the logic in the READ that way?

return err
}
d.SetId("datadog-service-level-objectives")
//d.SetId(fmt.Sprintf("%d", hashcode.String(req.String())))
Contributor


Looks like this may have been leftover?


Use this data source to retrieve information about an existing SLO for use in other resources.


Contributor


To get an HCL config generated here, can you add an example to the examples/data-source folder?
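For reference, a minimal config for that folder might look like the following sketch; the `name_query` attribute is an assumption based on the filterable schema discussed in this PR, not a confirmed attribute name:

```hcl
# Hypothetical example for examples/data-source/; attribute names
# assume the schema added in this PR.
data "datadog_service_level_objective" "example" {
  name_query = "My SLO"
}

output "slo_id" {
  value = data.datadog_service_level_objective.example.id
}
```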

@adrien-f adrien-f force-pushed the datasource-slo branch 3 times, most recently from 3e4f3b6 to 131b84a on February 26, 2021 at 14:54
@adrien-f
Contributor Author

Hi 👋 Sorry for the delay!

I've rebased and added examples.

Regarding the question of having one or two datasources, I really have no opinion on this 😅 Looking at providers in the registry like AWS or Azure, I see they often have two datasources.

I'm not sure there are any clear guidelines on this; I suppose there's a reason? Maybe you could get some pointers from HashiCorp here?

In any case, thanks again for this 🙂 Let me know,

@nmuesch
Contributor

nmuesch commented Mar 5, 2021

Hey @adrien-f Thanks for checking around and providing those links. Given that this pattern looks to be the standard, I think it's OK to proceed with two data sources.

I plan on taking another pass through the PR, but could you update the branch with master? We've added some more CI for linting and docs and it would be good to confirm this is succeeding on those 🙂

Contributor

@nmuesch nmuesch left a comment


Overall these new data-sources look good! Thanks for working on this.
I left some inline notes, but they're mostly docs + test comments.

if err := d.Set("slos", slos); err != nil {
return err
}
d.SetId("datadog-service-level-objectives")
Contributor


I think since the data-source allows filtering, and isn't a "global"/"unique" data-source that returns all SLOs, we'll need to give it a unique ID. Maybe something similar to - https://github.com/DataDog/terraform-provider-datadog/blob/master/datadog/data_source_datadog_security_monitoring_rules.go#L140

Comment on lines 14 to 18
ctx, accProviders := testAccProviders(context.Background(), t)
uniq := strings.ReplaceAll(uniqueEntityName(ctx, t), "-", "_")
accProvider := testAccProvider(t, accProviders)

resource.ParallelTest(t, resource.TestCase{
Contributor


Sorry, you caught us in a time of some refactors/improvements to the project 🙂
Could you update the beginning of these tests to match - https://github.com/DataDog/terraform-provider-datadog/blob/master/datadog/tests/data_source_datadog_dashboard_test.go#L13-L18

Comment on lines 14 to 18
ctx, accProviders := testAccProviders(context.Background(), t)
uniq := strings.ReplaceAll(uniqueEntityName(ctx, t), "-", "_")
accProvider := testAccProvider(t, accProviders)

resource.ParallelTest(t, resource.TestCase{
Contributor


Same here on the test updates 🙇

Comment on lines 56 to 64
return resource.ComposeTestCheckFunc(
resource.TestCheckResourceAttrSet("data.datadog_service_level_objectives.foo", "slos.0.id"),
resource.TestCheckResourceAttr("data.datadog_service_level_objectives.foo", "slos.0.name", uniq),
resource.TestCheckResourceAttr("data.datadog_service_level_objectives.foo", "slos.0.type", "metric"),
)
}
Contributor


I think it'd be helpful here to assert at least 2 SLOs are returned. Would you be able to add a new config for this?

Elem: &schema.Resource{
Schema: map[string]*schema.Schema{
"id": {
Description: "Id of Datadog service level objective",
Contributor


Suggested change
Description: "Id of Datadog service level objective",
Description: "ID of the Datadog service level objective",

Computed: true,
},
"name": {
Description: "Name of Datadog service level objective",
Contributor


Suggested change
Description: "Name of Datadog service level objective",
Description: "Name of the Datadog service level objective",

@adrien-f adrien-f force-pushed the datasource-slo branch 2 times, most recently from 7b53b88 to 34b43a4 on March 28, 2021 at 13:32
@adrien-f
Contributor Author

Heya, apologies for the delay in my latest update.

I've updated the tests to include extra SLOs; let me know if you're satisfied with this. It also now computes a custom ID; I had to mess with pointers, but I've tried to keep it concise. I've also rebased everything.

Cheers,

adrien-f and others added 2 commits April 29, 2021 14:00
Closes DataDog#911

Signed-off-by: Adrien FILLON <adrien.fillon@manomano.com>
zippolyte
zippolyte previously approved these changes Apr 29, 2021
Contributor

@zippolyte zippolyte left a comment


Hey @adrien-f, sorry for the delay in taking another look at this.
This looks good to me. I've taken the liberty of rebasing on master, as we've updated the Go client, which contained breaking changes. Hope that's fine.

@zippolyte
Contributor

/azp run

@azure-pipelines

Azure Pipelines successfully started running 1 pipeline(s).

@zippolyte
Contributor

/azp run

@azure-pipelines

Azure Pipelines successfully started running 1 pipeline(s).

@zippolyte zippolyte merged commit 7b826a8 into DataDog:master Apr 30, 2021
@adrien-f
Contributor Author

adrien-f commented May 1, 2021

Awesome! Thanks a lot 🚀 !

@apollocreed

Yeah, thanks a lot to both of you, much appreciated!

skarimo added a commit that referenced this pull request May 20, 2021
* Fix datadog security monitoring default rule (#956)

* Fix typo from cases to case

* Do not persist state changes when modifying bad rule case

* Update default rule case notifications

* align default rule case notification recipients when rule case tracked in terraform configuration
* clear default rule case notification recipients when rule case removed from terraform configuration

* Remove broken tests

It is not possible to use both ImportState: True and Check

* Revert "Remove broken tests"

This reverts commit fed89bf.

* Reapply test changes, re-record

* Lint

Co-authored-by: Alexandre Trufanow <alexandre.trufanow@datadoghq.com>
Co-authored-by: Thomas Hervé <thomas.herve@datadoghq.com>

* Fix timeseries tests (#979)

Don't use removed metric

* Add support for dashboard json resource (#950)

* add dashboard_json resource

* refactor and use client exported request methods

* lint and move helper functions to util

* lint

* handle diffs on computed fields

* normalize json string

* add tests and refactor

* update go client

* lint

* update tests and lint

* apply code review requested changes

* apply code review suggestions and re-record cassettes

* generate resource docs

* fmt

* use mutex to only operate on one resource at a time (#981)

* [dashboard datasource] Retry on 504 errors (#975)

* [dashboard datasource] Retry on 504 errors
* Use TimeoutRead instead of TimeoutCreate

Signed-off-by: Jared Ledvina <jared@techsmix.net>

* Changelog for 2.23.0 (#986)

* Add datadog_metric_tag_configuration resource (#960)

* initial commit for metric tag configuration resource

* fixed build update to properly check metric_type before setting include_percentiles

* added first pass on tests for tag config resource

* added first pass on example for tag config resource

* changes to upgrade client to master

* fmt example

* fixed tag config schema and use proper id for api calls

* use proper provider for test

* capitalize test func name

* enable unstable endpoints, remove prints, use proper id for state, check include_percentiles

* add remaining diff from master merge in

* cleanup comment and imports

* golint

* fix test to use proper metric_name

* fix existing tag config check

* cassettes

* updated cassettes

* updated cassettes and import test name

* removed old cassettes

* cleanup comments and unneeded changes

* dont include percentiles in example

* added customizeDiff function for include_percentiles option

* removed completed todos

* fix error message

* fix error message

* updated example metric name and generated docs

* updated cassettes

* fix cassettes for import test

* update docs

* update doc comment

* added examples and updated docs

* update description, check status code for update

* update test check to check status code

* update docs

* added error test and cassette

* updated import cassette

* updated basic cassette

* check status code when importing tag configuration

* added test invalid import ID, update cassettes

* added metrics-agg as codeowner

* changes from 1st round of PR comments, first pass at handling [] tags

* update import cassette

* update basic cassette

* change regex, change validation to use all function

* tests: use DD_INSIDE_CI env (#990)

* Add legend_layout and legend_columns to timeseries widget definition [VIZZ-1207] (#992)

* Add legend_layout and legend_columns to timeseries widget def

* Update timeseries widget tests

* Update go deps

* Record cassettes

* Remove other updates

* Remove unwanted cassettes changes

* Make docs

* Update to latest commit

* List enum values in legend_layout description

* add missing enum description

* Make legend_columns a TypeSet

* Use TypeSet properly, fix tests

* Update docs

Co-authored-by: Thomas Hervé <thomas.herve@datadoghq.com>

* Update go client (#998)

Use released version

* Prepare release (#999)

* Cleanup imports (#1000)

* cleanup imports

* cleanup imports on all resources and tests

* Add support for `global_time_target` for SLO widgets (#1003)

* add support for global_time_target

* regenerate docs

* Use goimports instead of gofmt (#1001)

* use goimports instead of fmt

* use format-only

* [Synthetics] Prevent certificate update with wrong value (#997)

* [Synthetics] Prevent certificate update with wrong value

* [Synthetics] Improve hash detection

* [Feature] Add datadog default security monitoring rule filters (#965)

* [Feature] Add datadog default security monitoring rule filters

* review

* Add group_by block to logs_metric example (#1010)

* [datasource dashboard] Retry on 502s (#1006)

Signed-off-by: Jared Ledvina <jared@techsmix.net>

* Add `noSavingResponseBody` and `noScreenshot` fields to Synthetics (#1012)

* Update go client

* [Synthetics] Add new fields to synthetics options

* [Synthetics] Remove option

* [Synthetics] Fix cassettes

* [Synthetics] Fix missing doc

* Remove read from monitor creation (#1015)

Remove the read to improve consistency and update the state directly
instead.

* Set ForceNew to true on non-updatable GCP resource fields (#1013)

* recreate GCP resource if certain fields are updated

* lint

* re-record cassettes

* Set ForceNew to true for client_email as well

* Fix panic when converting synthetics params (#1014)

If params is empty, then we fail to parse the map and it ends up in a
panic.

* [slack integration] Set account_name in state (#1019)

* [slack integration] Set account_name in state

Signed-off-by: Jared Ledvina <jared.ledvina@datadoghq.com>

* [slack integration] Get account name from state instead of arg

Signed-off-by: Jared Ledvina <jared.ledvina@datadoghq.com>

* add test case for import

Co-authored-by: Sherzod K <sherzod.karimov@datadoghq.com>

* [Synthetics] Use new models for api tests (#1005)

* [Synthetics] Use new synthetics test objects

* [Synthetics] Use new endpoint to read api tests

* [Synthetics] Record new cassettes for new api endpoint

* Update go client

* [Synthetics] Update for correct browser test

* [Synthetics] Fix review

* [Synthetics] Fix cassette

* Fix monitor restricted_roles diff (#1011)

It shouldn't check for locked, as it's still present in the state,
and thus remove the possibility of updating it.

* Add support for multistep synthetics API tests (#1007)

* [Synthetics] Add support for multistep tests

* [Synthetics] Review fixes

* [Synthetics] Update docs

* [Synthetics] Remove comments

* [Synthetics] Small fixes for synthetics (#1020)

* [Synthetics] Small fixes for synthetics

* [Synthetics] Generate docs

* [Synthetics] Fix cassettes

* [dashboards] Formula and Function support for Timeseries Widgets in Dashboard resource (#892)

* WIP

* Finish formula and function metric_query

* Implement most of FormulaAndFunctionProcessQuery

* Update client

* Use new fields

* Fix compute building

* Implement a few more fields for formulas and functions event query

* Add reading part

* Persist data_source, compute, name and indexes to terraform state

* Push changes for supporting search in formula and function event queries

* Retrieve the query

* Test skeleton

* Implement more queries, tests

* WIP group_by for EventQuery

* Start working on adding formulas and event_query group_by

* Update client

* Fix formula support

* Implement limit

* Implement group_by schema for formula and function event query

* Extend test suite for dashboard formula and functions

* Add descriptions to all schema values for formulas and functions

* Fix missing comma

* More linting

* Fix missing comma

* Sync changes

* Add missing functions

* Update client

* Exclude empty searches, update client

* Add default to sort for matching SDK behavior

* Merge with master

* [screenboard] Fix outdated description (#904)

Uptime widgets are no more.

* Add security-monitoring team to codeowners (#869)

* add security-monitoring team to codeowners

* commit review changes

Co-authored-by: Thomas Hervé <thomas.herve@datadoghq.com>

* remove trailing *

Co-authored-by: Thomas Hervé <thomas.herve@datadoghq.com>

* Stop using read handler in create/update (#905)

* Address PR review comments from Nick

* Move new cassettes to test folder

* Code cleanup and doc generation

* Revert doc changes for other resources

* Revert last doc change

* Replace timeseries formula and function methods with new generic names

* Merge with master, remove references to TimeSeries specific formula and function objects

* Modify test names for dashboard formulas and functions, regenerate cassettes

* Remove tests with old names

* Regenerate dashboard doc

* Remove MaxItems:1 restriction for group_by and compute properties

* Add comments for ValidateStringValue

* Fix reference to import in timeseries test

* Add enum validation for timeseries formula and functions

* Add two more validators

Co-authored-by: Thomas Hervé <thomas.herve@datadoghq.com>
Co-authored-by: Phillip Baker <59581508+phillip-dd@users.noreply.github.com>
Co-authored-by: skarimo <40482491+skarimo@users.noreply.github.com>
Co-authored-by: Hippolyte HENRY <zippolyte@users.noreply.github.com>
Co-authored-by: Nicholas Muesch <nicholas.muesch@datadoghq.com>

* Expose doc for SLO Corrections  (#1021)

* add an example for slo correction

* add generated md file

* correct a sentence

* update message

* update go client and fix slo (#1026)

* prepare release (#1027)

* Ignore widget IDs for diff on dashboard JSON resource (#1028)

* Ignore widget IDs for diff on dashboard JSON resource

* cassettes

* [dashboards] Formula and Function support for Query Value Widgets in Dashboard resource (#953)

* WIP

* Finish formula and function metric_query

* Implement most of FormulaAndFunctionProcessQuery

* Update client

* Use new fields

* Fix compute building

* Implement a few more fields for formulas and functions event query

* Add reading part

* Persist data_source, compute, name and indexes to terraform state

* Push changes for supporting search in formula and function event queries

* Retrieve the query

* Test skeleton

* Implement more queries, tests

* WIP group_by for EventQuery

* Start working on adding formulas and event_query group_by

* Update client

* Fix formula support

* Implement limit

* Implement group_by schema for formula and function event query

* Extend test suite for dashboard formula and functions

* Add descriptions to all schema values for formulas and functions

* Fix missing comma

* More linting

* Fix missing comma

* Sync changes

* Add missing functions

* Update client

* Exclude empty searches, update client

* Add default to sort for matching SDK behavior

* Merge with master

* [screenboard] Fix outdated description (#904)

Uptime widgets are no more.

* Add security-monitoring team to codeowners (#869)

* add security-monitoring team to codeowners

* commit review changes

Co-authored-by: Thomas Hervé <thomas.herve@datadoghq.com>

* remove trailing *

Co-authored-by: Thomas Hervé <thomas.herve@datadoghq.com>

* Stop using read handler in create/update (#905)

* Address PR review comments from Nick

* Move new cassettes to test folder

* Code cleanup and doc generation

* Revert doc changes for other resources

* Revert last doc change

* Replace timeseries formula and function methods with new generic names

* Merge with master, remove references to TimeSeries specific formula and function objects

* Modify test names for dashboard formulas and functions, regenerate cassettes

* Remove tests with old names

* Regenerate dashboard doc

* Remove MaxItems:1 restriction for group_by and compute properties

* Implement formulas and functions support for query value widgets

* Implement validator for search.query

* Pull in latest validators

* Remove duplicated variable

Co-authored-by: Thomas Hervé <thomas.herve@datadoghq.com>
Co-authored-by: Phillip Baker <59581508+phillip-dd@users.noreply.github.com>
Co-authored-by: skarimo <40482491+skarimo@users.noreply.github.com>
Co-authored-by: Hippolyte HENRY <zippolyte@users.noreply.github.com>
Co-authored-by: Nicholas Muesch <nicholas.muesch@datadoghq.com>

* [msl] Add reflow_type property for dashboards (#1017)

* Add dashboard reflow_type property

* update reflow type doc

* Update dashboard doc

* [dashboards] Formula and Function support for Toplist Widgets in Dashboard resource (#951)

* WIP

* Finish formula and function metric_query

* Implement most of FormulaAndFunctionProcessQuery

* Update client

* Use new fields

* Fix compute building

* Implement a few more fields for formulas and functions event query

* Add reading part

* Persist data_source, compute, name and indexes to terraform state

* Push changes for supporting search in formula and function event queries

* Retrieve the query

* Test skeleton

* Implement more queries, tests

* WIP group_by for EventQuery

* Start working on adding formulas and event_query group_by

* Update client

* Fix formula support

* Implement limit

* Implement group_by schema for formula and function event query

* Extend test suite for dashboard formula and functions

* Add descriptions to all schema values for formulas and functions

* Fix missing comma

* More linting

* Fix missing comma

* Sync changes

* Add missing functions

* Update client

* Exclude empty searches, update client

* Add default to sort for matching SDK behavior

* Merge with master

* [screenboard] Fix outdated description (#904)

Uptime widgets are no more.

* Add security-monitoring team to codeowners (#869)

* add security-monitoring team to codeowners

* commit review changes

Co-authored-by: Thomas Hervé <thomas.herve@datadoghq.com>

* remove trailing *

Co-authored-by: Thomas Hervé <thomas.herve@datadoghq.com>

* Stop using read handler in create/update (#905)

* Address PR review comments from Nick

* Move new cassettes to test folder

* Code cleanup and doc generation

* Revert doc changes for other resources

* Revert last doc change

* Replace timeseries formula and function methods with new generic names

* Merge with master, remove references to TimeSeries specific formula and function objects

* Modify test names for dashboard formulas and functions, regenerate cassettes

* Remove tests with old names

* Regenerate dashboard doc

* Implement ability to create toplists with formulas and functions

* Implement tests for toplist formulas and functions

* Prevent crashes when search.query is an empty string

* Fix search.query empty strings causing perpetual diffs

* Implement validator for search.query

* Pull out terraform search

* Formatting

* Remove duplicate declarations in tests

* Remove duplicate datadogDashboardFormulaConfig

* Update toplist test for new search.query syntax

* Update toplist test for new search.query syntax

* Record new cassettes for formulas and functions toplist

* Record new cassettes for classic toplist

Co-authored-by: Thomas Hervé <thomas.herve@datadoghq.com>
Co-authored-by: Phillip Baker <59581508+phillip-dd@users.noreply.github.com>
Co-authored-by: skarimo <40482491+skarimo@users.noreply.github.com>
Co-authored-by: Hippolyte HENRY <zippolyte@users.noreply.github.com>
Co-authored-by: Nicholas Muesch <nicholas.muesch@datadoghq.com>

* Fix docs with high level of nesting and mark ID attribute as read only (#1036)

* Fix docs with high level of nesting

* include ID fix

* 3rd party

* Properly mark `active`/`disabled` fields as readonly to avoid diffs (#1034)

* Properly mark `active`/`disabled` fields as readonly to avoid diffs

* fix test

* record

* docs

* [monitor resource] Retry on 504 (#1038)

Signed-off-by: Jared Ledvina <jared.ledvina@datadoghq.com>

* datadog_metric_tag_configuration remove blocking tests (#1037)

* remove broken tests

* removed old cassettes

* [dashboards] Implement formulas and functions for geomap widgets (#1043)

* Implement security queries and formulas and functions for geomap widgets

* Remove security query track for geomap widgets

* Update cassettes

* Add new tests for security_signals track for geomap

* Generate new cassettes

* Generate new cassettes

* Re-record

Co-authored-by: Thomas Hervé <thomas.herve@datadoghq.com>

* Update go-client and use new api models (#1041)

* update go-client and use new api models

* Bump datadog api client to latest release

* use correct Destroy method for slack import test (#1032)

* [Synthetics] Add support for icmp tests (#1030)

* Mark AWS account as non existent if GET returns 400 when AWS integration not installed (#1047)

* Add SLO datasources (#931)

* Add SLO datasources

Closes #911

Signed-off-by: Adrien FILLON <adrien.fillon@manomano.com>

* rebase on master

* remake docs

* fix up tests

* record

Co-authored-by: Hippolyte HENRY <hippolyte.henry@datadoghq.com>

* [Dashboards] Add new properties to group widget, note widget and image widget (#1044)

* Add is_column_break layout property

* Add new group properties

* Add image new options

* Add new note properties

* Improve group background color description

* update go client

* Fix linter error

* Generate new cassettes

* make doc

* Add support for `setCookie`, `dnsServerPort`, `allowFailure` and `isCritical` fields for Synthetics tests (#1052)

* Update go client

* [Synthetics] Add support for new fields

* [Synthetics] Remove useless request options for browser tests

* [Synthetics] Regenerate docs

* Update monitor `critical` threshold documentation (#1055)

* Update monitor.md

Under the schema for `monitor_thresholds`, `critical` is listed as the "recovery threshold" rather than "threshold".

* Update schema

Co-authored-by: Thomas Hervé <thomas.herve@datadoghq.com>

* Add monitors datasource for multiple monitors (#1048)

* Add monitors datasource for multiple monitors

* docs

* review

* Use code formatting in description for attribute remapper (#1061)

* Use custom transport for HTTPClient (#1054)

* use custom transport for HTTPClient

* use response retry header for retry duration

* put http retry behind provider configuration option

* handle nil transport

* remove custom client from test provider

* add test provider client wrapper

* lint

* Handle nil responses

* add docs

* fix clientv2 transport

* Apply code review suggestions

* make docs

* fix flaky slo test

Co-authored-by: Thomas Hervé <thomas.herve@datadoghq.com>

* Use mutex to sync logs custom pipeline (#1069)

* Update Datadog api go client (#1064)

* Update go client to use latest release

Co-authored-by: Thomas Hervé <thomas.herve@datadoghq.com>

* Prepare release of 2.26.0 (#1071)

* Prepare 2.26.0 release

* fix failing tests

* make docs

* remove deprecated field options from synthetics

* re-record slo  datasource cassettes

Co-authored-by: Alexej Tessaro <rymir@users.noreply.github.com>
Co-authored-by: Alexandre Trufanow <alexandre.trufanow@datadoghq.com>
Co-authored-by: Thomas Hervé <thomas.herve@datadoghq.com>
Co-authored-by: Jared Ledvina <jared.ledvina@datadoghq.com>
Co-authored-by: Hippolyte HENRY <zippolyte@users.noreply.github.com>
Co-authored-by: Eric Fraese <eric.fraese@datadoghq.com>
Co-authored-by: Jiri Kuncar <jiri.kuncar@datadoghq.com>
Co-authored-by: Abe Rubenstein <sighrobot@users.noreply.github.com>
Co-authored-by: Romain Berger <romain.berger@datadoghq.com>
Co-authored-by: David Leonard <david.leonard@datadoghq.com>
Co-authored-by: Phillip Baker <59581508+phillip-dd@users.noreply.github.com>
Co-authored-by: Nicholas Muesch <nicholas.muesch@datadoghq.com>
Co-authored-by: Zhengshi Zhao <zhengshizhao@users.noreply.github.com>
Co-authored-by: David Robert-Ansart <david.robertansart@datadoghq.com>
Co-authored-by: Adrien F <adrien.fillon@gmail.com>
Co-authored-by: Hippolyte HENRY <hippolyte.henry@datadoghq.com>
Co-authored-by: Toj Fowler <toj@jasonfowler.com>
Co-authored-by: Murukesh Mohanan <83797086+murukesh-mohanan-paidy@users.noreply.github.com>

Successfully merging this pull request may close these issues.

Please add datadog_service_level_objective data source to provider