Revert recent change and clean up fields #48
Conversation
Walkthrough: Bumps the release VERSION and applies widespread renames and schema changes to project fields.
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60–90 minutes
Pre-merge checks and finishing touches: ❌ Failed checks (1 warning, 1 inconclusive); ✅ Passed checks (1 passed)
Actionable comments posted: 0
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
internal/provider/resource_openai_project_test.go (1)
81-90: Schema mismatch: `usage_limits` block not defined in resource schema.

The test at lines 85-86 references `usage_limits.0.max_budget` and `usage_limits.0.max_tokens`, but the resource schema (lines 36-57 in `resource_openai_project.go`) only defines `name`, `created_at`, `status`, and `archived_at`. The `usage_limits` block is defined in the data source schema but is missing from the resource schema. This test will fail if executed.

Either add the `usage_limits` schema definition to the resource, or remove/update this test to match the current resource schema.
🧹 Nitpick comments (4)
docs/data-sources/organization_users.md (1)
49-49: Consider adding a brief description for `users[].added_at`.

`added_at` is correctly exposed in the nested users schema, but unlike the single-user doc it lacks a description. Consider mirroring something like "The Unix timestamp when the user was added to the organization" for consistency.

docs/data-sources/project.md (1)
31-32: Project data source docs align with `name`, `created_at`, and `usage_limits`.

The example now uses `production.name`, and the Read-Only fields (`created_at`, `name`, `usage_limits` with its nested limits) are coherent and match the described schema. You might optionally add short descriptions to each usage limit field (e.g., units/semantics), but the structure itself looks correct. Also applies to: 48-61
internal/provider/resource_openai_project.go (2)
17-21: Consider removing the panic recovery wrapper.

This `defer`/`recover` pattern in a resource definition function can mask programming errors and make debugging harder. If a panic occurs here, it likely indicates a bug that should surface during development rather than being silently logged. The resource will still be returned (potentially in an invalid state) after recovering.

```diff
 func resourceOpenAIProject() *schema.Resource {
-	defer func() {
-		if r := recover(); r != nil {
-			log.Printf("[WARN] Recovered from panic in resourceOpenAIProject: %v", r)
-		}
-	}()
-
 	resource := &schema.Resource{
```
117-132: LGTM! Timestamp conversion is correct.

The nil checks and RFC3339 formatting are appropriate. The same pattern is duplicated in `resourceOpenAIProjectImport` (lines 205-226); consider extracting a helper if this pattern expands further.
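A helper along these lines could de-duplicate that conversion. The function name and the `*int64` input shape are assumptions for illustration, not the provider's actual code:

```go
package main

import (
	"fmt"
	"time"
)

// formatUnixRFC3339 is a hypothetical helper: it converts an optional
// Unix-seconds timestamp into an RFC3339 string, mapping nil to "".
func formatUnixRFC3339(ts *int64) string {
	if ts == nil {
		return ""
	}
	return time.Unix(*ts, 0).UTC().Format(time.RFC3339)
}

func main() {
	created := int64(1700000000)
	fmt.Println(formatUnixRFC3339(&created)) // 2023-11-14T22:13:20Z
	fmt.Printf("%q\n", formatUnixRFC3339(nil))
}
```

Both the read and import paths could then call the one helper instead of repeating the nil check and format string.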
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (17)
- Makefile (1 hunks)
- docs/data-sources/organization_user.md (1 hunks)
- docs/data-sources/organization_users.md (1 hunks)
- docs/data-sources/project.md (2 hunks)
- docs/data-sources/projects.md (1 hunks)
- docs/resources/organization_user.md (1 hunks)
- docs/resources/project.md (2 hunks)
- examples/data-sources/openai_project/data-source.tf (1 hunks)
- examples/resources/openai_project/resource.tf (1 hunks)
- internal/client/client.go (8 hunks)
- internal/provider/data_source_openai_organization_user.go (2 hunks)
- internal/provider/data_source_openai_organization_users.go (3 hunks)
- internal/provider/data_source_openai_project.go (3 hunks)
- internal/provider/data_source_openai_projects.go (2 hunks)
- internal/provider/resource_openai_organization_user.go (4 hunks)
- internal/provider/resource_openai_project.go (6 hunks)
- internal/provider/resource_openai_project_test.go (4 hunks)
🧰 Additional context used
🧬 Code graph analysis (4)
internal/provider/data_source_openai_organization_users.go (2)
- internal/provider/resource_openai_project_user.go (1): resourceOpenAIProjectUser (18-58)
- internal/provider/data_source_openai_project_users.go (1): dataSourceOpenAIProjectUsers (14-83)
internal/provider/data_source_openai_organization_user.go (2)
- internal/provider/data_source_openai_project_user.go (1): dataSourceOpenAIProjectUserRead (139-218)
- internal/provider/resource_openai_project_user.go (1): resourceOpenAIProjectUser (18-58)
internal/provider/data_source_openai_projects.go (2)
- internal/provider/data_source_openai_invites.go (1): dataSourceOpenAIInvites (17-81)
- internal/provider/data_source_openai_project_users.go (2): dataSourceOpenAIProjectUsers (14-83), dataSourceOpenAIProjectUsersRead (87-184)
internal/client/client.go (4)
- internal/provider/provider.go (1): OpenAIClient (27-31)
- internal/provider/resource_openai_fine_tuning_job.go (1): Project (47-51)
- internal/provider/resource_openai_project_user.go (2): resourceOpenAIProjectUser (18-58), resourceOpenAIProjectUserUpdate (259-348)
- internal/provider/data_source_openai_project_users.go (1): dataSourceOpenAIProjectUsers (14-83)
🔇 Additional comments (33)
Makefile (1)
10-10: Version bump to 1.2.0 looks good. Release and install paths will consistently pick up the final version; no issues spotted here.
examples/data-sources/openai_project/data-source.tf (1)
16-16: Example now correctly uses `project.name`.

The local `project_name` referencing `data.openai_project.production.name` aligns with the documented schema; looks consistent.

internal/provider/data_source_openai_organization_user.go (1)
41-45: `added_at` field wiring is consistent and clear.

Adding `added_at` as a computed Unix timestamp and populating it from `user.AddedAt` in the read path matches the docs and related resources; no functional concerns. Also applies to: 117-118
docs/data-sources/organization_user.md (1)
25-25: Doc updated to `added_at`; matches data source schema.

The Read-Only `added_at` field description lines up with the Go schema and behavior.

docs/resources/organization_user.md (1)
44-44: Resource doc `added_at` aligns with schema.

The Read-Only `added_at` field and description are consistent with the resource and related data source.

examples/resources/openai_project/resource.tf (1)
3-3: Examples updated to use `name` attribute correctly.

Using `name` on the `openai_project` resources matches the current resource schema and docs; examples look good. Also applies to: 8-8
internal/provider/resource_openai_organization_user.go (2)
49-53: LGTM! Field rename aligns with API revert.

The schema change from `created` to `added_at` is consistent with the OpenAI API rollback mentioned in the PR objectives. The field type (`TypeInt`) and computed nature remain appropriate for a Unix timestamp.
101-103: LGTM! Consistent field population across CRUD operations.

The `added_at` field is correctly populated from `user.AddedAt` in Create, Read, and Update operations with proper error handling. Also applies to: 144-146, 176-178
internal/provider/data_source_openai_organization_users.go (2)
61-65: LGTM! Schema field rename consistent with resource.

The `added_at` field definition matches the corresponding resource schema and the project users data source pattern shown in the relevant code snippets.
103-108: LGTM! Both read paths correctly populate `added_at`.

The single-user retrieval (lines 103-108) and paginated list (lines 147-155) paths consistently populate `added_at` from `user.AddedAt`, maintaining parity between both code paths. Also applies to: 147-155
internal/provider/resource_openai_project.go (3)
37-56: LGTM! Schema changes align with API revert.

The field renames (`title` → `name`) and timestamp type changes (`TypeInt` → `TypeString` with RFC3339 formatting) correctly reflect the OpenAI API rollback. Note this is a breaking change for existing Terraform state, but appropriate given the context.
70-83: LGTM! Create operation correctly uses the renamed `name` field.
146-157: LGTM! Update operation correctly uses the renamed `name` field. The comment about POST vs PATCH is helpful documentation for maintainers.
docs/resources/project.md (2)
15-31: LGTM! Example usage updated to reflect schema changes.

The examples correctly demonstrate using the `name` field instead of the previous `title` field.
36-49: LGTM! Schema documentation accurately reflects the changes.

The required `name` field and the String-typed timestamp fields (`archived_at`, `created_at`) are correctly documented.

internal/provider/resource_openai_project_test.go (3)
23-28: LGTM! Test configurations updated to use simplified helper.

The test steps correctly use the updated `testAccResourceOpenAIProjectBasic` helper with only the `name` parameter. Also applies to: 52-56, 59-63
135-141: LGTM! Helper function simplified to match new schema.

The `testAccResourceOpenAIProjectBasic` helper correctly generates Terraform config with only the `name` attribute.
143-154: Same concern: `usage_limits` block may not exist in schema.

This helper generates config with a `usage_limits` block that may not be defined in the resource schema. See earlier comment about potential schema mismatch.

docs/data-sources/projects.md (1)
45-48: Documentation correctly reflects the schema changes.

The field renames (`created` → `created_at`, `title` → `name`) align with the implementation changes in `data_source_openai_projects.go` and the API rollback mentioned in the PR description.

internal/provider/data_source_openai_projects.go (2)
48-62: Schema changes correctly align with the API response structure.

The field renames (`name`, `created_at`) and removals are consistent with the `ProjectResponse` struct defined in `data_source_openai_project.go`. The `TypeInt` for `created_at` correctly matches the `CreatedAt int` field type.
197-204: Data mapping correctly populates the renamed fields.

The project map keys (`name`, `created_at`) match the schema definition, and the values are properly sourced from the `ProjectResponse` struct fields.

internal/provider/data_source_openai_project.go (3)
16-30: Type definitions align with the new API response format.

The `ProjectResponse` struct with `Name`, `CreatedAt`, and nested `UsageLimits` correctly models the reverted API response. The JSON tags match the expected field names.

Note: This file defines its own `ProjectResponse` type (used for the single-project data source), while `internal/client/client.go` defines a separate `Project` type (used for CRUD operations). Both have been updated consistently with `Name` and `CreatedAt` fields.
64-87: New `usage_limits` schema correctly exposes project limits.

The nested schema properly models the `UsageLimits` struct with appropriate types (`TypeFloat` for `max_monthly_dollars`, `TypeInt` for the others). This adds useful visibility into project usage constraints.
169-189: Data population correctly handles the new schema structure.

The field assignments properly map the API response to Terraform state. The `usage_limits` is correctly populated as a single-element slice, which is the expected pattern for `TypeList` with a nested `Resource` element.

internal/client/client.go (9)
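The single-element-slice pattern described here can be illustrated with a plain-Go sketch; the struct and field names below are assumptions based on the limits discussed in this review, not the provider's exact types:

```go
package main

import "fmt"

// UsageLimits is a hypothetical stand-in for the nested limits struct.
type UsageLimits struct {
	MaxMonthlyDollars float64
	MaxBudget         int64
	MaxTokens         int64
}

// flattenUsageLimits wraps the nested block in a one-element slice of maps,
// which is the shape a schema.TypeList with a nested Resource expects.
func flattenUsageLimits(u *UsageLimits) []interface{} {
	if u == nil {
		return nil // absent block: zero elements, not an empty map
	}
	return []interface{}{
		map[string]interface{}{
			"max_monthly_dollars": u.MaxMonthlyDollars,
			"max_budget":          u.MaxBudget,
			"max_tokens":          u.MaxTokens,
		},
	}
}

func main() {
	limits := &UsageLimits{MaxMonthlyDollars: 100, MaxBudget: 50, MaxTokens: 100000}
	fmt.Println(len(flattenUsageLimits(limits))) // 1
	fmt.Println(flattenUsageLimits(nil) == nil)  // true
}
```

Returning `nil` for an absent block (rather than a slice containing an empty map) keeps Terraform from showing a spurious empty nested block in state.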
144-156: Project struct updated with expanded fields and renames.

The struct correctly reflects the reverted API format with `Name` replacing `Title` and `CreatedAt` replacing `Created`. The addition of `OrganizationID`, `IsDefault`, `BillingMode`, `APIKeys`, `RateLimits`, and `Users` fields expands the model to support more project details.

Note: `CreatedAt` is `*int64` here (nullable), while `ProjectResponse` in `data_source_openai_project.go` uses `int`. This difference is intentional since the provider's `ProjectResponse` is used for API responses where `created_at` is always present, while this `Project` struct is used for operations where the field might be optional.
198-202: CreateProjectRequest correctly uses `name` field.

The request struct aligns with the reverted API that expects `name` instead of `title`.
569-577: User struct correctly renames `Created` to `AddedAt`.

This aligns with the `added_at` field used in the `data_source_openai_project_users.go` schema (visible in the relevant code snippets at line 43). The naming is now consistent across the codebase.
579-586: UsersResponse simplified to use `User` type directly.

The `Data` field now uses `[]User` instead of a separate `OrganizationUser` type, reducing type duplication.
662-672: FindUserByEmail correctly returns `*User` type.

The method now returns the unified `User` type instead of a separate wrapper type.
693-700: GetUser correctly unmarshals into `User` type.

The method properly parses the API response into the updated `User` struct.
721-728: UpdateUserRole correctly returns `*User` type.

The method properly handles the updated `User` struct in its response.
1077-1086: CreateProject correctly uses `name` parameter.

The method signature, request body, and debug logging all consistently use `name`, aligning with the reverted API format.
1132-1137: UpdateProject correctly uses `name` parameter.

The method signature and request body consistently use `name`, matching the reverted API format.
Actionable comments posted: 0
🧹 Nitpick comments (2)
modules/projects/README.md (1)
82-90: Verify `organization_id` is actually defined and wired in the module.

README now documents an `organization_id` input, but the variables summary only mentions removals and doesn't clearly show a matching variable block or its usage in the module. Please double-check that:
- `modules/projects/variables.tf` defines `variable "organization_id"` with the documented default, and
- the module actually passes it through to the provider/client where needed.

Otherwise this row will be misleading and consumers won't be able to set it.
internal/client/client.go (1)
556-573: User model + user APIs re-aligned; double-check `added_at` and query params.

The new `User`/`UsersResponse` types and the following methods all look logically sound:
- `ListUsers` builds `after`, `limit`, and repeated `emails` query params, unmarshalling into `UsersResponse{Data []User}`.
- `FindUserByEmail` calls `ListUsers("", 1, []string{email})` and returns the first result after a case-insensitive equality check, which is safe (copy to local + `&user`).
- `GetUser` and `UpdateUserRole` now unmarshal directly into `User` and return `*User`.
- `User.AddedAt` models the timestamp via the `added_at` JSON field.

Two things worth explicitly verifying against the actual org users API:
- The list/get/update responses really expose `added_at` (and not `created` or another name) and match the `User` struct shape.
- The list endpoint accepts `emails` as a repeated query parameter exactly as used here.

If those assumptions hold (and the provider tests around organization users pass), this refactor removes indirection cleanly and should behave well.

Also applies to: 575-623, 634-659, 662-687, 690-715
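A repeated query parameter like the `emails` key above can be built with `net/url`'s `Values.Add`; this sketch mirrors the parameter names from the review discussion and is not a verified API contract:

```go
package main

import (
	"fmt"
	"net/url"
	"strconv"
)

// buildListUsersQuery encodes the query string: Set for scalar params,
// Add for the repeated "emails" key (Add keeps every value; Set would overwrite).
func buildListUsersQuery(after string, limit int, emails []string) string {
	q := url.Values{}
	if after != "" {
		q.Set("after", after)
	}
	if limit > 0 {
		q.Set("limit", strconv.Itoa(limit))
	}
	for _, e := range emails {
		q.Add("emails", e)
	}
	return q.Encode()
}

func main() {
	fmt.Println(buildListUsersQuery("", 1, []string{"a@example.com", "b@example.com"}))
	// emails=a%40example.com&emails=b%40example.com&limit=1
}
```

`Values.Encode` emits keys in sorted order and percent-escapes values, so the `@` in each address becomes `%40` and every `emails` entry survives in the output.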
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (4)
- internal/client/client.go (7 hunks)
- modules/projects/README.md (1 hunks)
- modules/projects/main.tf (1 hunks)
- modules/projects/variables.tf (0 hunks)
💤 Files with no reviewable changes (1)
- modules/projects/variables.tf
🧰 Additional context used
🧬 Code graph analysis (1)
internal/client/client.go (1)
- internal/provider/provider.go (1): OpenAIClient (27-31)
🔇 Additional comments (2)
modules/projects/main.tf (1)
12-15: Resource now correctly uses `name` instead of `title`.

The `openai_project` resource wiring to `name = var.name` is consistent with the rest of the project/docs changes and the outputs (id, name, created_at), with no obvious correctness issues.

internal/client/client.go (1)
142-150: Project model and create/update payloads now use `name` and `created_at`.

The `Project` struct and `CreateProject`/`UpdateProject` now rely on:
- JSON field `name` (no `title`), and
- JSON field `created_at` on the response struct.

The request bodies send `{ "name": <name> }` to `/v1/organization/projects` and `/v1/organization/projects/{id}` and unmarshal back into this struct, which is internally consistent.

Given the PR's goal ("revert recent change and clean up fields") and the recent churn in the upstream API, please re-confirm against the current projects endpoint that:
- the server indeed expects `name` (not `title`) in the request body, and
- the response still returns `created_at` (or that the tag matches whatever field is actually emitted).

If your acceptance tests against the live admin API pass for create/get/update/delete, this should be safe to ship.
Also applies to: 1064-1094, 1119-1145
Actionable comments posted: 0
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
internal/provider/data_source_openai_project.go (1)
89-92: Remove debug print statements from production code.

These `fmt.Printf` debug statements will output to stdout in production. Use the `tflog` package for proper Terraform provider logging that respects log level settings.

```diff
-	fmt.Printf("Getting project with ID: %s\n", projectID)
-	fmt.Printf("Using URL: %s\n", strings.Replace(url, client.APIURL, "", 1))
-	fmt.Printf("OpenAI client config: API URL=%s, Organization ID=%s\n", client.APIURL, client.OrganizationID)
-	fmt.Printf("Making API request: GET %s\n", url)
+	tflog.Debug(ctx, "Getting project", map[string]interface{}{
+		"project_id": projectID,
+		"url":        strings.Replace(url, client.APIURL, "", 1),
+	})
```

internal/client/client.go (1)
2457-2461: Corrupted model name key in defaultRateLimits map.

The key `"ft:\t-0613"` appears to be corrupted; the `\t` is a tab character. This should likely be `"ft:gpt-4-0613"` or a similar model name.

```diff
-	"ft:	-0613": {
+	"ft:gpt-4-0613": {
 		MaxRequestsPer1Minute:   10000,
 		MaxTokensPer1Minute:     300000,
 		Batch1DayMaxInputTokens: 30000000,
 	},
```
🧹 Nitpick comments (4)
internal/provider/data_source_openai_project.go (1)
16-22: Consider using `int64` for timestamp field.

The `CreatedAt` field is typed as `int`, which may cause issues on 32-bit systems or for timestamps beyond 2038. Unix timestamps should typically use `int64` for future-proofing.

```diff
 type ProjectResponse struct {
 	ID        string `json:"id"`
 	Object    string `json:"object"`
 	Name      string `json:"name"`
-	CreatedAt int    `json:"created_at"`
+	CreatedAt int64  `json:"created_at"`
 	Status    string `json:"status"`
 }
```

internal/client/client.go (1)
815-858: Excessive debug logging in production code.

The `doRequest` function contains extensive debug logging with `fmt.Printf` statements that will output to stdout in production. This significantly increases noise and may expose sensitive information. Consider:
- Using an environment variable-controlled debug mode
- Removing most of these statements for production
- Using a proper logging library with log levels
internal/provider/resource_openai_rate_limit.go (2)
587-601: Set errors are silently ignored.

The `setRateLimitState` helper function uses the blank identifier `_` to ignore errors from `d.Set()` calls. While these rarely fail, silently ignoring them can mask issues. Consider logging or accumulating errors.

```diff
 func setRateLimitState(d *schema.ResourceData, rateLimit *client.RateLimit) {
 	// Set all values from the rate limit object to state
-	_ = d.Set("model", rateLimit.Model)
-	_ = d.Set("max_requests_per_minute", rateLimit.MaxRequestsPer1Minute)
-	_ = d.Set("max_tokens_per_minute", rateLimit.MaxTokensPer1Minute)
-	_ = d.Set("max_images_per_minute", rateLimit.MaxImagesPer1Minute)
-	_ = d.Set("batch_1_day_max_input_tokens", rateLimit.Batch1DayMaxInputTokens)
-	_ = d.Set("max_audio_megabytes_per_1_minute", rateLimit.MaxAudioMegabytesPer1Minute)
-	_ = d.Set("max_requests_per_1_day", rateLimit.MaxRequestsPer1Day)
+	d.Set("model", rateLimit.Model)
+	d.Set("max_requests_per_minute", rateLimit.MaxRequestsPer1Minute)
+	d.Set("max_tokens_per_minute", rateLimit.MaxTokensPer1Minute)
+	d.Set("max_images_per_minute", rateLimit.MaxImagesPer1Minute)
+	d.Set("batch_1_day_max_input_tokens", rateLimit.Batch1DayMaxInputTokens)
+	d.Set("max_audio_megabytes_per_1_minute", rateLimit.MaxAudioMegabytesPer1Minute)
+	d.Set("max_requests_per_1_day", rateLimit.MaxRequestsPer1Day)

 	// Keep the current ID
 	if rateLimit.ID != "" {
 		d.SetId(rateLimit.ID)
 	}
 }
```
604-611: Remove unused function `getAPIKeyType`.

This function is not called anywhere in the codebase and can be safely removed to reduce dead code.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (9)
- docs/data-sources/project.md (2 hunks)
- docs/resources/rate_limit.md (1 hunks)
- examples/resources/openai_rate_limit/resource.tf (1 hunks)
- internal/client/client.go (10 hunks)
- internal/provider/data_source_openai_project.go (3 hunks)
- internal/provider/resource_openai_project_test.go (3 hunks)
- internal/provider/resource_openai_rate_limit.go (2 hunks)
- modules/projects/README.md (1 hunks)
- modules/projects/outputs.tf (0 hunks)
💤 Files with no reviewable changes (1)
- modules/projects/outputs.tf
✅ Files skipped from review due to trivial changes (1)
- docs/resources/rate_limit.md
🧰 Additional context used
🧬 Code graph analysis (2)
internal/provider/resource_openai_rate_limit.go (2)
- internal/client/client.go (1): Error (192-196)
- internal/provider/provider.go (1): GetOpenAIClientWithAdminKey (83-110)

internal/client/client.go (1)
- internal/provider/provider.go (1): OpenAIClient (27-31)
🔇 Additional comments (9)
modules/projects/README.md (1)
89-89: LGTM!

The addition of `organization_id` as an optional input with an empty default value is appropriate for the module's public interface.

internal/provider/resource_openai_project_test.go (1)
111-116: LGTM!

The test helper function correctly reflects the simplified resource configuration with only the `name` attribute, consistent with the resource schema changes.

internal/provider/data_source_openai_project.go (1)
41-55: LGTM on field renames.

The schema correctly reflects the API field renames (`name` and `created_at`), with appropriate types and descriptions.

internal/client/client.go (2)
556-564: LGTM on User struct changes.

The simplified `User` struct with the `AddedAt` field aligns with the API changes and removes the unnecessary intermediate type conversions.
1064-1072: LGTM on CreateProject signature change.

The function now correctly uses `name` instead of `title`, matching the reverted API format.

docs/data-sources/project.md (1)
31-32: LGTM!

Documentation correctly reflects the field renames (`name` and `created_at`) and provides accurate example usage. Also applies to: 48-50

examples/resources/openai_rate_limit/resource.tf (1)
examples/resources/openai_rate_limit/resource.tf (1)
1-37: LGTM!The examples comprehensively demonstrate different rate limit configurations including per-model limits, daily limits, image limits, and batch processing limits. The
ignore_rate_limit_warningusage in the batch example is helpful for users understanding the rate limit lifecycle behavior.internal/provider/resource_openai_rate_limit.go (2)
617-637: LGTM on simplified delete logic.

The delete function correctly handles the OpenAI constraint that rate limits cannot be truly deleted. The graceful handling of permission errors (clearing state instead of failing) is appropriate.

292-316: LGTM on read fallback logic.

The read path correctly attempts to fetch by model name first, then falls back to the rate limit ID, with appropriate error handling for not-found and permission scenarios.
Actionable comments posted: 0
🧹 Nitpick comments (2)
internal/provider/resource_openai_rate_limit.go (2)
298-306: State drift risk with permission error handling.

Returning a warning while preserving state on permission errors could lead to state drift if the actual API values differ from Terraform state. Users might make decisions based on stale data. However, this graceful degradation appears intentional and is consistent with similar handling throughout the resource.

Consider documenting this behavior in the resource description to set user expectations that with insufficient permissions, state may not reflect actual API values.
623-627: Behavioral asymmetry in permission error handling.

Delete returns `nil` and clears state on permission errors (line 626), while Create/Update operations return warnings and preserve state. This means a resource may remain in OpenAI after Terraform believes it was deleted. While this allows Terraform operations to proceed despite permission constraints, the inconsistency could confuse users.

Consider documenting this behavior or making the permission error handling more consistent across CRUD operations. If the current behavior is intentional, add a comment explaining why Delete differs from Create/Update.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (3)
- docs/resources/rate_limit.md (1 hunks)
- examples/resources/openai_rate_limit/resource.tf (1 hunks)
- internal/provider/resource_openai_rate_limit.go (2 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
- examples/resources/openai_rate_limit/resource.tf
🧰 Additional context used
🧬 Code graph analysis (1)
internal/provider/resource_openai_rate_limit.go (2)
- internal/client/client.go (1): Error (192-196)
- internal/provider/provider.go (1): GetOpenAIClientWithAdminKey (83-110)
🔇 Additional comments (2)
docs/resources/rate_limit.md (1)
13-50: Comprehensive examples added. The Example Usage section demonstrates various rate limit configurations effectively, covering language models, image generation, and batch processing scenarios with appropriate field usage.
internal/provider/resource_openai_rate_limit.go (1)
312-339: LGTM! State management is correct, with consistent error handling for all field assignments.
OpenAI confirmed the API format change was accidental and has been rolled back.
Summary by CodeRabbit
Chores
Documentation
Examples