
Conversation


@Fodoj Fodoj commented Dec 4, 2025

OAI confirmed API format change was accidental and rolled back

Summary by CodeRabbit

  • Chores

    • Release version set to 1.2.0.
    • Module inputs removed: is_default, rate_limits, users; organization_id added. project_usage_limits output removed.
  • Documentation

    • Project fields renamed: title → name; created → created_at (timestamp format updated). Organization user timestamp renamed: created → added_at.
    • Rate limit docs updated and examples added; ignore_rate_limit_warning removed.
  • Examples

    • Added example rate-limit configurations and updated project examples to use name.


@Fodoj Fodoj self-assigned this Dec 4, 2025

coderabbitai bot commented Dec 4, 2025

Walkthrough

Bumps release VERSION and applies widespread renames and schema changes: project fields (title → name, created → created_at), organization user timestamp (created → added_at), client model refactors, project timestamp type/format changes, module inputs/outputs removals, and matching docs/examples/tests updates.

Changes

Cohort / File(s) Summary
Version Bump
Makefile
VERSION changed from 1.2.0-rc.1 to 1.2.0.
Docs — Organization User
docs/data-sources/organization_user.md, docs/data-sources/organization_users.md, docs/resources/organization_user.md
Field renamed: created → added_at (Unix timestamp when the user was added).
Docs — Project
docs/data-sources/project.md, docs/data-sources/projects.md, docs/resources/project.md
Renames: title → name, created → created_at; removed archived_at and is_initial; adjusted timestamp types/descriptions.
Examples
examples/data-sources/openai_project/data-source.tf, examples/resources/openai_project/resource.tf, examples/resources/openai_rate_limit/resource.tf
Locals/resources updated to use name instead of title; added example openai_rate_limit resources.
Client — Models & API
internal/client/client.go
Project/User model reshapes (name, created_at, organization_id; User with added_at), Create/Update project signatures now use name; DoRequest sets OpenAI-Organization header when present (a minimal sketch follows this table); rate-limit API URL/lookup logic refactored.
Provider — Data Sources (Organization Users)
internal/provider/data_source_openai_organization_user.go, internal/provider/data_source_openai_organization_users.go
Schema and population logic: replace created with added_at (int) and populate from user.AddedAt for single and list flows.
Provider — Data Sources (Projects)
internal/provider/data_source_openai_project.go, internal/provider/data_source_openai_projects.go
Schema updates: title → name, created → created_at; removed archived_at and is_initial; read mapping uses project.Name/project.CreatedAt.
Provider — Resources (Organization User & Project) and Tests
internal/provider/resource_openai_organization_user.go, internal/provider/resource_openai_project.go, internal/provider/resource_openai_project_test.go
Organization user resource: created → added_at. Project resource: title → name, timestamps converted/formatted (ints → RFC3339 strings for created_at/archived_at), is_initial removed; tests updated to use name and remove description/usage_limits assertions.
Provider — Rate Limit Resource
internal/provider/resource_openai_rate_limit.go
Removed ignore_rate_limit_warning field; simplified read/create/update/delete/import flows; read looks up by model then ID; permission errors produce warnings with state adjustments.
Modules — projects
modules/projects/variables.tf, modules/projects/main.tf, modules/projects/README.md, modules/projects/outputs.tf
Removed module inputs: is_default, rate_limits, users; added organization_id input; removed is_default and project_usage_limits outputs.
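
The OpenAI-Organization header behavior noted in the client row above usually amounts to a small conditional on the request path. A minimal sketch under assumed names (Client, doRequest, and OrganizationID are illustrative, not the provider's exact code):

```go
package client

import "net/http"

// Client is a hypothetical, minimal stand-in for the provider's HTTP client.
type Client struct {
	APIKey         string
	OrganizationID string
	HTTPClient     *http.Client
}

// doRequest attaches the OpenAI-Organization header only when an
// organization ID is configured, matching the behavior summarized above.
func (c *Client) doRequest(req *http.Request) (*http.Response, error) {
	req.Header.Set("Authorization", "Bearer "+c.APIKey)
	req.Header.Set("Content-Type", "application/json")
	if c.OrganizationID != "" {
		req.Header.Set("OpenAI-Organization", c.OrganizationID)
	}
	return c.HTTPClient.Do(req)
}
```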

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~60–90 minutes

  • Areas needing focused review:
    • internal/client/client.go — model reshapes, API URL and header behavior, rate-limit lookup/update/delete logic.
    • internal/provider/resource_openai_project.go — CRUD logic, timestamp formatting changes (int → RFC3339 strings), import/read consistency.
    • Cross-cutting consistency for renamed fields (title → name, created → created_at, created → added_at) across provider schemas, data sources, docs, examples, and tests.
    • modules/projects changes that remove inputs/outputs and their downstream references.

Possibly related PRs

  • Fix breaking APIs #45 — Overlapping schema and client/provider model renames (title↔name, created↔added_at).
  • Version bump #47 — Related Makefile VERSION changes around the same release.
  • Release 1.1.1 #27 — Earlier version-bump PR touching the same Makefile VERSION variable.

Poem

🐰 I hopped through fields both new and old,

title turned name, timestamps retold.
Users show added_at, projects keep new time,
Docs, tests and examples — all snug in a rhyme.
Version bumped, carrots popped — release-time chime! 🥕✨

Pre-merge checks

❌ Failed checks (1 warning, 1 inconclusive)
  • Description check (⚠️ Warning): The description 'OAI confirmed API format change was accidental and rolled back' is vague and lacks the detail required by the template; it omits the structured sections, including the issue reference, type of change, testing information, and the required checklist. Resolution: fill in the required template sections by adding the issue number, selecting the breaking-change type, describing the testing performed, and completing the checklist.
  • Title check (❓ Inconclusive): The title 'Revert recent change and clean up fields' is vague and generic, using non-descriptive terms that don't convey what is being reverted. Resolution: provide a more specific title that clearly indicates the scope of the revert (e.g., 'Revert API field changes to original format' or 'Revert project/user schema field names').
✅ Passed checks (1 passed)
  • Docstring Coverage (✅ Passed): Docstring coverage is 83.33%, which meets the required threshold of 80.00%.



@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
internal/provider/resource_openai_project_test.go (1)

81-90: Schema mismatch: usage_limits block not defined in resource schema.

The test at lines 85-86 references usage_limits.0.max_budget and usage_limits.0.max_tokens, but the resource schema (lines 36-57 in resource_openai_project.go) only defines name, created_at, status, and archived_at. The usage_limits block is defined in the data source schema but is missing from the resource schema. This test will fail if executed.

Either add the usage_limits schema definition to the resource, or remove/update this test to match the current resource schema.

🧹 Nitpick comments (4)
docs/data-sources/organization_users.md (1)

49-49: Consider adding a brief description for users[].added_at

added_at is correctly exposed in the nested users schema, but unlike the single-user doc it lacks a description. Consider mirroring something like “The Unix timestamp when the user was added to the organization” for consistency.

docs/data-sources/project.md (1)

31-32: Project data source docs align with name, created_at, and usage_limits

The example now uses production.name, and the Read-Only fields (created_at, name, usage_limits with its nested limits) are coherent and match the described schema. You might optionally add short descriptions to each usage limit field (e.g., units/semantics), but the structure itself looks correct.

Also applies to: 48-61

internal/provider/resource_openai_project.go (2)

17-21: Consider removing the panic recovery wrapper.

This defer/recover pattern in a resource definition function can mask programming errors and make debugging harder. If a panic occurs here, it likely indicates a bug that should surface during development rather than being silently logged. The resource will still be returned (potentially in an invalid state) after recovering.

 func resourceOpenAIProject() *schema.Resource {
-	defer func() {
-		if r := recover(); r != nil {
-			log.Printf("[WARN] Recovered from panic in resourceOpenAIProject: %v", r)
-		}
-	}()
-
 	resource := &schema.Resource{

117-132: LGTM! Timestamp conversion is correct.

The nil checks and RFC3339 formatting are appropriate. The same pattern is duplicated in resourceOpenAIProjectImport (lines 205-226) - consider extracting a helper if this pattern expands further.
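
If that helper were extracted, a minimal sketch could look like the following; the function name is hypothetical, and the only assumption is that the client exposes the timestamp as an optional Unix-seconds value (*int64), as described later in this review:

```go
package provider

import "time"

// unixToRFC3339 is a hypothetical helper for the pattern described above: it
// converts an optional Unix-seconds timestamp into an RFC3339 string and
// returns "" when the timestamp is absent.
func unixToRFC3339(ts *int64) string {
	if ts == nil {
		return ""
	}
	return time.Unix(*ts, 0).UTC().Format(time.RFC3339)
}
```

A call site would then read along the lines of d.Set("created_at", unixToRFC3339(project.CreatedAt)).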

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 940d9d3 and 94c7a8f.

📒 Files selected for processing (17)
  • Makefile (1 hunks)
  • docs/data-sources/organization_user.md (1 hunks)
  • docs/data-sources/organization_users.md (1 hunks)
  • docs/data-sources/project.md (2 hunks)
  • docs/data-sources/projects.md (1 hunks)
  • docs/resources/organization_user.md (1 hunks)
  • docs/resources/project.md (2 hunks)
  • examples/data-sources/openai_project/data-source.tf (1 hunks)
  • examples/resources/openai_project/resource.tf (1 hunks)
  • internal/client/client.go (8 hunks)
  • internal/provider/data_source_openai_organization_user.go (2 hunks)
  • internal/provider/data_source_openai_organization_users.go (3 hunks)
  • internal/provider/data_source_openai_project.go (3 hunks)
  • internal/provider/data_source_openai_projects.go (2 hunks)
  • internal/provider/resource_openai_organization_user.go (4 hunks)
  • internal/provider/resource_openai_project.go (6 hunks)
  • internal/provider/resource_openai_project_test.go (4 hunks)
🧰 Additional context used
🧬 Code graph analysis (4)
internal/provider/data_source_openai_organization_users.go (2)
internal/provider/resource_openai_project_user.go (1)
  • resourceOpenAIProjectUser (18-58)
internal/provider/data_source_openai_project_users.go (1)
  • dataSourceOpenAIProjectUsers (14-83)
internal/provider/data_source_openai_organization_user.go (2)
internal/provider/data_source_openai_project_user.go (1)
  • dataSourceOpenAIProjectUserRead (139-218)
internal/provider/resource_openai_project_user.go (1)
  • resourceOpenAIProjectUser (18-58)
internal/provider/data_source_openai_projects.go (2)
internal/provider/data_source_openai_invites.go (1)
  • dataSourceOpenAIInvites (17-81)
internal/provider/data_source_openai_project_users.go (2)
  • dataSourceOpenAIProjectUsers (14-83)
  • dataSourceOpenAIProjectUsersRead (87-184)
internal/client/client.go (4)
internal/provider/provider.go (1)
  • OpenAIClient (27-31)
internal/provider/resource_openai_fine_tuning_job.go (1)
  • Project (47-51)
internal/provider/resource_openai_project_user.go (2)
  • resourceOpenAIProjectUser (18-58)
  • resourceOpenAIProjectUserUpdate (259-348)
internal/provider/data_source_openai_project_users.go (1)
  • dataSourceOpenAIProjectUsers (14-83)
🔇 Additional comments (33)
Makefile (1)

10-10: Version bump to 1.2.0 looks good

Release and install paths will consistently pick up the final version; no issues spotted here.

examples/data-sources/openai_project/data-source.tf (1)

16-16: Example now correctly uses project.name

The local project_name referencing data.openai_project.production.name aligns with the documented schema; looks consistent.

internal/provider/data_source_openai_organization_user.go (1)

41-45: added_at field wiring is consistent and clear

Adding added_at as a computed Unix timestamp and populating it from user.AddedAt in the read path matches the docs and related resources; no functional concerns.

Also applies to: 117-118

docs/data-sources/organization_user.md (1)

25-25: Doc updated to added_at matches data source schema

The Read-Only added_at field description lines up with the Go schema and behavior.

docs/resources/organization_user.md (1)

44-44: Resource doc added_at aligns with schema

The Read-Only added_at field and description are consistent with the resource and related data source.

examples/resources/openai_project/resource.tf (1)

3-3: Examples updated to use name attribute correctly

Using name on the openai_project resources matches the current resource schema and docs; examples look good.

Also applies to: 8-8

internal/provider/resource_openai_organization_user.go (2)

49-53: LGTM! Field rename aligns with API revert.

The schema change from created to added_at is consistent with the OpenAI API rollback mentioned in the PR objectives. The field type (TypeInt) and computed nature remain appropriate for a Unix timestamp.
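
For reference, a computed Unix-timestamp attribute in a terraform-plugin-sdk schema typically looks like this minimal sketch; the variable name and description text are illustrative, not quoted from the resource:

```go
package provider

import "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"

// addedAtSchema sketches the computed Unix-timestamp attribute described
// above; only the "added_at" key, TypeInt, and Computed come from the review,
// the rest is illustrative.
var addedAtSchema = map[string]*schema.Schema{
	"added_at": {
		Type:        schema.TypeInt,
		Computed:    true,
		Description: "Unix timestamp (seconds) at which the user was added to the organization.",
	},
}
```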


101-103: LGTM! Consistent field population across CRUD operations.

The added_at field is correctly populated from user.AddedAt in Create, Read, and Update operations with proper error handling.

Also applies to: 144-146, 176-178

internal/provider/data_source_openai_organization_users.go (2)

61-65: LGTM! Schema field rename consistent with resource.

The added_at field definition matches the corresponding resource schema and the project users data source pattern shown in the relevant code snippets.


103-108: LGTM! Both read paths correctly populate added_at.

The single-user retrieval (lines 103-108) and paginated list (lines 147-155) paths consistently populate added_at from user.AddedAt, maintaining parity between both code paths.

Also applies to: 147-155

internal/provider/resource_openai_project.go (3)

37-56: LGTM! Schema changes align with API revert.

The field renames (title → name) and timestamp type changes (TypeInt → TypeString with RFC3339 formatting) correctly reflect the OpenAI API rollback. Note this is a breaking change for existing Terraform state, but appropriate given the context.


70-83: LGTM! Create operation correctly uses the renamed name field.


146-157: LGTM! Update operation correctly uses the renamed name field.

The comment about POST vs PATCH is helpful documentation for maintainers.

docs/resources/project.md (2)

15-31: LGTM! Example usage updated to reflect schema changes.

The examples correctly demonstrate using the name field instead of the previous title field.


36-49: LGTM! Schema documentation accurately reflects the changes.

The required name field and the String-typed timestamp fields (archived_at, created_at) are correctly documented.

internal/provider/resource_openai_project_test.go (3)

23-28: LGTM! Test configurations updated to use simplified helper.

The test steps correctly use the updated testAccResourceOpenAIProjectBasic helper with only the name parameter.

Also applies to: 52-56, 59-63


135-141: LGTM! Helper function simplified to match new schema.

The testAccResourceOpenAIProjectBasic helper correctly generates Terraform config with only the name attribute.


143-154: Same concern: usage_limits block may not exist in schema.

This helper generates config with a usage_limits block that may not be defined in the resource schema. See earlier comment about potential schema mismatch.

docs/data-sources/projects.md (1)

45-48: Documentation correctly reflects the schema changes.

The field renames (created → created_at, title → name) align with the implementation changes in data_source_openai_projects.go and the API rollback mentioned in the PR description.

internal/provider/data_source_openai_projects.go (2)

48-62: Schema changes correctly align with the API response structure.

The field renames (name, created_at) and removals are consistent with the ProjectResponse struct defined in data_source_openai_project.go. The TypeInt for created_at correctly matches the CreatedAt int field type.


197-204: Data mapping correctly populates the renamed fields.

The project map keys (name, created_at) match the schema definition, and the values are properly sourced from the ProjectResponse struct fields.

internal/provider/data_source_openai_project.go (3)

16-30: Type definitions align with the new API response format.

The ProjectResponse struct with Name, CreatedAt, and nested UsageLimits correctly models the reverted API response. The JSON tags match the expected field names.

Note: This file defines its own ProjectResponse type (used for the single-project data source), while internal/client/client.go defines a separate Project type (used for CRUD operations). Both have been updated consistently with Name and CreatedAt fields.


64-87: New usage_limits schema correctly exposes project limits.

The nested schema properly models the UsageLimits struct with appropriate types (TypeFloat for max_monthly_dollars, TypeInt for the others). This adds useful visibility into project usage constraints.


169-189: Data population correctly handles the new schema structure.

The field assignments properly map the API response to Terraform state. The usage_limits is correctly populated as a single-element slice, which is the expected pattern for TypeList with a nested Resource element.
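
The single-element-slice pattern mentioned here is the conventional way to populate a TypeList that carries one nested object. A minimal sketch under assumed names (only the attribute names come from the review; the types and helper are illustrative):

```go
package provider

import (
	"github.com/hashicorp/terraform-plugin-sdk/v2/diag"
	"github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
)

// projectUsageLimits is a hypothetical stand-in for the nested response type
// discussed above.
type projectUsageLimits struct {
	MaxMonthlyDollars float64
	MaxTokens         int
}

// setUsageLimits sketches the single-element-slice pattern for a TypeList
// block with a nested Resource element.
func setUsageLimits(d *schema.ResourceData, limits projectUsageLimits) diag.Diagnostics {
	value := []interface{}{
		map[string]interface{}{
			"max_monthly_dollars": limits.MaxMonthlyDollars,
			"max_tokens":          limits.MaxTokens,
		},
	}
	if err := d.Set("usage_limits", value); err != nil {
		return diag.FromErr(err)
	}
	return nil
}
```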

internal/client/client.go (9)

144-156: Project struct updated with expanded fields and renames.

The struct correctly reflects the reverted API format with Name replacing Title and CreatedAt replacing Created. The addition of OrganizationID, IsDefault, BillingMode, APIKeys, RateLimits, and Users fields expands the model to support more project details.

Note: CreatedAt is *int64 here (nullable), while ProjectResponse in data_source_openai_project.go uses int. This difference is intentional since the provider's ProjectResponse is used for API responses where created_at is always present, while this Project struct is used for operations where the field might be optional.


198-202: CreateProjectRequest correctly uses name field.

The request struct aligns with the reverted API that expects name instead of title.


569-577: User struct correctly renames Created to AddedAt.

This aligns with the added_at field used in the data_source_openai_project_users.go schema (visible in the relevant code snippets at line 43). The naming is now consistent across the codebase.


579-586: UsersResponse simplified to use User type directly.

The Data field now uses []User instead of a separate OrganizationUser type, reducing type duplication.


662-672: FindUserByEmail correctly returns *User type.

The method now returns the unified User type instead of a separate wrapper type.


693-700: GetUser correctly unmarshals into User type.

The method properly parses the API response into the updated User struct.


721-728: UpdateUserRole correctly returns *User type.

The method properly handles the updated User struct in its response.


1077-1086: CreateProject correctly uses name parameter.

The method signature, request body, and debug logging all consistently use name, aligning with the reverted API format.


1132-1137: UpdateProject correctly uses name parameter.

The method signature and request body consistently use name, matching the reverted API format.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

🧹 Nitpick comments (2)
modules/projects/README.md (1)

82-90: Verify organization_id is actually defined and wired in the module

README now documents an organization_id input, but the variables summary only mentions removals and doesn’t clearly show a matching variable block or its usage in the module. Please double‑check that:

  • modules/projects/variables.tf defines variable "organization_id" with the documented default, and
  • the module actually passes it through to the provider/client where needed.

Otherwise this row will be misleading and consumers won’t be able to set it.
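
If the variable turns out to be missing, the expected wiring would look roughly like this hypothetical sketch; the description and empty default are assumptions based on the README row discussed here:

```hcl
# Hypothetical sketch of the documented input; align the default and
# description with the actual README row.
variable "organization_id" {
  description = "Optional OpenAI organization ID passed through to the provider/client."
  type        = string
  default     = ""
}
```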

internal/client/client.go (1)

556-573: User model + user APIs re-aligned; double‑check added_at and query params

The new User / UsersResponse types and the following methods all look logically sound:

  • ListUsers builds after, limit, and repeated emails query params, unmarshalling into UsersResponse{Data []User}.
  • FindUserByEmail calls ListUsers("", 1, []string{email}) and returns the first result after a case‑insensitive equality check, which is safe (copy to local + &user; sketched after this comment).
  • GetUser and UpdateUserRole now unmarshal directly into User and return *User.
  • User.AddedAt models the timestamp via the added_at JSON field.

Two things worth explicitly verifying against the actual org users API:

  • The list/get/update responses really expose added_at (and not created or another name) and match the User struct shape.
  • The list endpoint accepts emails as a repeated query parameter exactly as used here.

If those assumptions hold (and the provider tests around organization users pass), this refactor removes indirection cleanly and should behave well.

Also applies to: 575-623, 634-659, 662-687, 690-715
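
A minimal sketch of the copy-to-local lookup pattern referenced in the FindUserByEmail bullet above; the types and function shape are stand-ins for illustration, not the provider's exact signatures:

```go
package client

import "strings"

// User is a minimal stand-in for the provider's user model.
type User struct {
	ID      string
	Email   string
	AddedAt int64
}

// findUserByEmail sketches the pattern described above: a case-insensitive
// match that copies the element to a local before returning its address, so
// the returned pointer does not alias the loop variable.
func findUserByEmail(users []User, email string) *User {
	for _, u := range users {
		if strings.EqualFold(u.Email, email) {
			user := u
			return &user
		}
	}
	return nil
}
```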

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 94c7a8f and 46ff61c.

📒 Files selected for processing (4)
  • internal/client/client.go (7 hunks)
  • modules/projects/README.md (1 hunks)
  • modules/projects/main.tf (1 hunks)
  • modules/projects/variables.tf (0 hunks)
💤 Files with no reviewable changes (1)
  • modules/projects/variables.tf
🧰 Additional context used
🧬 Code graph analysis (1)
internal/client/client.go (1)
internal/provider/provider.go (1)
  • OpenAIClient (27-31)
🔇 Additional comments (2)
modules/projects/main.tf (1)

12-15: Resource now correctly uses name instead of title

The openai_project resource wiring to name = var.name is consistent with the rest of the project/docs changes and the outputs (id, name, created_at), with no obvious correctness issues.

internal/client/client.go (1)

142-150: Project model and create/update payloads now use name and created_at

The Project struct and CreateProject/UpdateProject now rely on:

  • JSON field name (no title), and
  • JSON field created_at on the response struct.

The request bodies send { "name": <name> } to /v1/organization/projects and /v1/organization/projects/{id} and unmarshal back into this struct, which is internally consistent.

Given the PR’s goal (“revert recent change and clean up fields”) and the recent churn in the upstream API, please re‑confirm against the current projects endpoint that:

  • the server indeed expects name (not title) in the request body, and
  • the response still returns created_at (or that the tag matches whatever field is actually emitted).

If your acceptance tests against the live admin API pass for create/get/update/delete, this should be safe to ship.

Also applies to: 1064-1094, 1119-1145
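
A minimal sketch of the call shape described above, a {"name": ...} body POSTed to /v1/organization/projects; the helper name, parameters, and struct are hypothetical:

```go
package client

import (
	"bytes"
	"encoding/json"
	"net/http"
)

// createProjectRequest mirrors the name-only body described above.
type createProjectRequest struct {
	Name string `json:"name"`
}

// createProject sketches the request shape: POST the body to the projects
// endpoint and hand the raw response back to the caller for decoding.
func createProject(httpClient *http.Client, apiKey, baseURL, name string) (*http.Response, error) {
	body, err := json.Marshal(createProjectRequest{Name: name})
	if err != nil {
		return nil, err
	}
	req, err := http.NewRequest(http.MethodPost, baseURL+"/v1/organization/projects", bytes.NewReader(body))
	if err != nil {
		return nil, err
	}
	req.Header.Set("Authorization", "Bearer "+apiKey)
	req.Header.Set("Content-Type", "application/json")
	return httpClient.Do(req)
}
```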


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (2)
internal/provider/data_source_openai_project.go (1)

89-92: Remove debug print statements from production code.

These fmt.Printf debug statements will output to stdout in production. Use the tflog package for proper Terraform provider logging that respects log level settings.

-	fmt.Printf("Getting project with ID: %s\n", projectID)
-	fmt.Printf("Using URL: %s\n", strings.Replace(url, client.APIURL, "", 1))
-	fmt.Printf("OpenAI client config: API URL=%s, Organization ID=%s\n", client.APIURL, client.OrganizationID)
-	fmt.Printf("Making API request: GET %s\n", url)
+	tflog.Debug(ctx, "Getting project", map[string]interface{}{
+		"project_id": projectID,
+		"url":        strings.Replace(url, client.APIURL, "", 1),
+	})
internal/client/client.go (1)

2457-2461: Corrupted model name key in defaultRateLimits map.

The key "ft:\t-0613" appears to be corrupted - the \t is a tab character. This should likely be "ft:gpt-4-0613" or a similar model name.

-	"ft:	-0613": {
+	"ft:gpt-4-0613": {
 		MaxRequestsPer1Minute:   10000,
 		MaxTokensPer1Minute:     300000,
 		Batch1DayMaxInputTokens: 30000000,
 	},
🧹 Nitpick comments (4)
internal/provider/data_source_openai_project.go (1)

16-22: Consider using int64 for timestamp field.

The CreatedAt field is typed as int which may cause issues on 32-bit systems or for timestamps beyond 2038. Unix timestamps should typically use int64 for future-proofing.

 type ProjectResponse struct {
 	ID        string `json:"id"`
 	Object    string `json:"object"`
 	Name      string `json:"name"`
-	CreatedAt int    `json:"created_at"`
+	CreatedAt int64  `json:"created_at"`
 	Status    string `json:"status"`
 }
internal/client/client.go (1)

815-858: Excessive debug logging in production code.

The doRequest function contains extensive debug logging with fmt.Printf statements that will output to stdout in production. Consider using a structured logging library with configurable log levels, or remove these debug statements for production code.

This extensive debug logging significantly increases noise and may expose sensitive information. Consider (option 1 is sketched after this list):

  1. Using environment variable-controlled debug mode
  2. Removing most of these statements for production
  3. Using a proper logging library with log levels
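
As a sketch of option 1, gating debug output behind an environment variable can stay very small; the variable name here is hypothetical:

```go
package client

import (
	"log"
	"os"
)

// debugEnabled gates verbose request logging behind a hypothetical
// OPENAI_PROVIDER_DEBUG environment variable instead of always printing.
var debugEnabled = os.Getenv("OPENAI_PROVIDER_DEBUG") != ""

// debugf logs only when debugging is explicitly enabled, keeping normal runs
// quiet and reducing the risk of leaking request details.
func debugf(format string, args ...interface{}) {
	if debugEnabled {
		log.Printf("[DEBUG] "+format, args...)
	}
}
```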
internal/provider/resource_openai_rate_limit.go (2)

587-601: Set errors are silently ignored.

The setRateLimitState helper function uses blank identifier _ to ignore errors from d.Set() calls. While these rarely fail, silently ignoring them can mask issues. Consider logging or accumulating errors.

 func setRateLimitState(d *schema.ResourceData, rateLimit *client.RateLimit) {
 	// Set all values from the rate limit object to state
-	_ = d.Set("model", rateLimit.Model)
-	_ = d.Set("max_requests_per_minute", rateLimit.MaxRequestsPer1Minute)
-	_ = d.Set("max_tokens_per_minute", rateLimit.MaxTokensPer1Minute)
-	_ = d.Set("max_images_per_minute", rateLimit.MaxImagesPer1Minute)
-	_ = d.Set("batch_1_day_max_input_tokens", rateLimit.Batch1DayMaxInputTokens)
-	_ = d.Set("max_audio_megabytes_per_1_minute", rateLimit.MaxAudioMegabytesPer1Minute)
-	_ = d.Set("max_requests_per_1_day", rateLimit.MaxRequestsPer1Day)
+	d.Set("model", rateLimit.Model)
+	d.Set("max_requests_per_minute", rateLimit.MaxRequestsPer1Minute)
+	d.Set("max_tokens_per_minute", rateLimit.MaxTokensPer1Minute)
+	d.Set("max_images_per_minute", rateLimit.MaxImagesPer1Minute)
+	d.Set("batch_1_day_max_input_tokens", rateLimit.Batch1DayMaxInputTokens)
+	d.Set("max_audio_megabytes_per_1_minute", rateLimit.MaxAudioMegabytesPer1Minute)
+	d.Set("max_requests_per_1_day", rateLimit.MaxRequestsPer1Day)
 
 	// Keep the current ID
 	if rateLimit.ID != "" {
 		d.SetId(rateLimit.ID)
 	}
 }

604-611: Remove unused function getAPIKeyType.

This function is not called anywhere in the codebase and can be safely removed to reduce dead code.

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 46ff61c and 88391ae.

📒 Files selected for processing (9)
  • docs/data-sources/project.md (2 hunks)
  • docs/resources/rate_limit.md (1 hunks)
  • examples/resources/openai_rate_limit/resource.tf (1 hunks)
  • internal/client/client.go (10 hunks)
  • internal/provider/data_source_openai_project.go (3 hunks)
  • internal/provider/resource_openai_project_test.go (3 hunks)
  • internal/provider/resource_openai_rate_limit.go (2 hunks)
  • modules/projects/README.md (1 hunks)
  • modules/projects/outputs.tf (0 hunks)
💤 Files with no reviewable changes (1)
  • modules/projects/outputs.tf
✅ Files skipped from review due to trivial changes (1)
  • docs/resources/rate_limit.md
🧰 Additional context used
🧬 Code graph analysis (2)
internal/provider/resource_openai_rate_limit.go (2)
internal/client/client.go (1)
  • Error (192-196)
internal/provider/provider.go (1)
  • GetOpenAIClientWithAdminKey (83-110)
internal/client/client.go (1)
internal/provider/provider.go (1)
  • OpenAIClient (27-31)
🔇 Additional comments (9)
modules/projects/README.md (1)

89-89: LGTM!

The addition of organization_id as an optional input with an empty default value is appropriate for the module's public interface.

internal/provider/resource_openai_project_test.go (1)

111-116: LGTM!

The test helper function correctly reflects the simplified resource configuration with only the name attribute, consistent with the resource schema changes.

internal/provider/data_source_openai_project.go (1)

41-55: LGTM on field renames.

The schema correctly reflects the API field renames (name and created_at), with appropriate types and descriptions.

internal/client/client.go (2)

556-564: LGTM on User struct changes.

The simplified User struct with AddedAt field aligns with the API changes and removes the unnecessary intermediate type conversions.


1064-1072: LGTM on CreateProject signature change.

The function now correctly uses name instead of title, matching the reverted API format.

docs/data-sources/project.md (1)

31-32: LGTM!

Documentation correctly reflects the field renames (name and created_at) and provides accurate example usage.

Also applies to: 48-50

examples/resources/openai_rate_limit/resource.tf (1)

1-37: LGTM!

The examples comprehensively demonstrate different rate limit configurations including per-model limits, daily limits, image limits, and batch processing limits. The ignore_rate_limit_warning usage in the batch example is helpful for users understanding the rate limit lifecycle behavior.

internal/provider/resource_openai_rate_limit.go (2)

617-637: LGTM on simplified delete logic.

The delete function correctly handles the OpenAI constraint that rate limits cannot be truly deleted. The graceful handling of permission errors (clearing state instead of failing) is appropriate.


292-316: LGTM on read fallback logic.

The read path correctly attempts to fetch by model name first, then falls back to the rate limit ID, with appropriate error handling for not-found and permission scenarios.
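
A minimal sketch of the model-then-ID fallback described here; the interface and method names are assumptions standing in for the provider's client, not its actual API:

```go
package provider

// RateLimit is a minimal stand-in for the client's rate-limit model.
type RateLimit struct {
	ID    string
	Model string
}

// rateLimitClient is a hypothetical interface capturing the two lookups used
// by the read fallback described above.
type rateLimitClient interface {
	GetRateLimitByModel(projectID, model string) (*RateLimit, error)
	GetRateLimitByID(projectID, id string) (*RateLimit, error)
}

// readRateLimit tries the model name first and falls back to the stored ID,
// mirroring the behavior summarized in the comment above.
func readRateLimit(c rateLimitClient, projectID, model, id string) (*RateLimit, error) {
	if rl, err := c.GetRateLimitByModel(projectID, model); err == nil && rl != nil {
		return rl, nil
	}
	return c.GetRateLimitByID(projectID, id)
}
```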


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

🧹 Nitpick comments (2)
internal/provider/resource_openai_rate_limit.go (2)

298-306: State drift risk with permission error handling.

Returning a warning while preserving state on permission errors could lead to state drift if the actual API values differ from Terraform state. Users might make decisions based on stale data. However, this graceful degradation appears intentional and is consistent with similar handling throughout the resource.

Consider documenting this behavior in the resource description to set user expectations that with insufficient permissions, state may not reflect actual API values.


623-627: Behavioral asymmetry in permission error handling.

Delete returns nil and clears state on permission errors (line 626), while Create/Update operations return warnings and preserve state. This means a resource may remain in OpenAI after Terraform believes it was deleted. While this allows Terraform operations to proceed despite permission constraints, the inconsistency could confuse users.

Consider documenting this behavior or making the permission error handling more consistent across CRUD operations. If the current behavior is intentional, add a comment explaining why Delete differs from Create/Update.

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 88391ae and 0e38b3d.

📒 Files selected for processing (3)
  • docs/resources/rate_limit.md (1 hunks)
  • examples/resources/openai_rate_limit/resource.tf (1 hunks)
  • internal/provider/resource_openai_rate_limit.go (2 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
  • examples/resources/openai_rate_limit/resource.tf
🧰 Additional context used
🧬 Code graph analysis (1)
internal/provider/resource_openai_rate_limit.go (2)
internal/client/client.go (1)
  • Error (192-196)
internal/provider/provider.go (1)
  • GetOpenAIClientWithAdminKey (83-110)
🔇 Additional comments (2)
docs/resources/rate_limit.md (1)

13-50: Comprehensive examples added.

The Example Usage section demonstrates various rate limit configurations effectively, covering language models, image generation, and batch processing scenarios with appropriate field usage.
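
For readers without the example file open, a hypothetical configuration along these lines illustrates the shape; attribute names are borrowed from the state fields shown earlier in this thread, the values are placeholders, and any required arguments (for example a project reference) should be taken from docs/resources/rate_limit.md rather than this sketch:

```hcl
# Hypothetical sketch only; defer to docs/resources/rate_limit.md for the
# authoritative argument list and any required project reference.
resource "openai_rate_limit" "gpt4_chat" {
  model                   = "gpt-4"
  max_requests_per_minute = 5000
  max_tokens_per_minute   = 300000
}
```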

internal/provider/resource_openai_rate_limit.go (1)

312-339: LGTM!

State management is correct with consistent error handling for all field assignments.

@Fodoj Fodoj merged commit eee9853 into main Dec 4, 2025
6 checks passed