[SILO-690] feat: new api clients and tests #9

Conversation
Linked to Plane Work Item(s)
This comment was auto-generated by Plane
Caution

Review failed. The pull request is closed.

Walkthrough

Adds three top-level API clients (Initiatives, Stickies, Teamspaces) with models, sub-resources, and tests; extends Projects and Workspaces with feature endpoints; adds InitiativeState and feature models; updates BaseResource._delete to accept an optional JSON payload; updates README and increments the project version; ignores `.env` in `.gitignore`.

Changes
Sequence Diagram

sequenceDiagram
participant User as User Code
participant Client as PlaneClient
participant Resource as Initiatives / Teamspaces / Stickies
participant HTTP as BaseResource (_get/_post/_patch/_delete)
User->>Client: client.initiatives.list(workspace_slug)
Client->>Resource: Initiatives.list(workspace_slug)
Resource->>HTTP: _get("/workspaces/{slug}/initiatives", params)
HTTP-->>Resource: JSON response
Resource->>Resource: model_validate -> PaginatedInitiativeResponse
Resource-->>Client: PaginatedInitiativeResponse
Client-->>User: results[]
note right of HTTP: DELETE now accepts optional JSON body
User->>Client: client.initiatives.epics.remove(slug, id, [epic_ids])
Client->>Resource: InitiativeEpics.remove(...)
Resource->>HTTP: _delete("/workspaces/{slug}/initiatives/{id}/epics", data={"epic_ids": [...]})
HTTP-->>Resource: 204 / success
Resource-->>Client: None
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~45 minutes

Areas for extra attention:
Pre-merge checks and finishing touches

✅ Passed checks (3 passed)
📜 Recent review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📒 Files selected for processing (1)
Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.
Actionable comments posted: 3
🧹 Nitpick comments (13)
tests/unit/test_workspaces.py (1)

29-39: Consider preserving original state in test.

The test updates `initiatives` to `True` but doesn't restore the original value. While this may not cause issues if the test environment is isolated, consider storing and restoring the original value to prevent potential test pollution:

```diff
 def test_update_features(self, client: PlaneClient, workspace_slug: str) -> None:
     """Test updating workspace features."""
     # Get current features first
     features = client.workspaces.get_features(workspace_slug)
+    original_initiatives = features.initiatives

     # Update features
     features.initiatives = True
     updated = client.workspaces.update_features(workspace_slug, features)

     assert updated is not None
     assert hasattr(updated, "initiatives")
     assert updated.initiatives is True
+
+    # Restore original state
+    features.initiatives = original_initiatives
+    client.workspaces.update_features(workspace_slug, features)
```

plane/models/workspaces.py (1)
1-13: Consider documenting the "pi" field and verify all fields are always present.

The `WorkspaceFeature` model is well-structured, but:

- The `pi` field name is unclear - consider adding a docstring or comment explaining what "pi" represents (Planning Intervals? Program Increment?)
- All fields are required (non-optional), so the API must always return complete feature data

If the `pi` field represents a domain-specific concept, add documentation:

```diff
 class WorkspaceFeature(BaseModel):
     """Workspace feature model."""

     model_config = ConfigDict(extra="allow", populate_by_name=True)

     project_grouping: bool
     initiatives: bool
     teams: bool
     customers: bool
     wiki: bool
-    pi: bool
+    pi: bool  # Program Increment / Planning Interval feature flag
```

Additionally, if any fields might be absent in API responses, consider making them optional with defaults to prevent validation errors:

```python
project_grouping: bool = False
initiatives: bool = False
# etc.
```

tests/unit/test_projects.py (1)
99-109: Add assertion for the "epics" field.

The test checks most feature fields but is missing a check for the `epics` field, which is part of the `ProjectFeature` model (line 143 in plane/models/projects.py). For consistency with `test_update_features` (lines 122-127), add the missing assertion.

```diff
 def test_get_features(self, client: PlaneClient, workspace_slug: str, project: Project) -> None:
     """Test getting project features."""
     features = client.projects.get_features(workspace_slug, project.id)

     assert features is not None
+    assert hasattr(features, "epics")
     assert hasattr(features, "cycles")
     assert hasattr(features, "modules")
     assert hasattr(features, "views")
     assert hasattr(features, "pages")
     assert hasattr(features, "intakes")
     assert hasattr(features, "work_item_types")
```

tests/unit/test_stickies.py (1)
36-59: Add return type annotation to the sticky fixture.

The `sticky` fixture is missing a return type annotation. For consistency with other fixtures in the codebase (e.g., the `project` fixture in test_projects.py line 51), add the return type.

```diff
 @pytest.fixture
 def sticky(
     self,
     client: PlaneClient,
     workspace_slug: str,
     sticky_data: CreateSticky,
-):
+) -> Sticky:
     """Create a test sticky and yield it, then delete it."""
     sticky = client.stickies.create(workspace_slug, sticky_data)
     yield sticky
```

Regarding the static analysis hint about bare exception catching: the current approach is acceptable for test cleanup code. However, if you want to improve debugging capabilities, consider logging exceptions:

```python
except Exception as e:
    print(f"Cleanup failed: {e}")  # or use logging.warning
```

tests/unit/test_teamspaces.py (3)
56-59: Consider logging exceptions during test cleanup.

The bare `except Exception: pass` blocks silence all errors during cleanup, which could hide legitimate issues. While acceptable for test teardown, logging the exceptions would aid debugging.

Consider this pattern for better observability:

```diff
 try:
     client.teamspaces.delete(workspace_slug, teamspace.id)
-except Exception:
-    pass
+except Exception as e:
+    # Log cleanup failure but don't fail the test
+    print(f"Cleanup failed: {e}")
```

Also applies to: 71-74, 131-134, 177-180
146-158: Verify the test has proper assertions on the remove operation.

The `test_add_and_remove_members` method calls `remove` but doesn't verify the removal succeeded. Consider adding an assertion to confirm the member is no longer in the teamspace.

Add verification after removal:

```diff
 # Remove member
 client.teamspaces.members.remove(workspace_slug, teamspace.id, [user_id])
+
+# Verify removal
+members_after = client.teamspaces.members.list(workspace_slug, teamspace.id)
+member_ids_after = [m.id for m in members_after.results]
+assert user_id not in member_ids_after, "Member should be removed from teamspace"
```
192-204: Verify the test has proper assertions on the remove operation.

Similar to the members test, `test_add_and_remove_projects` calls `remove` but doesn't verify the project was actually removed from the teamspace.

Add verification after removal:

```diff
 # Remove project
 client.teamspaces.projects.remove(workspace_slug, teamspace.id, [project.id])
+
+# Verify removal
+projects_after = client.teamspaces.projects.list(workspace_slug, teamspace.id)
+project_ids_after = [p.id for p in projects_after.results]
+assert project.id not in project_ids_after, "Project should be removed from teamspace"
```

plane/models/teamspaces.py (1)
45-50: Consider adding count field to PaginatedTeamspaceResponse.

Other paginated response models in the codebase (e.g., `PaginatedProjectResponse`, `PaginatedUserLiteResponse`) typically include both `results` and `count` fields. The tests in test_teamspaces.py (lines 19-20, 30, 102, 142-143, 188-189) expect a `count` attribute on the response.

Add the count field for consistency:

```diff
 class PaginatedTeamspaceResponse(BaseModel):
     """Paginated teamspace response model."""

     model_config = ConfigDict(extra="allow", populate_by_name=True)

     results: list[Teamspace]
+    count: int | None = None
```

plane/api/initiatives/base.py (1)
84-97: Consider returning a common paginated base type for initiatives.

`list()` currently returns `PaginatedInitiativeResponse`, which only declares `results` and relies on `extra="allow"` for pagination metadata (count, etc.). For stronger typing and consistency with other resources (e.g., stickies using `PaginatedResponse`), you might consider having `PaginatedInitiativeResponse` inherit from the shared paginated base so `count`, `next`, etc. are part of the declared schema.

plane/api/initiatives/labels.py (1)
107-139: Normalize `label_ids` payload to a list before JSON serialization.

`add_labels`/`remove_labels` accept `Iterable[str]` but pass `label_ids` directly into the JSON payload. If a caller ever passes a non-list iterable (e.g. a generator), the default JSON encoder may fail.

You can harden this by normalizing to a list:

```diff
-response = self._post(
-    f"{workspace_slug}/initiatives/{initiative_id}/labels",
-    {"label_ids": label_ids},
-)
+response = self._post(
+    f"{workspace_slug}/initiatives/{initiative_id}/labels",
+    {"label_ids": list(label_ids)},
+)
```

and similarly for `remove_labels`.

tests/unit/test_initiatives.py (1)
40-62: Avoid broad `except Exception: pass` in test cleanups.

The cleanup blocks around initiative/label/project creation all use bare `except Exception: pass`. While it keeps tests from failing on cleanup errors, it also hides genuine problems (e.g., consistent 4xx/5xx responses or auth issues), making debugging harder.

Consider at least one of:

- Catching a narrower exception type (e.g., the SDK's HTTP error) and letting unexpected exceptions bubble.
- Logging the exception instead of silently ignoring it, so failures are visible in CI logs.

Also applies to: 76-81, 103-121, 142-147, 162-167, 183-187, 211-225, 237-248, 281-292
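As an illustration of the logging option, cleanup calls could funnel through a small helper; this sketch is not part of the SDK, and `safe_cleanup` is a hypothetical name standing in for inline try/except blocks:

```python
import logging

logger = logging.getLogger(__name__)


def safe_cleanup(action, *args, **kwargs):
    """Run a cleanup callable, logging failures instead of silencing them.

    `action` stands in for calls such as
    `client.initiatives.delete(workspace_slug, initiative.id)`.
    """
    try:
        action(*args, **kwargs)
    except Exception:
        # The failure shows up in CI logs, but the test itself still passes.
        logger.warning("cleanup via %r failed", action, exc_info=True)


def failing_delete():
    raise RuntimeError("delete failed")


safe_cleanup(failing_delete)  # logs a warning instead of raising
```

The same effect can be had inline, but a helper keeps the nine cleanup sites listed above consistent.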
plane/models/initiatives.py (2)

56-61: Consider reusing the shared paginated base for initiative/label responses.

`PaginatedInitiativeResponse` and `PaginatedInitiativeLabelResponse` are plain `BaseModel`s with `results` and `extra="allow"`. Other parts of the SDK (e.g., stickies) use a shared `PaginatedResponse` base, which exposes `count`, `next`, etc. as first-class fields.

For consistency and better typing, you could change these to:

```diff
-from pydantic import BaseModel, ConfigDict
+from pydantic import BaseModel, ConfigDict
+from .pagination import PaginatedResponse
@@
-class PaginatedInitiativeResponse(BaseModel):
+class PaginatedInitiativeResponse(PaginatedResponse):
@@
-class PaginatedInitiativeLabelResponse(BaseModel):
+class PaginatedInitiativeLabelResponse(PaginatedResponse):
```

and keep `results: list[...]` as-is. This would match how tests access `response.count` and how other resources paginate.

Also applies to: 102-107
11-25: Double-check that `logo_props` is always present in backend responses.

`Initiative.logo_props` is a required `dict` without a default. If the backend ever omits this field, Pydantic will raise a validation error when parsing an initiative.

If the field is not strictly guaranteed by the API, consider making it optional:

```diff
-logo_props: dict
+logo_props: dict | None = None
```

so the client is resilient to missing `logo_props`.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (30)
- .gitignore (1 hunks)
- README.md (1 hunks)
- plane/__init__.py (2 hunks)
- plane/api/__init__.py (2 hunks)
- plane/api/base_resource.py (1 hunks)
- plane/api/initiatives/__init__.py (1 hunks)
- plane/api/initiatives/base.py (1 hunks)
- plane/api/initiatives/epics.py (1 hunks)
- plane/api/initiatives/labels.py (1 hunks)
- plane/api/initiatives/projects.py (1 hunks)
- plane/api/projects.py (2 hunks)
- plane/api/stickies.py (1 hunks)
- plane/api/teamspaces/__init__.py (1 hunks)
- plane/api/teamspaces/base.py (1 hunks)
- plane/api/teamspaces/members.py (1 hunks)
- plane/api/teamspaces/projects.py (1 hunks)
- plane/api/workspaces.py (2 hunks)
- plane/client/plane_client.py (2 hunks)
- plane/models/enums.py (1 hunks)
- plane/models/initiatives.py (1 hunks)
- plane/models/projects.py (1 hunks)
- plane/models/stickies.py (1 hunks)
- plane/models/teamspaces.py (1 hunks)
- plane/models/users.py (2 hunks)
- plane/models/workspaces.py (1 hunks)
- tests/unit/test_initiatives.py (1 hunks)
- tests/unit/test_projects.py (2 hunks)
- tests/unit/test_stickies.py (1 hunks)
- tests/unit/test_teamspaces.py (1 hunks)
- tests/unit/test_workspaces.py (1 hunks)
🧰 Additional context used
🧬 Code graph analysis (22)
plane/api/teamspaces/__init__.py (1)
- plane/api/teamspaces/base.py (1): Teamspaces (15-95)

plane/client/plane_client.py (3)
- plane/api/initiatives/base.py (1): Initiatives (16-97)
- plane/api/stickies.py (1): Stickies (8-82)
- plane/api/teamspaces/base.py (1): Teamspaces (15-95)

plane/__init__.py (4)
- plane/api/initiatives/base.py (1): Initiatives (16-97)
- plane/api/stickies.py (1): Stickies (8-82)
- plane/api/teamspaces/base.py (1): Teamspaces (15-95)
- plane/api/workspaces.py (1): Workspaces (8-40)

tests/unit/test_projects.py (3)
- tests/unit/test_workspaces.py (3): test_get_members (9-16), test_get_features (18-27), test_update_features (29-39)
- plane/client/plane_client.py (1): PlaneClient (22-61)
- plane/api/projects.py (3): get_members (86-97), get_features (99-107), update_features (109-122)

plane/api/base_resource.py (5)
- plane/api/initiatives/base.py (1): delete (75-82)
- plane/api/initiatives/labels.py (1): delete (67-74)
- plane/api/projects.py (1): delete (54-61)
- plane/api/stickies.py (1): delete (60-67)
- plane/api/teamspaces/base.py (1): delete (73-80)

plane/api/teamspaces/base.py (4)
- plane/models/teamspaces.py (4): CreateTeamspace (23-31), PaginatedTeamspaceResponse (45-50), Teamspace (4-20), UpdateTeamspace (34-42)
- plane/api/base_resource.py (5): BaseResource (12-98), _post (39-44), _get (32-37), _patch (53-58), _delete (60-65)
- plane/api/teamspaces/members.py (1): TeamspaceMembers (8-62)
- plane/api/teamspaces/projects.py (1): TeamspaceProjects (8-62)

plane/api/stickies.py (2)
- plane/models/stickies.py (4): CreateSticky (27-39), PaginatedStickyResponse (57-62), Sticky (6-24), UpdateSticky (42-54)
- plane/api/base_resource.py (5): BaseResource (12-98), _post (39-44), _get (32-37), _patch (53-58), _delete (60-65)

plane/api/initiatives/epics.py (2)
- plane/api/base_resource.py (4): BaseResource (12-98), _get (32-37), _post (39-44), _delete (60-65)
- plane/api/initiatives/projects.py (2): add (32-49), remove (51-62)

tests/unit/test_teamspaces.py (6)
- plane/client/plane_client.py (1): PlaneClient (22-61)
- plane/models/projects.py (1): Project (9-57)
- plane/models/teamspaces.py (2): CreateTeamspace (23-31), UpdateTeamspace (34-42)
- plane/api/teamspaces/base.py (4): create (25-39), delete (73-80), retrieve (41-52), update (54-71)
- plane/api/teamspaces/members.py (2): add (32-49), remove (51-62)
- plane/api/teamspaces/projects.py (2): add (32-49), remove (51-62)

tests/unit/test_workspaces.py (3)
- tests/unit/test_projects.py (2): test_get_features (99-108), test_update_features (110-127)
- plane/client/plane_client.py (1): PlaneClient (22-61)
- plane/api/workspaces.py (2): get_features (23-30), update_features (32-40)

plane/api/initiatives/__init__.py (1)
- plane/api/initiatives/base.py (1): Initiatives (16-97)

plane/api/workspaces.py (2)
- plane/models/workspaces.py (1): WorkspaceFeature (3-13)
- plane/api/base_resource.py (2): _get (32-37), _patch (53-58)

tests/unit/test_stickies.py (3)
- plane/client/plane_client.py (1): PlaneClient (22-61)
- plane/models/stickies.py (2): CreateSticky (27-39), UpdateSticky (42-54)
- plane/api/stickies.py (4): create (14-28), delete (60-67), retrieve (30-41), update (43-58)

plane/api/projects.py (3)
- plane/models/projects.py (1): ProjectFeature (138-149)
- plane/api/workspaces.py (2): get_features (23-30), update_features (32-40)
- plane/api/base_resource.py (2): _get (32-37), _patch (53-58)

plane/api/initiatives/projects.py (2)
- plane/models/projects.py (2): PaginatedProjectResponse (131-136), Project (9-57)
- plane/api/base_resource.py (4): BaseResource (12-98), _get (32-37), _post (39-44), _delete (60-65)

plane/api/initiatives/base.py (5)
- plane/models/initiatives.py (4): CreateInitiative (28-39), Initiative (6-25), PaginatedInitiativeResponse (56-61), UpdateInitiative (42-53)
- plane/api/base_resource.py (5): BaseResource (12-98), _post (39-44), _get (32-37), _patch (53-58), _delete (60-65)
- plane/api/initiatives/epics.py (1): InitiativeEpics (8-60)
- plane/api/initiatives/labels.py (5): InitiativeLabels (13-139), create (19-33), retrieve (35-46), update (48-65), delete (67-74)
- plane/api/initiatives/projects.py (1): InitiativeProjects (8-62)

plane/api/__init__.py (3)
- plane/api/initiatives/base.py (1): Initiatives (16-97)
- plane/api/stickies.py (1): Stickies (8-82)
- plane/api/teamspaces/base.py (1): Teamspaces (15-95)

plane/api/initiatives/labels.py (2)
- plane/models/initiatives.py (4): CreateInitiativeLabel (80-88), InitiativeLabel (64-77), PaginatedInitiativeLabelResponse (102-107), UpdateInitiativeLabel (91-99)
- plane/api/base_resource.py (5): BaseResource (12-98), _post (39-44), _get (32-37), _patch (53-58), _delete (60-65)

tests/unit/test_initiatives.py (7)
- plane/client/plane_client.py (1): PlaneClient (22-61)
- plane/models/initiatives.py (4): CreateInitiative (28-39), CreateInitiativeLabel (80-88), UpdateInitiative (42-53), UpdateInitiativeLabel (91-99)
- plane/models/projects.py (1): Project (9-57)
- plane/api/initiatives/base.py (4): create (27-41), delete (75-82), retrieve (43-54), update (56-73)
- plane/api/initiatives/labels.py (7): create (19-33), delete (67-74), retrieve (35-46), update (48-65), list_labels (91-105), add_labels (107-124), remove_labels (126-139)
- plane/api/initiatives/epics.py (2): add (30-47), remove (49-60)
- plane/api/initiatives/projects.py (2): add (32-49), remove (51-62)

plane/models/initiatives.py (1)
- plane/models/enums.py (1): InitiativeState (84-91)

plane/api/teamspaces/members.py (2)
- plane/models/users.py (2): PaginatedUserLiteResponse (21-26), UserLite (7-18)
- plane/api/base_resource.py (4): BaseResource (12-98), _get (32-37), _post (39-44), _delete (60-65)

plane/api/teamspaces/projects.py (4)
- plane/models/projects.py (2): PaginatedProjectResponse (131-136), Project (9-57)
- plane/api/base_resource.py (4): BaseResource (12-98), _get (32-37), _post (39-44), _delete (60-65)
- plane/api/initiatives/projects.py (1): add (32-49)
- plane/api/teamspaces/members.py (1): add (32-49)
🪛 Ruff (0.14.4)
tests/unit/test_teamspaces.py
58-59: try-except-pass detected, consider logging the exception
(S110)
58-58: Do not catch blind exception: Exception
(BLE001)
73-74: try-except-pass detected, consider logging the exception
(S110)
73-73: Do not catch blind exception: Exception
(BLE001)
133-134: try-except-pass detected, consider logging the exception
(S110)
133-133: Do not catch blind exception: Exception
(BLE001)
179-180: try-except-pass detected, consider logging the exception
(S110)
179-179: Do not catch blind exception: Exception
(BLE001)
tests/unit/test_stickies.py
57-58: try-except-pass detected, consider logging the exception
(S110)
57-57: Do not catch blind exception: Exception
(BLE001)
72-73: try-except-pass detected, consider logging the exception
(S110)
72-72: Do not catch blind exception: Exception
(BLE001)
tests/unit/test_initiatives.py
61-62: try-except-pass detected, consider logging the exception
(S110)
61-61: Do not catch blind exception: Exception
(BLE001)
79-80: try-except-pass detected, consider logging the exception
(S110)
79-79: Do not catch blind exception: Exception
(BLE001)
119-120: try-except-pass detected, consider logging the exception
(S110)
119-119: Do not catch blind exception: Exception
(BLE001)
145-146: try-except-pass detected, consider logging the exception
(S110)
145-145: Do not catch blind exception: Exception
(BLE001)
165-166: try-except-pass detected, consider logging the exception
(S110)
165-165: Do not catch blind exception: Exception
(BLE001)
186-187: try-except-pass detected, consider logging the exception
(S110)
186-186: Do not catch blind exception: Exception
(BLE001)
224-225: try-except-pass detected, consider logging the exception
(S110)
224-224: Do not catch blind exception: Exception
(BLE001)
247-248: try-except-pass detected, consider logging the exception
(S110)
247-247: Do not catch blind exception: Exception
(BLE001)
291-292: try-except-pass detected, consider logging the exception
(S110)
291-291: Do not catch blind exception: Exception
(BLE001)
🔇 Additional comments (34)
.gitignore (1)
49-49: ✓ Best practice addition.

Adding `.env` to `.gitignore` prevents accidental commits of local environment configurations containing secrets or credentials, which is essential for SDK development and testing workflows.

plane/api/teamspaces/__init__.py (1)

1-4: LGTM! The package initialization follows the established pattern for API resource modules, cleanly re-exporting the `Teamspaces` class from the base module.

plane/__init__.py (2)

2-14: LGTM! The imports for the new API resources (Initiatives, Stickies, Teamspaces, Workspaces) are correctly structured and follow the existing import organization pattern.

37-44: LGTM! The public API surface correctly exports the new resources, maintaining alphabetical ordering within the `__all__` list.

tests/unit/test_workspaces.py (1)

18-27: LGTM! The test correctly verifies that workspace features can be retrieved and that the response includes all expected feature flags (initiatives, project_grouping, teams, customers, wiki, pi).

README.md (1)

237-239: LGTM! The documentation correctly lists the new client resources (teamspaces, stickies, initiatives) in the Client Structure section, accurately reflecting the expanded API surface.
plane/api/__init__.py (2)
3-5: LGTM! The imports for the new API resources are properly structured and follow the established module organization pattern.

14-16: LGTM! The `__all__` list correctly exports the new resources, maintaining consistency with the module's public API surface.

plane/models/users.py (2)

4-4: LGTM! The import of `PaginatedResponse` is correctly added to support the new paginated user lite response model.

21-27: LGTM! The `PaginatedUserLiteResponse` class follows the established pattern for paginated response models, with appropriate configuration (`extra="allow"`, `populate_by_name=True`) and a correctly typed `results` field.

plane/api/projects.py (3)

8-8: LGTM! The `ProjectFeature` import is correctly added to support the new feature management methods.

99-107: LGTM! The `get_features` method correctly retrieves project features using the established pattern: GET request to the features endpoint and validation into the `ProjectFeature` model.

109-122: LGTM! The `update_features` method correctly updates project features following the established pattern: PATCH request with `model_dump(exclude_none=True)` serialization and response validation into `ProjectFeature`.

plane/api/initiatives/__init__.py (1)
1-4: LGTM! The package initialization follows the established pattern, cleanly re-exporting the `Initiatives` class from the base module.

plane/models/enums.py (1)

84-92: LGTM! The `InitiativeState` enum is well-defined and follows the same pattern as other enums in the file. The state transitions (DRAFT → PLANNED → ACTIVE → COMPLETED/CLOSED) represent a clear lifecycle for initiatives.

plane/client/plane_client.py (2)
4-4: LGTM! The new imports for `Initiatives`, `Stickies`, and `Teamspaces` are properly organized alphabetically.

Also applies to: 11-12

59-61: LGTM! The initialization of the three new API clients follows the established pattern and is consistent with other resource initializations in the `PlaneClient` class.

plane/api/workspaces.py (2)

4-4: LGTM! The import of `WorkspaceFeature` is correctly placed and necessary for the new feature management methods.

23-40: LGTM! The `get_features` and `update_features` methods are well-implemented and follow the established patterns in the codebase:

- Clear docstrings with parameter and return type documentation
- Proper use of `model_validate` and `model_dump(exclude_none=True)`
- Consistent endpoint construction
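The serialize/validate round trip that pattern relies on can be sketched with a cut-down model (the field set here is abbreviated for illustration):

```python
from pydantic import BaseModel, ConfigDict


class WorkspaceFeatureStub(BaseModel):
    # Abbreviated stand-in for the SDK's WorkspaceFeature model.
    model_config = ConfigDict(extra="allow", populate_by_name=True)

    initiatives: bool
    teams: bool


# Serialize for the PATCH body, dropping None values...
feature = WorkspaceFeatureStub(initiatives=True, teams=False)
body = feature.model_dump(exclude_none=True)

# ...then validate the server's JSON response back into the model.
updated = WorkspaceFeatureStub.model_validate({"initiatives": True, "teams": False})
print(body, updated.initiatives)
```

`exclude_none=True` keeps the PATCH body minimal, while `model_validate` raises early if the server response drifts from the declared schema.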
plane/models/projects.py (1)
138-149: LGTM! Verify API always returns all feature fields.

The `ProjectFeature` model is well-defined with clear boolean flags for different project features. All fields are required (non-optional), which means the API must always return complete feature flag data.

Ensure the backend API always returns all seven feature fields (epics, modules, cycles, views, pages, intakes, work_item_types) in responses to avoid validation errors. If any fields might be absent, consider making them optional with default values:

```python
epics: bool = False
modules: bool = False
# etc.
```

tests/unit/test_projects.py (1)
110-127: LGTM! The `test_update_features` test properly verifies both the update operation and the presence of all feature fields in the response.

tests/unit/test_stickies.py (2)

11-31: LGTM! The `TestStickiesAPI` class properly tests list operations with and without query parameters, following established patterns from other test files.

60-93: LGTM! The CRUD test methods (`test_create_sticky`, `test_retrieve_sticky`, `test_update_sticky`) are well-structured and follow the established testing patterns. The cleanup logic in `test_create_sticky` appropriately handles potential exceptions.

plane/api/base_resource.py (1)
60-65: Verified that the bulk delete implementations exist and are used intentionally.

The codebase already implements multiple bulk delete operations using the `_delete()` method with data payloads across different endpoints (teamspaces, initiatives, labels, etc.), confirming that the backend API supports DELETE requests with payloads. HTTP/1.1 (RFC 7231) does not forbid a body on DELETE requests (though it assigns it no defined semantics, and some intermediaries may reject it), and bulk delete endpoints that accept lists of resource identifiers in the request body are a recognized pattern.

The implementation is production-ready and follows established patterns already in use throughout the codebase.
plane/api/initiatives/projects.py (1)
1-63: LGTM! Clean implementation following established patterns.

The `InitiativeProjects` API client is well-structured with proper type hints, model validation, and follows the consistent patterns established in similar sub-resources throughout the codebase. The implementation correctly uses the updated `_delete` method that supports payload data.

plane/api/teamspaces/members.py (1)

1-63: LGTM! Consistent sub-resource implementation.

The `TeamspaceMembers` API client follows the established patterns for sub-resource management. The implementation is clean with proper type hints, model validation, and consistent method signatures.

plane/api/stickies.py (1)

1-83: LGTM! Well-structured CRUD API client.

The `Stickies` API client implements a complete CRUD interface following the established patterns in the SDK. Proper use of Pydantic model serialization with `model_dump(exclude_none=True)` and response validation with `model_validate`. Docstrings are clear and comprehensive.

plane/api/teamspaces/projects.py (1)

1-63: LGTM! Consistent implementation with other project sub-resources.

The `TeamspaceProjects` API client mirrors the pattern used in `InitiativeProjects`, providing a consistent interface for managing project associations. The implementation is clean and follows established conventions.

plane/api/initiatives/epics.py (1)

1-60: LGTM! Clean sub-resource implementation.

The `InitiativeEpics` API client follows the consistent pattern established for other initiative sub-resources. The implementation correctly uses the updated `_delete` method with payload support and provides proper type hints and model validation.

plane/api/teamspaces/base.py (1)

1-96: LGTM! Well-architected resource with sub-resources.

The `Teamspaces` API client provides a complete CRUD interface and properly initializes its sub-resources (`projects` and `members`) in the constructor. The implementation follows the established patterns in the SDK with proper model serialization and validation. The sub-resource initialization pattern aligns with similar resources like `Initiatives`.

plane/models/teamspaces.py (1)

1-50: Well-structured Pydantic models.

The teamspace models follow consistent patterns with proper separation between create, update, and response models. The use of `ConfigDict(extra="allow", populate_by_name=True)` provides flexibility for API evolution.

plane/api/initiatives/base.py (1)

16-25: Initiatives client wiring and sub-resources look consistent.

The main Initiatives resource and its labels/projects/epics sub-resources are wired consistently with other API clients, and the base path construction matches `/workspaces/{workspace_slug}/initiatives...`. No functional issues spotted here.

plane/api/initiatives/labels.py (1)

13-89: Labels CRUD and listing mirror other resources cleanly.

The label CRUD and workspace/initiative listing endpoints are consistent with the Initiatives client and use `model_dump(...)`/`model_validate(...)` correctly. Paths under `/workspaces/{workspace_slug}/initiatives/...` look right.

plane/models/stickies.py (1)

6-62: Sticky models look coherent and align with pagination pattern.

The Sticky/CreateSticky/UpdateSticky schemas and `PaginatedStickyResponse(PaginatedResponse)` match the expected usage: the response model allows extra fields, the request models ignore extras, and the description/HTML/binary variants are consistently typed. Nothing blocking here.
```python
class CreateInitiative(BaseModel):
    """Create initiative model."""

    model_config = ConfigDict(extra="allow", populate_by_name=True)

    name: str
    description_html: str | None = None
    start_date: str | None = None
    end_date: str | None = None
    logo_props: dict | None = None
    state: InitiativeState | None = None
    lead: str | None = None


class UpdateInitiative(BaseModel):
    """Update initiative model."""

    model_config = ConfigDict(extra="allow", populate_by_name=True)

    name: str | None = None
    description_html: str | None = None
    start_date: str | None = None
    end_date: str | None = None
    logo_props: dict | None = None
    state: InitiativeState | None = None
    lead: str | None = None
```
🧩 Analysis chain
Add description to create/update initiative models for consistency
Initiative declares description: str | None, and tests construct CreateInitiative(description=...) and UpdateInitiative(description=...). However, CreateInitiative/UpdateInitiative currently only declare description_html, relying on extra="allow" to accept description.
To align the schema with actual usage and the response model, consider:
```diff
 class CreateInitiative(BaseModel):
@@
-    name: str
-    description_html: str | None = None
+    name: str
+    description: str | None = None
+    description_html: str | None = None
@@
 class UpdateInitiative(BaseModel):
@@
-    name: str | None = None
-    description_html: str | None = None
+    name: str | None = None
+    description: str | None = None
+    description_html: str | None = None
```

This keeps runtime behavior the same but fixes the type-level mismatch with both the tests and the Initiative model.
I'll verify the claims in the review comment by examining the codebase. Let me search for the Initiative model definition, test usages, and any other relevant code.
---
🏁 Script executed:

```shell
#!/bin/bash
# Find and display the Initiative model definition
rg -n "class Initiative" plane/models/initiatives.py -A 20
```

Length of output: 1256
🏁 Script executed:

```shell
#!/bin/bash
# Search for test files that use CreateInitiative or UpdateInitiative with description parameter
rg -n "CreateInitiative\(.*description" --type py -C 2
```

Length of output: 0
🏁 Script executed:

```shell
#!/bin/bash
# Search for test files that use CreateInitiative or UpdateInitiative with description parameter (UpdateInitiative variant)
rg -n "UpdateInitiative\(.*description" --type py -C 2
```

Length of output: 509
🏁 Script executed:

```shell
#!/bin/bash
# Find all test files related to initiatives
fd "test.*initiative" --type f -i
```

Length of output: 30
🏁 Script executed:

```shell
#!/bin/bash
# Search for any imports or usages of CreateInitiative in the codebase
rg -n "CreateInitiative\|UpdateInitiative" --type py -C 1 | head -100
```

Length of output: 0
Let me get more complete context on test usage and the full test file:
---
🏁 Script executed:
```shell
#!/bin/bash
# View the complete test file for initiatives
cat tests/unit/test_initiatives.py
```

Length of output: 11091
🏁 Script executed:
```shell
#!/bin/bash
# Search more broadly for CreateInitiative construction in tests
rg -n "CreateInitiative" tests/unit/test_initiatives.py -A 3 -B 1
```

Length of output: 2628
🏁 Script executed:
```shell
#!/bin/bash
# Check API endpoints or handlers that use CreateInitiative/UpdateInitiative
rg -rn "CreateInitiative\|UpdateInitiative" plane/ --type py --include="*.py" -A 2 -B 1 | grep -v "class Create\|class Update" | head -80
```

Length of output: 441
Now let me verify if there are any API handlers or serialization logic that might be affected by adding the description field explicitly:
---
🏁 Script executed:
```shell
#!/bin/bash
# Search for initiative create/update handlers
rg -n "def create\|def update" plane/ --type py -B 2 -A 5 | grep -A 5 -B 2 -i "initiative"
```

Length of output: 0
🏁 Script executed:
```shell
#!/bin/bash
# Check if there's any pydantic validation or field aliasing
rg -n "Field\|validator\|field_validator" plane/models/initiatives.py -B 1 -A 1
```

Length of output: 0
Add description field to CreateInitiative and UpdateInitiative models
The `Initiative` response model declares `description: str | None = None` (line 13), and tests consistently construct `CreateInitiative` and `UpdateInitiative` with the `description` parameter. However, the input models (lines 28-53) only declare `description_html`, accepting `description` solely through `extra="allow"`. This creates a type-level mismatch where type checkers would not recognize `description` as a valid field.
To align the schema with test usage and the response model while ensuring proper type safety, add the description field to both models:
```diff
 class CreateInitiative(BaseModel):
     """Create initiative model."""

     model_config = ConfigDict(extra="allow", populate_by_name=True)

     name: str
+    description: str | None = None
     description_html: str | None = None


 class UpdateInitiative(BaseModel):
     """Update initiative model."""

     model_config = ConfigDict(extra="allow", populate_by_name=True)

     name: str | None = None
+    description: str | None = None
     description_html: str | None = None
```
description_html: str | None = None📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```python
class CreateInitiative(BaseModel):
    """Create initiative model."""

    model_config = ConfigDict(extra="allow", populate_by_name=True)

    name: str
    description: str | None = None
    description_html: str | None = None
    start_date: str | None = None
    end_date: str | None = None
    logo_props: dict | None = None
    state: InitiativeState | None = None
    lead: str | None = None


class UpdateInitiative(BaseModel):
    """Update initiative model."""

    model_config = ConfigDict(extra="allow", populate_by_name=True)

    name: str | None = None
    description: str | None = None
    description_html: str | None = None
    start_date: str | None = None
    end_date: str | None = None
    logo_props: dict | None = None
    state: InitiativeState | None = None
    lead: str | None = None
```
🤖 Prompt for AI Agents
In plane/models/initiatives.py around lines 28 to 53, the CreateInitiative and
UpdateInitiative input models are missing an explicit description: str | None =
None field (they only have description_html and rely on extra="allow"), causing
a type mismatch with the response model and tests; add description: str | None =
None to both CreateInitiative and UpdateInitiative declarations so type checkers
recognize the field and the schemas align with tests and the Initiative response
model.
```python
@pytest.fixture
def initiative_data(self) -> CreateInitiative:
    """Create test initiative data."""
    timestamp = datetime.now().strftime("%Y%m%d_%H%M%S_%f")
    return CreateInitiative(
        name=f"Test Initiative {timestamp}",
        description="Test initiative for smoke tests",
    )
```
🧩 Analysis chain
Model field usage relies on pydantic `extra="allow"` behavior

In `initiative_data` and `test_update_initiative`, you pass `description=` into `CreateInitiative`/`UpdateInitiative`, but the corresponding models only declare `description_html` (see plane/models/initiatives.py). This works at runtime because `extra="allow"` accepts and forwards unknown fields, but type checkers and tooling will not see `description` as a valid argument/field.

Either:
- Add `description: str | None = None` to the create/update models, or
- Change the tests to use `description_html=` if that's the intended API.

Aligning models and usage will make the SDK easier to consume.
Also applies to: 93-97
I'll verify the model definitions and test code to confirm this issue.
---
🏁 Script executed:
```shell
#!/bin/bash
# Find and inspect the CreateInitiative and UpdateInitiative models
rg -n "class CreateInitiative|class UpdateInitiative" --type=py -A 10 plane/models/
```

Length of output: 2605
🏁 Script executed:
```shell
#!/bin/bash
# Check the test file at the specified lines
sed -n '40,48p' tests/unit/test_initiatives.py
sed -n '93,97p' tests/unit/test_initiatives.py
```

Length of output: 629
🏁 Script executed:
```shell
#!/bin/bash
# Search for description field usage in the models file
rg -n "description" plane/models/initiatives.py -B 2 -A 2
```

Length of output: 994
🏁 Script executed:
```shell
#!/bin/bash
# Find the exact Pydantic config for these models
rg -n "class Config|extra\s*=" plane/models/initiatives.py -B 3 -A 3
```

Length of output: 1723
Add description field to CreateInitiative and UpdateInitiative models
The issue is confirmed. Both `CreateInitiative` (line 28) and `UpdateInitiative` (line 42) declare only `description_html`, but the tests pass `description=`, which works due to `extra="allow"` yet bypasses type checking. The read `Initiative` model has `description`, and the label models (`CreateInitiativeLabel`, `UpdateInitiativeLabel`) also have `description`, making this an API inconsistency.

Add `description: str | None = None` to both models in plane/models/initiatives.py to align the create/update models with the read model and fix type visibility.
🤖 Prompt for AI Agents
In plane/models/initiatives.py around lines 28 and 42, the CreateInitiative and
UpdateInitiative models only define description_html but tests and other models
use description; add a description: str | None = None field to both
CreateInitiative and UpdateInitiative so they match the read Initiative and the
label models, ensuring type visibility and API consistency; update
imports/typing if needed (e.g., from __future__ import annotations or typing for
Optional) and run tests to confirm.
```python
def test_list_teamspaces_with_params(
    self, client: PlaneClient, workspace_slug: str
) -> None:
    """Test listing teamspaces with query parameters."""
    params = {"per_page": 5}
    response = client.teamspaces.list(workspace_slug, params=params)
    assert response is not None
    assert hasattr(response, "results")
    assert len(response.results) <= 5
```
Duplicate test method name will cause test shadowing.
The method `test_list_teamspaces_with_params` is defined in both `TestTeamspacesAPI` (lines 23-31) and `TestTeamspacesAPICRUD` (lines 95-112). The second definition in `TestTeamspacesAPICRUD` shadows the first, so the test in `TestTeamspacesAPI` never executes.

Additionally, lines 104-111 access `response.results[0]` without verifying the list is non-empty, which could raise an `IndexError` if no teamspaces exist.
Apply this diff to rename the duplicate test and add a safety check:
```diff
-    def test_list_teamspaces_with_params(
+    def test_list_teamspaces_with_params_detailed(
         self, client: PlaneClient, workspace_slug: str
     ) -> None:
-        """Test listing teamspaces with query parameters."""
+        """Test listing teamspaces with query parameters and validate detailed fields."""
         params = {"per_page": 5}
         response = client.teamspaces.list(workspace_slug, params=params)
         assert response is not None
         assert hasattr(response, "results")
         assert len(response.results) <= 5
+        if not response.results:
+            pytest.skip("No teamspaces available to test detailed fields")
         assert hasattr(response.results[0], "id")
         assert hasattr(response.results[0], "name")
         assert hasattr(response.results[0], "description_html")
         assert hasattr(response.results[0], "description_stripped")
         assert hasattr(response.results[0], "description_binary")
         assert hasattr(response.results[0], "logo_props")
         assert hasattr(response.results[0], "lead")
         assert hasattr(response.results[0], "workspace")
```
assert hasattr(response.results[0], "workspace")Also applies to: 95-112
🤖 Prompt for AI Agents
In tests/unit/test_teamspaces.py around lines 23-31 and 95-112, there is a
duplicate test method name (test_list_teamspaces_with_params) causing shadowing
and an unsafe access to response.results[0]; rename the earlier test (lines
23-31) to a unique name such as test_list_teamspaces_with_params_basic (or any
non-duplicating name) and in the later block (lines 95-112) add a safety check
before indexing results (e.g., assert that response.results is truthy or
len(response.results) > 0) so you only access response.results[0] when the list
is non-empty.
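The shadowing itself is plain Python class semantics, independent of pytest: a class body is executed like a namespace, so binding the same method name twice keeps only the last binding, and pytest collects a single test. A minimal sketch (class and method names are illustrative, not the real suite):

```python
class TestExample:
    def test_list_with_params(self) -> None:
        return None  # first definition — unreachable after the rebinding below

    def test_list_with_params(self) -> None:  # noqa: F811 — shadows the first
        return None


# The class namespace holds exactly one attribute for the duplicated name,
# which is why the first test body can never be collected or run:
methods = [name for name in vars(TestExample) if name.startswith("test_")]
assert methods == ["test_list_with_params"]
```

Linters flag this as F811 (redefinition of an unused name), so enabling that check in CI would catch future duplicates before they silently drop a test.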
Description
Type of Change
Screenshots and Media (if applicable)
Test Scenarios
References
Summary by CodeRabbit
New Features
Documentation
Tests
Chores