feat(zendesk): add Zendesk datasource package #288
Conversation
Surfaces Zendesk Tickets, Users, Organizations and ticket Comments as Forest Admin collections, mirroring how the @forestadmin/datasource-* packages work in the Node ecosystem. Highlights:
* `forest_admin_datasource_zendesk` package under packages/, follows the same structure as forest_admin_datasource_active_record (Zeitwerk loader, gemspec, BaseCollection sharing sort/page/projection/PK-lookup).
* Four collections: ZendeskTicket, ZendeskUser, ZendeskOrganization, ZendeskComment, with intra-datasource ManyToOne / OneToMany relations declared in the schema (Ticket -> requester/assignee/organization/comments, Comment -> author/ticket, User -> organization/requested_tickets, Organization -> users/tickets).
* Custom-field introspection at boot: queries /ticket_fields, /user_fields, /organization_fields and exposes admin-defined fields as real Forest columns. Filter translator rewrites custom column names to Zendesk Search syntax (`custom_field_<id>` for tickets; key for keyed user/org fields).
* ConditionTreeTranslator produces Zendesk Search queries with caller-timezone-aware Date interpretation (Date is start-of-day in the caller's TZ, converted to UTC; DST-aware via ActiveSupport).
* Loud errors on critical paths (search, count, fetch_ticket_comments) via wrapped APIError; best-effort logging + safe defaults on enrichment paths (bulk user/org lookups, schema introspection).
* ZendeskComment uses a synthetic single-PK String (`<id>-<ticket_id>`) to side-step forest_admin_rails 1.26.2's URL constraint, which rejects the `|` character used by the toolkit's native pack_id for composite keys.
* Standalone ZendeskComment list returns [] when no ticket scope is supplied (Zendesk has no /comments listing endpoint).

Tests: 131 examples, 98.5% line / 90.7% branch coverage. WebMock blocks real network. Run with `bundle exec rspec` from the package directory.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
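The synthetic single-PK scheme described above can be sketched as follows. `SyntheticCommentId` is a hypothetical stand-in for the package's actual encode/decode code, shown only to illustrate the `<id>-<ticket_id>` packing:

```ruby
# Pack a comment's composite identity into one String so the record URL never
# contains the "|" that forest_admin_rails 1.26.2 rejects in pack_id keys.
module SyntheticCommentId
  SEPARATOR = '-'

  # e.g. 123, 456 -> "123-456"
  def self.encode(comment_id, ticket_id)
    "#{comment_id}#{SEPARATOR}#{ticket_id}"
  end

  # "123-456" -> [123, 456]; a half that is not a valid integer decodes to nil
  def self.decode(packed)
    comment_part, ticket_part = packed.to_s.split(SEPARATOR, 2)
    [to_int(comment_part), to_int(ticket_part)]
  end

  def self.to_int(part)
    Integer(part, exception: false)
  end
  private_class_method :to_int
end
```

Returning nil for an unparsable half is what later lets the scope resolver discard bogus ids instead of acting on them.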
4 new issues
- Apply rubocop -A across the package (whitespace, indentation, hash
alignment, frozen string literals).
- Add packages/forest_admin_datasource_zendesk/.rubocop.yml inheriting
the repo root config and:
- relaxing Metrics/* limits for the wide-fact-table schema files
- exempting spec/** from RSpec stylistic cops we use heavily
(StubbedMock, MessageSpies, LeakyConstantDeclaration,
ConstantDefinitionInBlock) and from Layout/LineLength
- Collapse must_succeed's two identical rescue branches in client.rb.
- Move Branch constant out of the `private` scope in comment.rb.
Specs still pass: 131 examples, 0 failures, 98.7% line / 90.9% branch.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Resolves the 12 review comments left by qltysh on the previous push.

Duplication
- New `Collections::Searchable` mixin holds the list/find/aggregate flow that User and Organization both used. Each collection now only declares its zendesk_resource, sortable_fields, find_one and serialize, eliminating the ~16 lines of similar code qlty flagged.

Complexity
- BaseCollection now owns the (caller, filter, aggregation, _limit) contract method. Subclasses override `aggregate_count(caller, filter)` with a 2-arg helper. Ticket/User/Organization no longer carry their own copies of the same validation+raise+count boilerplate.
- Ticket#embed_relations split into `embed_users` and `embed_organizations` (cyclomatic complexity 11 -> ~3 each).
- Comment#list extracts `resolve_scope(filter)` and `fetch_comments` (complexity 9 -> 3). #extract_field_lookup pulled out `values_from_leaf` (complexity 5 -> 2).
- ConditionTreeTranslator#format_value split into format_date and format_string (complexity 7 -> 3). Date / TZ degradation logic now lives inside format_date with a tighter rescue scope.
- CustomFieldsIntrospector#introspect split into `usable_field?` and `build_entry` predicates (complexity 10 -> 3).
- BaseCollection#translate_sort extracts `sort_field_and_direction` for the Sort::Clause vs hash branching.

Parameter counts
- Client#search collapsed five named kwargs into a single `**opts` hash + `build_search_params` helper. Callers (the Searchable mixin and Ticket#fetch_records) keep the same kwarg call sites.
- aggregate's 4-param signature only remains on BaseCollection where it's mandated by ForestAdminDatasourceToolkit::Components::Contracts::CollectionContract; subclasses no longer carry it.

131 specs still pass, 98.4% line / 91.4% branch coverage. RuboCop clean across the package.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
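A Searchable-style split of the kind this commit describes can look roughly like the sketch below. All names here (`Searchable`, `search_records`, `zendesk_resource`, `FakeClient`, `UserCollection`) are illustrative stand-ins, not the package's real API:

```ruby
# Illustrative sketch of the mixin split: Searchable owns the shared
# list/count flow; the host collection declares the resource-specific pieces.
module Searchable
  def list(query:, page: 1, per_page: 100)
    search_records(query: query, page: page, per_page: per_page)
      .map { |record| serialize(record) }
  end

  def aggregate_count(query:)
    client.count(zendesk_resource, query: query)
  end
end

# Hypothetical host wiring, for demonstration only.
class FakeClient
  def count(_resource, query:)
    7 # canned total; a real client would hit the search count endpoint
  end
end

class UserCollection
  include Searchable

  def client
    @client ||= FakeClient.new
  end

  def zendesk_resource
    'user'
  end

  def search_records(query:, page:, per_page:)
    [{ 'id' => 1, 'name' => 'Ada' }] # canned page of results
  end

  def serialize(record)
    record
  end
end
```

The point of the shape is that each collection shrinks to four declarations while the flow lives in one place.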
# Toolkit contract — subclasses override `aggregate_count` instead of
# touching the 4-arg signature directly.
def aggregate(caller, filter, aggregation, _limit = nil)
Bugs caught by macroscopeapp on the previous push:
1. (high) Searchable#search_records dropped filter.search from the query, while compose_count_query included it. Result: list and count returned different totals when the user typed in the search box. Both paths now share `compose_full_query`, which mirrors the logic the Ticket collection has used all along.
2. (high) sort_field_and_direction used `entry[:ascending] || entry['ascending']`, which silently flips a descending sort to ascending when both keys exist with different values, and returns nil instead of false when only the symbol key is set. Switched to `key?` so explicit `ascending: false` round-trips correctly.
3. (low) format_value(nil) emitted `field:` clauses, which Zendesk's search treats as a presence check — semantically wrong for an EQUAL/NOT_EQUAL filter. Now raises UnsupportedOperatorError with a message pointing the caller at PRESENT/BLANK.

Plus qlty: Comment#resolve_scope dropped to complexity ~3 by extracting `decoded_synthetic_pairs`.

Regression tests added for all three bugs. 135 specs, 0 failures, 98.6% line / 90.0% branch. RuboCop clean.

Note: BaseCollection#aggregate's 4-param signature still trips qlty's `function-parameters` cop. That signature is structurally required by ForestAdminDatasourceToolkit::Components::Contracts::CollectionContract; subclasses already override the lower-arity `aggregate_count(caller, filter)` so the 4-arg form lives in exactly one place. No fix is possible without breaking the toolkit contract.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
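Bug 2 reduces to a two-line comparison. `buggy_direction` and `fixed_direction` below are hypothetical helpers mirroring the before/after of `sort_field_and_direction`:

```ruby
# Before: `||` falls through on an explicit false and prefers whichever
# key happens to hold a truthy value.
def buggy_direction(entry)
  entry[:ascending] || entry['ascending']
end

# After: `key?` makes the symbol key authoritative when present, so an
# explicit `ascending: false` round-trips instead of being dropped.
def fixed_direction(entry)
  entry.key?(:ascending) ? entry[:ascending] : entry['ascending']
end
```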
Multi-tenancy: ConditionTreeTranslator no longer keeps the custom-field mapping on class-level state. The mapping now lives on the Datasource instance (`Datasource#custom_field_mapping`), and is threaded through each translator call as `custom_fields:`. Two datasources with different mappings used to step on each other; they're now isolated.

Translator robustness:
- `format_string` now backslash-escapes internal double quotes when it needs to wrap in quotes. `subject = 'test "with" quotes'` previously emitted a malformed query.
- `IN []` and `NOT_IN []` previously produced an empty string that the branch translator silently dropped, turning "match nothing" into "match everything". They now raise UnsupportedOperatorError.

Comment#decoded_synthetic_pairs now drops pairs where either half is nil. Previously, `id = "abc-456"` (invalid comment_id, valid ticket_id) contributed `456` to the ticket scope and silently fetched every comment for that ticket — wrong, since the row the user clicked on doesn't exist.

Regression tests added for all four bugs. 144 specs, 0 failures, 98.6% line / 90.4% branch.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
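Both translator fixes fit in a few lines. The sketch below uses illustrative names (`TranslatorSketch`, `translate_in`), not the package's real internals:

```ruby
# Sketch of the two fixes: escape embedded double quotes when wrapping a value
# in quotes, and refuse empty IN/NOT_IN value lists instead of silently
# emitting an empty clause that ends up matching everything.
class TranslatorSketch
  UnsupportedOperatorError = Class.new(StandardError)

  def format_string(value)
    return value unless value.match?(/[\s"]/)

    # Wrap in quotes, backslash-escaping any quotes inside the value.
    %("#{value.gsub('"', '\"')}")
  end

  def translate_in(field, values)
    raise UnsupportedOperatorError, 'empty IN/NOT_IN matches nothing' if values.empty?

    values.map { |v| "#{field}:#{format_string(v)}" }.join(' ')
  end
end
```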
def translate_leaf(leaf)
  field = mapped_field(leaf.field)
  value = leaf.value

  return "requester:#{format_value(value)}" if leaf.field == 'requester_email' && leaf.operator == Operators::EQUAL
🟢 Low query/condition_tree_translator.rb:59
The special requester: mapping on line 63 only applies to the EQUAL operator. For NOT_EQUAL, IN, or NOT_IN on requester_email, the code falls through to the generic path and emits the unmappable field name requester_email instead of requester, producing malformed queries like -requester_email:value or requester_email:val1 requester_email:val2.
def translate_leaf(leaf)
field = mapped_field(leaf.field)
value = leaf.value
- return "requester:#{format_value(value)}" if leaf.field == 'requester_email' && leaf.operator == Operators::EQUAL
+ field = 'requester' if leaf.field == 'requester_email'
case leaf.operator
Evidence trail:
packages/forest_admin_datasource_zendesk/lib/forest_admin_datasource_zendesk/query/condition_tree_translator.rb at REVIEWED_COMMIT:
- Line 58: `field = mapped_field(leaf.field)`
- Line 63: `return "requester:#{format_value(value)}" if leaf.field == 'requester_email' && leaf.operator == Operators::EQUAL` - special case only for EQUAL
- Lines 65-68: generic case statement uses `field` variable for NOT_EQUAL, IN, NOT_IN
- Lines 87-89: `mapped_field` returns `@custom_fields[field] || field` - returns field name unchanged if not in custom_fields
build.yml:
- Add the package to the lint matrix (RuboCop runs against every
package's tree).
- Add to the test matrix; CI runs rspec under BUNDLE_GEMFILE=Gemfile-test
like the other datasource packages.
- Add the coverage.json path so qlty consumes our coverage too.
Gemfile-test: pulls forest_admin_datasource_toolkit from the local
sibling package (matches the active_record / mongoid pattern). Adds
simplecov-html, simplecov_json_formatter and webmock so the CI run
emits the JSON coverage file qlty expects.
spec_helper.rb: load simplecov-html / simplecov_json_formatter inside
a guarded require so local `bundle exec rspec` (no formatters) still
works while CI's Gemfile-test path emits coverage.json.
.releaserc.js:
- prepareCmd bumps the package's version.rb alongside the others.
- successCmd builds and pushes the gem to RubyGems.
- @semantic-release/git tracks the version.rb in the release commit.
Tested locally under both Gemfile and Gemfile-test: 144 specs pass,
RuboCop clean, JSON coverage emitted.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Wire write support through api.connection (POST/PUT/DELETE) with the existing must_succeed/APIError policy.
- Collections resolve write filters via list(..., ['id']) so any read-side condition also drives updates and deletes.
- build_payload strips read-only fields, renames ticket_type to Zendesk's `type`, lifts `description` into the initial comment on create only, and folds custom-field columns into custom_fields[] (tickets) or keyed user_fields / organization_fields (users / orgs).
- Schema flips user-editable columns to is_read_only: false (server-managed columns stay read-only); the custom-fields introspector now defaults discovered columns to writable.
- Comment remains read-only - Zendesk has no standalone write endpoint for it.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
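The payload rules read clearly as code. This is a simplified sketch under assumed names (`build_ticket_payload`, a column-to-field-id mapping argument), not the shipped `build_payload`:

```ruby
# Simplified sketch of the ticket payload rules: drop server-managed fields,
# rename ticket_type to Zendesk's `type`, fold custom columns into
# custom_fields[] entries of the form { id:, value: }.
READ_ONLY_FIELDS = %w[id created_at updated_at].freeze

def build_ticket_payload(record, custom_field_ids)
  base = record.reject { |key, _| READ_ONLY_FIELDS.include?(key) }
  base['type'] = base.delete('ticket_type') if base.key?('ticket_type')

  custom = custom_field_ids.filter_map do |column, zendesk_id|
    value = base.delete(column)
    { 'id' => zendesk_id, 'value' => value } unless value.nil?
  end
  base['custom_fields'] = custom unless custom.empty?
  base
end
```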
  end
  %w[id created_at updated_at].each { |k| base.delete(k) }
  base['organization_fields'] = org_fields unless org_fields.empty?
  base

  end
  %w[id created_at updated_at].each { |k| base.delete(k) }
  base['user_fields'] = user_fields unless user_fields.empty?
  base
matthv left a comment
The comment density across this package feels much higher than the rest of the monorepo (compare with active_record DS which is nearly comment-free).
A lot just restate the code. The `# Error policy:` block at the top doesn't add anything that must_succeed / best_effort don't already say.
Looks like the Rakefile here isn't actually used (CI and release both bypass rake). Worth dropping?
# Both list and count must build the query the same way: condition tree
# AND `filter.search`. A previous version of search_records omitted the
# search term, which made the count badge disagree with the rendered
# list ("100 results, count says 5") — guard against that by sharing
# this builder.
Since this is a new datasource, the "previous version" referenced here only ever existed during this PR's development. Once merged, the comment loses its anchor.
# `key?` (rather than `||`) for the boolean: `entry[:ascending] || ...`
# would silently flip a descending sort to ascending if both symbol and
# string keys exist with different values, and falls through to the
# other key whenever ascending is explicitly false.
The invariant is already covered by ticket_spec.rb:130 (preserves ascending=false even when both symbol and string keys are present). The test name says enough. This comment, and the duplicated # Regression: block in the test itself, can both be dropped.
def fetch_records(filter, timezone)
  ids = extract_id_lookup(filter.condition_tree)
  return ids.filter_map { |id| datasource.client.find_ticket(id) } if ids
This line triggers one GET /tickets/:id call per id. With id IN [1..50], that's 50 sequential HTTP round-trips (multiple seconds of latency).
Zendesk has a /tickets/show_many?ids=… endpoint that returns up to 100 tickets in a single call. The bulk_show_many helper in client.rb already implements that pattern for users and organizations (see fetch_users_by_ids / fetch_organizations_by_ids). Worth applying the same approach to tickets to stay consistent and avoid the latency cliff.
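A minimal sketch of the suggested bulk path, assuming a generic `get_json` callable for the HTTP layer (the real client would route this through `zendesk_api`; names here are illustrative):

```ruby
# Batch PK lookups through /tickets/show_many (Zendesk documents a cap of
# 100 ids per request) instead of issuing one GET /tickets/:id per id.
SHOW_MANY_BATCH = 100

def fetch_tickets_by_ids(ids, &get_json)
  ids.each_slice(SHOW_MANY_BATCH).flat_map do |batch|
    response = get_json.call("/api/v2/tickets/show_many?ids=#{batch.join(',')}")
    response.fetch('tickets', [])
  end
end
```

With `id IN [1..150]` this issues two requests instead of 150 sequential round-trips.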
Reconsidering the Comment collection design
The log + return [] when list is called without a ticket scope (lines 19-27) is the visible symptom of a deeper layering issue. Walking through the current state:
- Comment is registered as a top-level collection, so it appears in the Forest sidebar alongside Ticket / User / Organization.
- When a user clicks it, Forest calls list(caller, filter, projection) with no ticket_id in the filter.
- The collection logs an info line nobody will ever read, returns [], and the user sees an empty grid with no explanation.
- To get a stable URL for an individual comment via Ticket > comments, the schema invents a synthetic <comment_id>-<ticket_id> String PK to dodge forest_admin_rails 1.26.2's URL constraint.
That's a lot of complexity (sidebar entry, scope resolution, synthetic PK, custom extract_field_lookup, Branch-walking logic) to support a use case Zendesk's API doesn't actually expose: standalone comment browsing. There's no /comments/{id} endpoint; comments are always fetched as /tickets/{id}/comments.
Why "just hide the sidebar entry" doesn't work
Checked the publication decorator: remove_collection cascades. published? in PublicationCollectionDecorator:79-85 will unpublish Ticket > comments as soon as Comment is removed. The toolkit doesn't currently expose a "hide from sidebar but keep accessible via relation" flag at the collection level.
Proposed simpler design: drop the Comment collection
Comments are read-only here (no create/update/delete/aggregate). The only valid access pattern is "give me the comments of this ticket." An embedded structured field on Ticket models that cleanly.
add_external_relation in the customizer layer does this, but it's the wrong layer for a datasource gem. We can do the same thing natively at the datasource layer because ColumnSchema#column_type accepts a structured [hash] shape and Ticket#list already controls projection-aware fetching.
# In Ticket#initialize:
COMMENT_THREAD_SCHEMA = {
'id' => 'Number',
'body' => 'String',
'html_body' => 'String',
'public' => 'Boolean',
'author_email' => 'String',
'author_name' => 'String',
'created_at' => 'Date'
}.freeze
add_field('comments', ColumnSchema.new(
column_type: [COMMENT_THREAD_SCHEMA],
filter_operators: []
))
# In Ticket#list:
def list(caller, filter, projection)
records = fetch_records(filter, timezone_for(caller))
rows = build_rows(records, projection, caller)
rows = embed_comments(records, rows) if want_comments?(projection)
rows
end
def want_comments?(projection)
projection.nil? || Array(projection).map(&:to_s).any? { |p| p == 'comments' || p.start_with?('comments:') }
end
The lazy gating mirrors the existing needs_requester_email? pattern, so list views (where projection is always explicit) don't trigger N+1.
What disappears
- comment.rb (the entire Comment collection)
- The synthetic <comment_id>-<ticket_id> PK and the forest_admin_rails 1.26.2 workaround comment
- extract_field_lookup, the Branch traversal, decoded_synthetic_pairs, decode_synthetic_id
- The OneToManySchema comments relation on Ticket (replaced by the structured field)
- The log + return [] block this comment thread is anchored on
- comment_spec.rb in its entirety, replaced by smaller projection-gated tests on ticket_spec.rb
Trade-off
author_id becomes a flat author_email / author_name denormalized in the embedded structure, instead of a clickable FK to ZendeskUser. For a v1 read-only thread view it's acceptable: users read the conversation on the ticket page and don't typically jump to the agent's profile from a comment. If that need surfaces, we add it in a follow-up by reintroducing the relation.
# quotes per Zendesk's documented quoting rules; without this, a value
# like `test "with" quotes` would emit a malformed query.
def format_string(value)
  return value unless value.match?(/[\s"]/)
The regex /[\s"]/ only triggers quoting on whitespace or double quotes. Zendesk Search has other characters that change query semantics: ( ) for grouping, : for field separators, - for negation. A value like (test gets emitted bare and Zendesk parses subject:(test as a grouping expression instead of a literal match.
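The widened trigger is a one-character-class change. The regex below is a suggestion matching the characters this comment lists, not the shipped pattern:

```ruby
# Quote whenever the value contains whitespace, a double quote, or a Zendesk
# Search metacharacter: ( ) for grouping, : as field separator, - for negation.
NEEDS_QUOTING = /[\s"():\-]/

def needs_quoting?(value)
  value.match?(NEEDS_QUOTING)
end
```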
def compose_full_query(caller, filter)
  translated = ForestAdminDatasourceZendesk::Query::ConditionTreeTranslator.call(
    filter.condition_tree, timezone: timezone_for(caller),
    custom_fields: datasource.custom_field_mapping
  )
  [translated, filter.search].compact.reject(&:empty?).join(' ')
end
Same query builder lives in two places: compose_full_query(caller, filter) here and Ticket#build_query(filter, timezone) in ticket.rb:119. Bodies are identical apart from how they get the timezone (mixin reads timezone_for(caller) from BaseCollection, Ticket receives it as an arg).
Worth lifting into BaseCollection#build_zendesk_query(caller, filter) and having both Searchable and Ticket call it. Single source of truth for the translator + filter.search composition, and one less place for them to drift apart.
Sketch:
# base_collection.rb
def build_zendesk_query(caller, filter)
translated = ForestAdminDatasourceZendesk::Query::ConditionTreeTranslator.call(
filter.condition_tree,
timezone: timezone_for(caller),
custom_fields: datasource.custom_field_mapping
)
[translated, filter.search].compact.reject(&:empty?).join(' ')
end
# searchable.rb
def compose_full_query(caller, filter)
build_zendesk_query(caller, filter)
end
# ticket.rb (drop the local build_query, update its callers)
def aggregate_count(caller, filter)
datasource.client.count('ticket', query: build_zendesk_query(caller, filter))
end
def fetch_records(filter, caller) # signature update: pass caller, drop timezone
ids = extract_id_lookup(filter.condition_tree)
return ids.filter_map { |id| datasource.client.find_ticket(id) } if ids
query = build_zendesk_query(caller, filter)
...
end
Once factored, compose_full_query is just a thin alias that the mixin can keep for clarity, or Searchable can call build_zendesk_query directly and the alias goes away.
def translate_page(page)
  return [1, Client::MAX_PER_PAGE] if page.nil?

  per_page = page.limit&.positive? ? [page.limit, Client::MAX_PER_PAGE].min : Client::MAX_PER_PAGE
  page_num = (page.offset.to_i / per_page) + 1
  [page_num, per_page]
end
Zendesk Search caps total results at 1000 (in addition to the per-page max of 100, which is already handled via Client::MAX_PER_PAGE). The total cap isn't represented anywhere in the code: translate_page happily computes page_num = 11 past offset 1000, sends it to Zendesk, gets a 422 back, and the user sees a generic APIError with no actionable message.
Suggested fix: add a MAX_TOTAL_RESULTS = 1000 constant, clamp page_num to MAX_TOTAL_RESULTS / per_page, and raise a ForestException with a clear message when exceeded:
MAX_TOTAL_RESULTS = 1000
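Expanding the truncated suggestion above into a runnable sketch, with `MAX_PER_PAGE` and a local `ForestException` standing in for `Client::MAX_PER_PAGE` and the toolkit's exception class:

```ruby
MAX_PER_PAGE = 100
MAX_TOTAL_RESULTS = 1000
ForestException = Class.new(StandardError)

# Clamp paging to Zendesk Search's documented 1000-result cap and fail with
# an actionable message instead of letting Zendesk return a generic 422.
def translate_page(limit, offset)
  per_page = limit&.positive? ? [limit, MAX_PER_PAGE].min : MAX_PER_PAGE
  page_num = (offset.to_i / per_page) + 1
  max_page = MAX_TOTAL_RESULTS / per_page
  if page_num > max_page
    raise ForestException,
          "Zendesk Search returns at most #{MAX_TOTAL_RESULTS} results; narrow the filter instead of paging further"
  end

  [page_num, per_page]
end
```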
…lint cleanup)
Architecture
- Drop the standalone Comment collection (no /comments/{id} endpoint
in Zendesk; the synthetic <comment_id>-<ticket_id> PK and the empty
sidebar grid were complexity for a use case the API does not expose).
- Replace it with a structured `comments` array column on Ticket
(column_type: [COMMENT_THREAD_SCHEMA]), mirroring the customizer's
AddExternalRelation shape. Fetched lazily in Ticket#list only when
the projection asks for it (same pattern as needs_requester_email?).
Author email/name resolved through the existing fetch_users_by_ids
bulk endpoint. Schema/lazy-fetch helpers extracted to
CommentsEmbedder to keep the Ticket class under 100 lines.
- Tickets PK lookup now goes through a single
`fetch_tickets_by_ids` (POST /tickets/show_many) instead of
N sequential GET /tickets/{id} round-trips. Wrapped in
must_succeed since this is the primary read path, not enrichment.
Lint / project standards
- Delete the per-package .rubocop.yml that was loosening
ClassLength/MethodLength/LineLength/ParameterLists. Package now
respects the project-wide limits.
- Refactor to fit those limits: Client::Writes and
Client::Introspection sibling modules; Ticket::SchemaDefinition,
RelationEmbedder, Serializer, CommentsEmbedder; Comment helpers
(the ones still useful) inlined where used.
- Compact the three find_*/Client methods through a single
find_one(api_collection, id) helper.
- Extract build_zendesk_query(caller, filter) on BaseCollection;
Searchable#compose_full_query and Ticket#build_query were
identical and now both call it.
Security / consistency
- Flip rubygems_mfa_required to 'false' to match every other gemspec
in the monorepo, and add the gemspec to Gemspec/RequireMFA: Exclude.
MFA-on-publish is a monorepo-wide decision, not a one-off opt-in.
Bug fixes
- ConditionTreeTranslator#format_string now also quotes values
containing `(`, `)`, `:`, `-`. Previously a value like `(test`
was emitted bare and Zendesk parsed it as a grouping expression.
Comment cleanup
- Strip "this method does X" comments across the package; keep
comments that document non-obvious WHYs (regressions, external
constraints, surprising parser behavior).
- Remove the unused Client#raw method.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
  end
end

def put_resource(path, key, id, attributes)
# Forest column name -> Zendesk Search field name. Lives on the instance
# (not the translator class) so multiple Zendesk datasources in the same
# agent don't share state.
def build_custom_field_mapping(ticket_cf, user_cf, org_cf)
🟢 Low forest_admin_datasource_zendesk/datasource.rb:33
When a user or organization custom field has the same column_name as a ticket custom field, build_custom_field_mapping silently drops the user/org mapping. Line 35 unconditionally writes ticket mappings, then line 39 uses ||= for user/org fields — so collisions keep the ticket value and lose the user/org zendesk_key. This causes search queries on those user/org fields to use the wrong Zendesk search field name, returning incorrect results or no matches.
Evidence trail:
packages/forest_admin_datasource_zendesk/lib/forest_admin_datasource_zendesk/datasource.rb (lines 33-42) at REVIEWED_COMMIT: build_custom_field_mapping uses = for tickets and ||= for user/org fields.
packages/forest_admin_datasource_zendesk/lib/forest_admin_datasource_zendesk/schema/custom_fields_introspector.rb (lines 73-78) at REVIEWED_COMMIT: column_naming shows ticket column_name = 'custom_<id>', user/org column_name = raw['key'] || 'custom_<id>'.
packages/forest_admin_datasource_zendesk/lib/forest_admin_datasource_zendesk/collections/base_collection.rb:87 at REVIEWED_COMMIT: shared custom_field_mapping passed to ConditionTreeTranslator for all collection types.
packages/forest_admin_datasource_zendesk/lib/forest_admin_datasource_zendesk/query/condition_tree_translator.rb:84 at REVIEWED_COMMIT: mapped_field uses @custom_fields[field] || field to resolve field names.
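One possible shape for the fix: key the mapping by resource type so ticket and user/org custom fields can no longer collide. All names here are hypothetical, not the shipped datasource API:

```ruby
# Namespace the custom-field mapping per resource type instead of merging the
# three hashes into one, which lets identical column_names coexist.
def build_custom_field_mapping(ticket_cf, user_cf, org_cf)
  {
    'ticket' => ticket_cf.dup,
    'user' => user_cf.dup,
    'organization' => org_cf.dup
  }
end

# Resolve a Forest column to its Zendesk search field for one resource type,
# falling back to the column name when no custom mapping exists.
def mapped_field(mapping, resource, field)
  mapping.fetch(resource, {})[field] || field
end
```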
case node.operator
when Operators::EQUAL then [node.value]
when Operators::IN then Array(node.value)
🟢 Low collections/base_collection.rb:34
When node.operator is EQUAL and node.value is nil, extract_id_lookup returns [nil]. Callers using this to short-circuit to /resource/{id} lookups attempt to fetch /resource/ with a nil ID, causing API errors. Consider returning nil when node.value is nil to skip the short-circuit.
case node.operator
- when Operators::EQUAL then [node.value]
+ when Operators::EQUAL then node.value.nil? ? nil : [node.value]
when Operators::IN then Array(node.value)
Evidence trail:
base_collection.rb:34 — `when Operators::EQUAL then [node.value]` returns `[nil]` when value is nil. searchable.rb:7 — `ids_in_filter(filter) ? find_records_by_id(filter) : search_records(...)` treats `[nil]` as truthy. searchable.rb:24 — `find_records_by_id` calls `find_one(nil)`. ticket.rb:104-106 — calls `fetch_tickets_by_ids([nil])`. condition_tree_leaf.rb:15 — `def initialize(field, operator, value = nil)` accepts nil value with no validation preventing EQUAL+nil combination.
spec.add_dependency 'activesupport', '>= 6.1'
spec.add_dependency 'zeitwerk', '~> 2.3'
spec.add_dependency 'zendesk_api', '~> 3.0'
end
🔴 Critical forest_admin_datasource_zendesk/forest_admin_datasource_zendesk.gemspec:32
The gemspec is missing a runtime dependency on forest_admin_datasource_toolkit, which the library requires at runtime via require 'forest_admin_datasource_toolkit' and inherits from heavily. When the gem is installed by an end user, forest_admin_datasource_toolkit will not be pulled in, causing a LoadError on startup. Add spec.add_dependency 'forest_admin_datasource_toolkit' to the gemspec.
spec.add_dependency 'activesupport', '>= 6.1'
spec.add_dependency 'zeitwerk', '~> 2.3'
spec.add_dependency 'zendesk_api', '~> 3.0'
+ spec.add_dependency 'forest_admin_datasource_toolkit'
def format_string(value)
  return value unless value.match?(/[\s"():-]/)

  %("#{value.gsub('"', '\\"')}")
end
🟢 Low query/condition_tree_translator.rb:115
In format_string, backslashes are not escaped before escaping double quotes. A value like foo\"bar becomes "foo\\"bar" after wrapping in quotes, which contains an unescaped quote and produces a malformed Zendesk query. Escape backslashes first, then double quotes.
def format_string(value)
return value unless value.match?(/[\s"():-]/)
- %{"#{value.gsub('"', '\"')}"}
+ %{"#{value.gsub('\\', '\\\\\\\\').gsub('"', '\"')}"}
end
Evidence trail:
packages/forest_admin_datasource_zendesk/lib/forest_admin_datasource_zendesk/query/condition_tree_translator.rb lines 115-118 at REVIEWED_COMMIT: `format_string` method performs `value.gsub('"', '\\"')` without first escaping backslashes, producing malformed output when the input contains `\"`.
# [1.28.0](v1.27.2...v1.28.0) (2026-05-05)

### Features

* **zendesk:** add Zendesk datasource package ([#288](#288)) ([fa7efd5](fa7efd5))
🎉 This PR is included in version 1.28.0 🎉 The release is available on GitHub release. Your semantic-release bot 📦🚀
Summary
- New `forest_admin_datasource_zendesk` package that surfaces Zendesk Tickets, Users, Organizations, and ticket Comments as Forest Admin collections — same architectural pattern as the existing `forest_admin_datasource_active_record` / `_mongoid` / `_rpc` siblings, mirroring the Node `@forestadmin/datasource-*` ecosystem.
- Introspects `ticket_fields`, `user_fields`, `organization_fields` at boot and exposes them as real Forest columns. Translator rewrites custom-column names to Zendesk Search syntax.
- `Date` filters: Forest's caller timezone is threaded through the translator; bare `Date` values are interpreted as start-of-day in the caller's TZ then converted to UTC (DST-aware).
- Critical paths (`search`, `count`, `fetch_ticket_comments`) raise `APIError`; enrichment paths (bulk user/org lookups, schema introspection) log a warning and degrade safely.
- Comment uses a synthetic single PK (`<comment_id>-<ticket_id>` String) to side-step `forest_admin_rails` 1.26.2's URL constraint, which rejects the `|` character used by the toolkit's native composite-key `pack_id`. Filed-this-as-a-bug-worth-fixing-upstream note in the relevant code comment.

What's NOT in this PR (deliberate v1 scope)
- No `create`/`update`/`delete` on any collection. The Compliance handoff workflow (update ticket status / reassign group) is the obvious next add.
- `Rails.cache` wrapping in a follow-up if rate limits become a concern.

Test plan
`cd packages/forest_admin_datasource_zendesk && bundle install && bundle exec rspec` — 131 examples, 0 failures
🤖 Generated with Claude Code
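The Date-filter rule in the summary above can be sketched with a fixed UTC offset. The real translator uses ActiveSupport time zones so DST transitions are handled; this pure-Ruby version only shows the start-of-day conversion:

```ruby
require 'date'
require 'time'

# A bare Date filter value means "start of that day in the caller's
# timezone", expressed in UTC for the Zendesk Search query.
def date_to_utc_start(date, utc_offset)
  Time.new(date.year, date.month, date.day, 0, 0, 0, utc_offset).utc
end
```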
Note
Add Zendesk datasource package with collections for tickets, users, organizations, and comments
- `forest_admin_datasource_zendesk` gem that wraps the Zendesk API as a Forest Admin datasource, supporting list, count, create, update, and delete for tickets, users, organizations, and comments.
- `Client` wraps `zendesk_api` with consistent error handling: critical paths raise `APIError`, non-critical paths log and return safe defaults.
- `ConditionTreeTranslator` converts Forest condition trees to Zendesk Search query strings, with timezone-aware date handling and custom field mapping.
- `CustomFieldsIntrospector` fetches Zendesk custom field definitions at boot and produces typed `ColumnSchema` entries for tickets, users, and organizations.
- The `Comment` collection is read-only and only returns results when a `ticket_id` scope is present in the filter; it uses a synthetic composite `<comment_id>-<ticket_id>` primary key.
- `.releaserc.js` and CI in `.github/workflows/build.yml` are extended to build, publish, and report coverage for the new gem.

Changes since #288 opened
- Refactored `collections.Ticket` to use four new mixin modules and changed comments from a `OneToMany` relation to a read-only structured array column [22d782e]
- Updated `ForestAdminDatasourceZendesk::Query::ConditionTreeTranslator.format_string` to always quote and escape string filter values [22d782e]
- Refactored `ForestAdminDatasourceZendesk::Client` to extract write and introspection operations into separate modules and added bulk ticket fetching [22d782e]
- Added `collections.BaseCollection.build_zendesk_query` and refactored dependent methods to use it [22d782e]
- Removed the `Comment` collection registration from `ForestAdminDatasourceZendesk::Datasource` [22d782e]
- Changed `collections.BaseCollection.project` to use `to_h` for projection filtering [22d782e]
- Added tests for `ForestAdminDatasourceZendesk::Query::ConditionTreeTranslator` value quoting [22d782e]
- Updated `packages/forest_admin_datasource_zendesk/collections/ticket_spec.rb` to reflect schema and behavior changes [22d782e]
- Added `packages/forest_admin_datasource_zendesk/forest_admin_datasource_zendesk.gemspec` to Gemspec/RequireMFA exclusions and set metadata `rubygems_mfa_required` to false [22d782e]
- Introduced a `ForestAdminDatasourceZendesk` module with included `ForestAdminDatasourceToolkit::Components::Query` and locally-rebound constants [ed28d88]
- Converted `expect(...).to receive` to post-invocation `allow(...).to receive` with `expect(...).to have_received` assertions [ed28d88]
- Updated the `Collections::Ticket` spec for client interaction verification [ed28d88]
- Added `leaf_class` and `branch_class` helpers in the `Query::ConditionTreeTranslator` spec [ed28d88]
- Added `packages/forest_admin_datasource_zendesk/spec/**/*` to the `Metrics/ModuleLength` exclusion list in `.rubocop.yml` [ed28d88]
- Updated `Client` and `Schema::CustomFieldsIntrospector` specs [ed28d88]

Macroscope summarized 4ac39d5.