Allow string-literal module export names #181
Conversation
- Parse string-literal named imports and exports in module syntax
- Add coverage for JS, JSON, and YAML module export names
- Update docs to describe quoted export identifiers
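For context, the syntax this PR parses is the standard ES2022 "arbitrary module namespace identifiers" feature. A minimal runnable sketch (the `data:` URL import is only a trick to keep the example in one file; real code would use separate modules):

```javascript
// A module exporting under a name that is not a valid identifier.
const source = 'const value = 42; export { value as "foo-bar" };';

const loaded = (async () => {
  const mod = await import("data:text/javascript," + encodeURIComponent(source));
  // Names that are not valid identifiers are read with bracket access
  // on the module namespace object.
  return mod["foo-bar"];
})();

loaded.then((v) => console.log(v)); // prints 42
```

On the import side the same name is rebound with `as`, e.g. `import { "foo-bar" as fooBar } from "./source.js"`, since a string literal cannot itself be a binding.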
📝 Walkthrough

Added parser support and runtime loader handling for string-literal module export names (e.g., `"foo-bar"`).

Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
🚥 Pre-merge checks: ✅ 5 checks passed
Actionable comments posted: 1
🧹 Nitpick comments (2)
docs/language-restrictions.md (1)
178-205: Update the opening extension list in this section too.

These new JSON/YAML examples make the earlier "Supported file extensions" sentence in the Modules section read stale, because it still sounds like only `.js`/`.jsx`/`.ts`/`.tsx`/`.mjs` modules are supported. Please update that prose so the section does not contradict itself.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@docs/language-restrictions.md` around lines 178 - 205: The Modules section's opening sentence listing supported file extensions is now out of date; update that prose to include JSON and YAML module extensions (add `.json`, `.yaml`, `.yml`) and briefly note that JSON/YAML imports expose top-level keys as named exports (consistent with examples like `import { "foo-bar" as fooBar } from "./config.json"` and `import { "0" as firstDoc } from "./multi-doc-index.yaml"`); ensure the updated sentence remains concise and references the single-top-level-document constraint for `.yaml` / `.yml` imports so the section no longer contradicts the later examples and YAML explanation.

tests/language/modules/string-literal-named-import-export.js (1)
22-46: Add error-case coverage for the new grammar restrictions.

These tests only exercise successful imports/re-exports. The parser changes also add rejection paths like `import { "foo-bar" } ...` and non-re-export `export { "foo-bar" }`, so adding one or two syntax-error assertions here would make regressions much easier to catch and would pair well with matching native parser tests.

Based on learnings: "When modifying AST logic, scope chain, evaluator, or value types, update the native Pascal tests in `units/*.Test.pas`"; as per coding guidelines, JavaScript tests should "cover happy paths, edge cases, and error cases".

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/language/modules/string-literal-named-import-export.js` around lines 22 - 46, Add negative tests to the "string-literal named imports and exports" suite that assert the parser rejects string-literal specifiers; specifically add assertions that code like import { "foo-bar" } from "./mod.js" and a non-re-export export like export { "foo-bar" } produce syntax errors. Place these alongside the existing tests (in the same describe block) and use the same test harness/assertion style as the other cases (e.g., expect a thrown syntax error) so regressions in the new grammar restrictions are caught.
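For reference, the two rejection paths the comment names are syntax errors in the standard ES grammar as well: a string-literal `ImportSpecifier` requires `as`, and a string-literal local name in `export { ... }` is only legal with a `from` clause. A self-contained sketch checking both against V8 via `data:` URL imports (not this repo's test harness):

```javascript
// Resolves to true when the source fails to parse as a module with a
// SyntaxError; parsing happens before the dummy specifier is resolved.
const rejects = (src) =>
  import("data:text/javascript," + encodeURIComponent(src)).then(
    () => false,
    (err) => err instanceof SyntaxError
  );

const checks = Promise.all([
  rejects('import { "foo-bar" } from "data:text/javascript,";'), // missing `as`
  rejects('export { "foo-bar" };'),                               // no `from` clause
]);

checks.then((results) => console.log(results)); // [ true, true ]
```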
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@units/Goccia.Parser.pas`:
- Around line 2432-2451: The error currently reports the statement-level
coordinates instead of the offending specifier; capture and store the local-name
token coordinates when you record LocalNameTokenTypes (e.g. add arrays like
LocalNameLines/LocalNameColumns or a LocalNameTokens array and set them where
LocalNameTokenTypes[SpecifierCount] := NameToken.TokenType is assigned), then
when raising TGocciaSyntaxError for LocalNameTokenTypes[I] = gttString use those
stored coordinates (instead of Line/Column) so the diagnostic points at the
actual token.
---
Nitpick comments:
In `@docs/language-restrictions.md`:
- Around line 178-205: The Modules section's opening sentence listing supported
file extensions is now out of date; update that prose to include JSON and YAML
module extensions (add `.json`, `.yaml`, `.yml`) and briefly note that JSON/YAML
imports expose top-level keys as named exports (consistent with examples like
import { "foo-bar" as fooBar } from "./config.json" and import { "0" as firstDoc
} from "./multi-doc-index.yaml"); ensure the updated sentence remains concise
and references the single-top-level-document constraint for `.yaml` / `.yml`
imports so the section no longer contradicts the later examples and YAML
explanation.
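As context for that docs fix, the quoted-name forms the prompt references look like this in Goccia. Note this is engine-specific behavior: standard ESM JSON modules expose only a default export, so the snippet illustrates this project's loader rather than portable JavaScript.

```js
// Goccia-specific sketch: top-level JSON/YAML keys become named exports,
// so keys that are not valid identifiers need string-literal names plus `as`.
import { "foo-bar" as fooBar } from "./config.json";
import { "0" as firstDoc } from "./multi-doc-index.yaml";
```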
In `@tests/language/modules/string-literal-named-import-export.js`:
- Around line 22-46: Add negative tests to the "string-literal named imports and
exports" suite that assert the parser rejects string-literal specifiers;
specifically add assertions that code like import { "foo-bar" } from "./mod.js"
and a non-re-export export like export { "foo-bar" } produce syntax errors.
Place these alongside the existing tests (in the same describe block) and use
the same test harness/assertion style as the other cases (e.g., expect a thrown
syntax error) so regressions in the new grammar restrictions are caught.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 4b73e91a-4699-4ac2-852f-07a4cc754820
📒 Files selected for processing (8)
- docs/language-restrictions.md
- docs/tutorial.md
- tests/language/modules/helpers/string-literal-export-source.js
- tests/language/modules/helpers/string-literal-keys.json
- tests/language/modules/helpers/string-literal-keys.yaml
- tests/language/modules/helpers/string-literal-re-exporter.js
- tests/language/modules/string-literal-named-import-export.js
- units/Goccia.Parser.pas
Benchmark Results

274 benchmarks · Interpreted: 🟢 67 improved · 🔴 82 regressed · 125 unchanged · avg -0.1%

arraybuffer.js — Interp: 🟢 1, 🔴 6, 7 unch. · avg -0.9% · Bytecode: 🔴 14 · avg -9.9%
arrays.js — Interp: 🔴 17, 2 unch. · avg -2.2% · Bytecode: 🔴 19 · avg -13.6%
async-await.js — Interp: 🟢 1, 🔴 3, 2 unch. · avg -1.0% · Bytecode: 🔴 6 · avg -9.2%
classes.js — Interp: 🟢 2, 🔴 12, 17 unch. · avg -0.8% · Bytecode: 🟢 1, 🔴 21, 9 unch. · avg -7.6%
closures.js — Interp: 🟢 1, 🔴 1, 9 unch. · avg -0.2% · Bytecode: 🔴 11 · avg -8.9%
collections.js — Interp: 🟢 6, 🔴 2, 4 unch. · avg +1.1% · Bytecode: 🟢 2, 🔴 9, 1 unch. · avg -7.3%
destructuring.js — Interp: 🟢 8, 🔴 6, 8 unch. · avg +0.5% · Bytecode: 🔴 21, 1 unch. · avg -9.7%
fibonacci.js — Interp: 🟢 1, 🔴 3, 4 unch. · avg -0.3% · Bytecode: 🔴 8 · avg -12.8%
for-of.js — Interp: 🟢 6, 1 unch. · avg +2.6% · Bytecode: 🔴 6, 1 unch. · avg -3.9%
helpers/bench-module.js — Interp: 0 · Bytecode: 0
iterators.js — Interp: 🟢 10, 🔴 1, 9 unch. · avg +1.3% · Bytecode: 🔴 20 · avg -9.8%
json.js — Interp: 🟢 2, 🔴 5, 13 unch. · avg -0.2% · Bytecode: 🔴 20 · avg -9.8%
jsx.jsx — Interp: 🟢 4, 🔴 5, 12 unch. · avg +0.0% · Bytecode: 🔴 21 · avg -6.1%
modules.js — Interp: 🔴 7, 2 unch. · avg -3.5% · Bytecode: 🔴 8, 1 unch. · avg -15.9%
numbers.js — Interp: 🟢 5, 🔴 3, 3 unch. · avg +0.8% · Bytecode: 🔴 11 · avg -11.5%
objects.js — Interp: 7 unch. · avg +0.5% · Bytecode: 🔴 7 · avg -6.0%
promises.js — Interp: 🟢 4, 🔴 2, 6 unch. · avg +0.5% · Bytecode: 🔴 12 · avg -7.7%
regexp.js — Interp: 🟢 5, 🔴 1, 5 unch. · avg +1.0% · Bytecode: 🔴 11 · avg -6.4%
strings.js — Interp: 🔴 7, 4 unch. · avg -2.0% · Bytecode: 🟢 1, 🔴 10 · avg -5.2%
typed-arrays.js — Interp: 🟢 11, 🔴 1, 10 unch. · avg +1.1% · Bytecode: 🔴 21, 1 unch. · avg -9.9%
Measured on ubuntu-latest x64. Benchmark ranges compare cached main-branch min/max ops/sec with the PR run; overlapping ranges are treated as unchanged noise. Percentage deltas are secondary context.
Suite Timing
Measured on ubuntu-latest x64.
Actionable comments posted: 1
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@units/Goccia.Parser.pas`:
- Around line 443-447: TGocciaParser.ConsumeModuleExportName currently only
accepts gttIdentifier and gttString, so reserved-word identifier-names like
"if", "class", "from" are rejected; change the function to accept IdentifierName
forms as well by updating the Match call to include the token kind used for
identifier-name tokens (e.g., gttIdentifierName) or by adding a runtime check
(e.g., TokenIsIdentifierName(Current) / checking Current.Kind in [gttIdentifier,
gttIdentifierName, gttString]) before returning Previous; reference
TGocciaParser.ConsumeModuleExportName, Match, gttIdentifier, gttString, and
Previous when making this change so ImportDeclaration/ExportDeclaration accept
reserved-word IdentifierName forms per the spec.
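For reference, the spec behavior this prompt asks for — reserved words are valid IdentifierName export names, and importing one requires rebinding with `as` — can be sketched in plain Node (the nested `data:` URL imports are only used to keep the example self-contained):

```javascript
// `ModuleExportName` includes IdentifierName, so a reserved word like `if`
// is a legal export name.
const inner = 'const branch = "taken"; export { branch as if };';

// A re-exporter that imports the reserved-word name and rebinds it.
const outer =
  'import { if as branch } from "data:text/javascript,' +
  encodeURIComponent(inner) +
  '"; export { branch };';

const loaded = import("data:text/javascript," + encodeURIComponent(outer))
  .then((mod) => mod.branch);

loaded.then((v) => console.log(v)); // prints: taken
```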
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 1ff37f23-e33d-40c5-b198-8020d38c8042
📒 Files selected for processing (9)
- README.md
- docs/design-decisions.md
- docs/language-restrictions.md
- docs/tutorial.md
- tests/language/modules/helpers/multi-document-re-exporter.js
- tests/language/modules/helpers/multi-document-stream.yaml
- tests/language/modules/yaml-multi-document-streams.js
- units/Goccia.Modules.Loader.pas
- units/Goccia.Parser.pas
✅ Files skipped from review due to trivial changes (4)
- tests/language/modules/helpers/multi-document-stream.yaml
- tests/language/modules/helpers/multi-document-re-exporter.js
- README.md
- docs/tutorial.md
🚧 Files skipped from review as they are similar to previous changes (1)
- docs/language-restrictions.md
🧹 Nitpick comments (1)
units/Goccia.Parser.pas (1)
2350-2370: Consider extracting specifier metadata into a record type.

The parallel arrays (`LocalNames`, `ExportedNames`, `LocalNameTokenTypes`, `LocalNameLines`, `LocalNameColumns`) work correctly but could be consolidated into a single dynamic array of records for improved maintainability and reduced risk of index misalignment:

♻️ Optional refactor using a record type

```diff
+type
+  TExportSpecifierInfo = record
+    LocalName: string;
+    ExportedName: string;
+    LocalNameTokenType: TGocciaTokenType;
+    LocalNameLine: Integer;
+    LocalNameColumn: Integer;
+  end;
+
 var
   ExportsTable: TStringStringMap;
-  LocalNames: array of string;
-  ExportedNames: array of string;
-  LocalNameTokenTypes: array of TGocciaTokenType;
-  LocalNameLines: array of Integer;
-  LocalNameColumns: array of Integer;
+  Specifiers: array of TExportSpecifierInfo;
   SpecifierCount: Integer;
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@units/Goccia.Parser.pas` around lines 2350 - 2370, The parser currently uses parallel dynamic arrays (LocalNames, ExportedNames, LocalNameTokenTypes, LocalNameLines, LocalNameColumns) plus SpecifierCount which risks index misalignment; define a new record (e.g. TExportSpecifier with fields Name, ExportedName, TokenType, Line, Column, maybe IsReExport/NameToken) and replace the parallel arrays with a single dynamic array (e.g. Specifiers: array of TExportSpecifier), update SpecifierCount usages to Length(Specifiers) and change all code that reads/writes LocalNames/ExportedNames/LocalNameTokenTypes/LocalNameLines/LocalNameColumns/NameToken to use the corresponding fields on Specifiers[I], preserving semantics in routines like the named export parser and any iterations or increments, keeping HasFromClauseAfterNamedExports unchanged.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Nitpick comments:
In `@units/Goccia.Parser.pas`:
- Around line 2350-2370: The parser currently uses parallel dynamic arrays
(LocalNames, ExportedNames, LocalNameTokenTypes, LocalNameLines,
LocalNameColumns) plus SpecifierCount which risks index misalignment; define a
new record (e.g. TExportSpecifier with fields Name, ExportedName, TokenType,
Line, Column, maybe IsReExport/NameToken) and replace the parallel arrays with a
single dynamic array (e.g. Specifiers: array of TExportSpecifier), update
SpecifierCount usages to Length(Specifiers) and change all code that
reads/writes
LocalNames/ExportedNames/LocalNameTokenTypes/LocalNameLines/LocalNameColumns/NameToken
to use the corresponding fields on Specifiers[I], preserving semantics in
routines like the named export parser and any iterations or increments, keeping
HasFromClauseAfterNamedExports unchanged.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 0109c8df-03ee-4d24-ba40-65802914c008
📒 Files selected for processing (4)
tests/language/modules/helpers/identifier-name-export-source.jstests/language/modules/helpers/identifier-name-re-exporter.jstests/language/modules/identifier-name-module-export-names.jsunits/Goccia.Parser.pas
✅ Files skipped from review due to trivial changes (3)
- tests/language/modules/helpers/identifier-name-re-exporter.js
- tests/language/modules/helpers/identifier-name-export-source.js
- tests/language/modules/identifier-name-module-export-names.js
Summary
String-literal module export names such as `"foo-bar"` and `"0"` are now supported in JavaScript, JSON, and YAML modules.

Fixes #165
Testing
- Added `tests/language/modules/string-literal-named-import-export.js` to cover JavaScript, JSON, and YAML module cases.
- Covered `as` bindings for string-literal imports.

Summary by CodeRabbit
New Features
Documentation
Tests