v0.2.0 release #16
Conversation
Force-pushed: 0c1edb4 → 448f4a0, then 3a1aac5 → a959942, then d14260c → 7182d70.
Pull Request Overview
This PR prepares the v0.2.0 release by introducing new SQL migrations, improving the install workflow, and updating CI/release pipelines and documentation.
- Adds v0.2.0 migration scripts defining new utility functions, parallel-safe aggregates, and operators.
- Enhances `install.sh` with `pg_config` validation and customizable release version fetching (see the sketch after this list).
- Updates documentation and GitHub workflows (CI, migration tests, release) to support v0.2.0.
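For context, a minimal sketch of what those `install.sh` changes could look like; the `RELEASE_TAG` variable and the GitHub API lookup are illustrative assumptions, not necessarily the script's actual logic:

```bash
#!/usr/bin/env bash
set -euo pipefail

# Fail early if pg_config is not available; the install needs it to locate Postgres.
if ! command -v pg_config >/dev/null 2>&1; then
  echo "error: pg_config not found in PATH; install the PostgreSQL development package" >&2
  exit 1
fi

# Allow the caller to pin a release; otherwise fetch the latest tag from GitHub.
# RELEASE_TAG and this API call are hypothetical names for illustration.
RELEASE_TAG="${RELEASE_TAG:-$(curl -sSL https://api.github.com/repos/blitss/typeid-postgres/releases/latest \
  | grep -o '"tag_name": *"[^"]*"' \
  | cut -d'"' -f4)}"

echo "Installing typeid-postgres ${RELEASE_TAG} against $(pg_config --version)"
```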
Reviewed Changes
Copilot reviewed 8 out of 9 changed files in this pull request and generated 1 comment.
Summary per file:
| File | Description |
|---|---|
| sql/typeid--0.2.0.sql | New migration script registering v0.2.0 functions & operators |
| sql/typeid--0.1.0--0.2.0.sql | Upgrade path script detailing v0.1.0→v0.2.0 steps (see the sketch after this table) |
| install.sh | Added pg_config existence check and customizable release tag logic |
| Readme.md | Revised installation instructions and function/operator docs |
| Cargo.toml | Removed pgrx metadata block (needs version bump/metadata update) |
| .github/workflows/test-migrations.yaml | New job to verify SQL migrations before upgrading |
| .github/workflows/release.yaml | Overhauled release workflow: build/publish/docker/release |
| .github/workflows/ci.yaml | Updated CI matrix, Postgres setups, and added next branch |
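Since the two SQL files above follow PostgreSQL's standard `extname--version.sql` naming, applying them would presumably look like the commands below; the extension name `typeid` is inferred from the file prefix and `mydb` is a placeholder database:

```bash
# Upgrade an existing install along the sql/typeid--0.1.0--0.2.0.sql path.
psql -d mydb -c "ALTER EXTENSION typeid UPDATE TO '0.2.0';"

# Fresh installs pick up sql/typeid--0.2.0.sql directly.
psql -d mydb -c "CREATE EXTENSION typeid VERSION '0.2.0';"
```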
Comments suppressed due to low confidence (5)
.github/workflows/ci.yaml:54
- The CI workflow references `matrix.pg_version` but the matrix key is named `pg`; update the variable to `${{ matrix.pg }}` for the macOS installation step.
brew install postgresql@${{ matrix.pg_version }}
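The corrected run line for that step would presumably read:

```bash
# Corrected macOS install step: use the `pg` key that the matrix actually defines.
brew install postgresql@${{ matrix.pg }}
```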
Readme.md:57
- The install script URL points to the GitHub HTML page rather than the raw script; update it to use the raw.githubusercontent.com URL for curl piped execution.
curl -sSL https://github.com/blitss/typeid-postgres/blob/main/install.sh | sudo bash
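The raw equivalent of that URL, using the standard raw.githubusercontent.com mapping of the blob page, would be:

```bash
# Pipe the raw script, not the GitHub HTML page, into bash.
curl -sSL https://raw.githubusercontent.com/blitss/typeid-postgres/main/install.sh | sudo bash
```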
Readme.md:105
- The documentation states this function returns `SETOF typeid`, but the SQL definition returns a `TypeID[]`; please correct the return type in the docs.
| `typeid_generate_batch(prefix TEXT, count INTEGER)` | `SETOF typeid` | `prefix TEXT`, `count INTEGER` | Generate a batch of TypeIDs. |
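The distinction matters to callers, since an array result and a set result are consumed differently. A rough illustration via psql, where the database `mydb` and the prefix `user` are placeholders:

```bash
# If the function returns TypeID[] (as the SQL definition does), expand the array with unnest:
psql -d mydb -c "SELECT unnest(typeid_generate_batch('user', 3));"

# If it returned SETOF typeid (as the docs currently claim), it would be queried as a set instead:
psql -d mydb -c "SELECT * FROM typeid_generate_batch('user', 3);"
```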
.github/workflows/test-migrations.yaml:74
- [nitpick] There's a TODO in the migration test highlighting future extraction of SQL tests; consider extracting these commands into a dedicated SQL test file for maintainability.
# todo: should extract that to a test sql file
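One way to address that TODO, sketched with a hypothetical file path, is to move the inline statements into a dedicated SQL file and run it with psql's error-stopping mode:

```bash
# Run migration assertions from a dedicated file instead of inline workflow steps;
# test/migrations.sql is a hypothetical path.
psql -d postgres -v ON_ERROR_STOP=1 -f test/migrations.sql
```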
Obviously it adds some overhead because of decoding/ encoding base52 (because the data is stored as UUID) so keep that in mind. But upon testing I don't think the performance implications are very noticable, inserting the 100k records took me around 800ms.
Copilot AI, Jun 30, 2025
There's a typo: 'noticable' should be spelled 'noticeable'.
Suggested change:
Before: Obviously it adds some overhead because of decoding/ encoding base52 (because the data is stored as UUID) so keep that in mind. But upon testing I don't think the performance implications are very noticable, inserting the 100k records took me around 800ms.
After: Obviously it adds some overhead because of decoding/ encoding base52 (because the data is stored as UUID) so keep that in mind. But upon testing I don't think the performance implications are very noticeable, inserting the 100k records took me around 800ms.
Closes #15