English | 日本語 | 中文 | 한국어 | Español | Português
A generics-based, type-safe SQL query builder for Go that eliminates runtime errors without requiring code generation. Built on top of database/sql, it provides compile-time type checking, transparent SQL output, and a fluent builder API.
| Challenge | Existing Solutions | gsql-dev/gsql |
|---|---|---|
| Type safety | GORM: runtime errors everywhere | Compile-time checks via Go generics |
| Code generation | sqlc / ent / sqlboiler: require codegen step | No code generation needed |
| SQL transparency | ORMs produce unexpected queries | Predictable SQL via Build() inspection |
| Zero-value problem | GORM silently drops zero-value updates | Explicit Set/Unset semantics |
| Library | Type-Safe | Code Generation Required | Zero-Value Safe | SQL Transparency |
|---|---|---|---|---|
| GORM | No (interface{} based) | No | No (zero-values silently dropped) | No (implicit SQL generation) |
| squirrel / goqu | No (string-based) | No | — | Yes |
| sqlc | Yes | Yes | Yes | Yes |
| ent | Yes | Yes | Yes | No (implicit) |
| sqlboiler | Yes | Yes | Yes | Yes |
| bob | Partial (generated) | Yes | Yes | Yes |
| gsql-dev/gsql | Yes | No | Yes (Set/Unset) | Yes |
- Compile-time safety over runtime convenience — Type mismatches are caught by the Go compiler, not by panics at 3 AM.
- No code generation — Define tables in pure Go. No build steps, no generated files, no sync issues.
- SQL transparency — Every query can be inspected via Build(). No hidden behavior, no surprise queries.
- Explicit over implicit — No magic eager loading, no automatic JOINs, no implicit cascades. You write what you mean.
- Reflection only at init — NewTable() reads struct tags once at startup. After that the package is reflection-free; query building, INSERT/UPDATE values, and result scanning all run on plain generics or stdlib.
- Standard library compatibility — Built on database/sql. Result execution and scanning use db.QueryContext + rows.Scan directly. Works with any driver. Compatible with *sql.DB and *sql.Tx.
gsql-dev/gsql uses reflection only once per table, at program startup — never on the query hot path:
| Phase | Reflection? | Details |
|---|---|---|
| NewTable() initialization | Yes (once at startup) | Reads db struct tags, injects table/column names into Col[T] fields |
| Query building (Build()) | No | Pure generics — no reflection during Build() |
| Insert().Set(...) / Update().Set(...) | No | Type-safe via Val/ValIf generic functions |
| Result scanning | No | Use database/sql's rows.Scan(...) directly — you control the pointers |
Once all NewTable() calls have returned, the package is fully reflection-free for the rest of the program's lifetime. Every query you build and execute runs on plain generics and database/sql.
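The init-time tag scan can be pictured with a minimal sketch. This is an illustration, not the library's source: the `Col` type here exposes exported `Table`/`Column` fields purely so the reflection step is visible.

```go
package main

import (
	"fmt"
	"reflect"
)

// Col is a minimal stand-in for the library's Col[T]: it only
// carries the table and column names injected at init time.
type Col[T any] struct {
	Table, Column string
}

// initCols reads `db` struct tags from C once, via reflection,
// and fills each Col field's table/column names. After this
// returns, no reflection is needed again.
func initCols[C any](table string) C {
	var cols C
	v := reflect.ValueOf(&cols).Elem()
	t := v.Type()
	for i := 0; i < t.NumField(); i++ {
		tag := t.Field(i).Tag.Get("db")
		f := v.Field(i)
		f.FieldByName("Table").SetString(table)
		f.FieldByName("Column").SetString(tag)
	}
	return cols
}

type UserColumns struct {
	ID   Col[int64]  `db:"id"`
	Name Col[string] `db:"name"`
}

func main() {
	cols := initCols[UserColumns]("users")
	fmt.Println(cols.ID.Table, cols.ID.Column)     // users id
	fmt.Println(cols.Name.Table, cols.Name.Column) // users name
}
```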
We considered eliminating reflection entirely by requiring explicit column constructors:
// Hypothetical reflection-free alternative — NOT what we use
var Users = qb.Table[UserColumns]{
Cols: UserColumns{
ID: qb.NewCol[int64]("users", "id"),
Name: qb.NewCol[string]("users", "name"),
Email: qb.NewCol[string]("users", "email"),
},
}

This was rejected because:
- The reflection cost is negligible. NewTable() runs once per table at program startup — total cost is typically a few microseconds, paid before your first query. It does not show up in benchmarks.
- The hot path is already reflection-free. What matters for performance is per-query overhead, and that's already zero.
- Tag syntax is more declarative. Column names live next to the field they describe, in a single block, with no risk of mismatch between table reference and column registration.
- Both syntaxes have the same correctness boundary. Whether you write db:"id" or NewCol[int64]("users", "id"), the column-name string is unchecked by the Go compiler — only the database can validate it. Eliminating reflection does not eliminate string typos.
If your project has a hard policy against any reflect usage (e.g., for embedded targets or aggressive binary-size reduction), NewCol[T] is provided as a drop-in escape hatch that bypasses NewTable[C].
gsql-dev/gsql is structurally injection-safe for values and fail-fast for identifiers:
| Position in SQL | Source | Mechanism |
|---|---|---|
| Values (WHERE col = ?, INSERT VALUES, UPDATE SET, IN (?, ?, ...), LIKE ?) | User input (any) | Always passed as args to db.QueryContext — never interpolated into the SQL string. The driver binds them. |
| Identifiers (table & column names) | Source code only — NewTable[C]("users") literal or db:"name" struct tags | Validated at init time against [A-Za-z_][A-Za-z0-9_]*. Anything else panics on NewTable / NewCol. |
The package has no API that accepts a user-supplied identifier — there is intentionally no OrderByName(string), no WhereRaw(string), no dynamic column selection by string. This is a deliberate design boundary: if you cannot type the column name into Go source code, gsql-dev/gsql refuses to query it.
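The fail-fast identifier check can be sketched in a few lines, using the regular expression the table above documents. The helper name `mustValidIdent` is hypothetical; it only mimics the panic-at-init behavior described here.

```go
package main

import (
	"fmt"
	"regexp"
)

// identRe mirrors the documented identifier rule: [A-Za-z_][A-Za-z0-9_]*
var identRe = regexp.MustCompile(`^[A-Za-z_][A-Za-z0-9_]*$`)

// mustValidIdent panics on anything that is not a plain SQL identifier,
// mimicking the fail-fast behavior of NewTable / NewCol at init time.
func mustValidIdent(name string) {
	if !identRe.MatchString(name) {
		panic(fmt.Sprintf("gsql: invalid identifier %q", name))
	}
}

func main() {
	mustValidIdent("users")   // ok
	mustValidIdent("user_id") // ok

	defer func() {
		if r := recover(); r != nil {
			fmt.Println("rejected:", r) // panics at init, not at query time
		}
	}()
	mustValidIdent(`users"; DROP TABLE users; --`)
}
```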
In scope (the library defends against these):
- Injection through values (Eq, Neq, Gt, Gte, Lt, Lte, In, Like, Val, ValIf)
- Injection through identifiers passed to NewCol / NewTable / db tags (rejected at init)
- Empty IN () producing invalid SQL (handled with 1=0)
Out of scope (caller's responsibility):
- Calling NewCol(userInput, ...) with user-controlled strings — don't do this. Identifiers must be source-code constants.
- LIKE-pattern injection (%, _ in user input) — wildcards are intended LIKE syntax. Escape them yourself if you do not want wildcard semantics.
- Driver- or DB-level vulnerabilities (out of database/sql's control).
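A minimal sketch of the escape-them-yourself step for LIKE patterns. The helper `escapeLike` is not part of the library; it assumes backslash as the escape character, which most drivers honor — standard SQL would pair the pattern with an explicit `ESCAPE '\'` clause, so check your dialect.

```go
package main

import (
	"fmt"
	"strings"
)

// escapeLike escapes LIKE wildcards in user input so % and _ match
// literally. Assumes backslash is the escape character (dialect-dependent).
func escapeLike(s string) string {
	r := strings.NewReplacer(`\`, `\\`, `%`, `\%`, `_`, `\_`)
	return r.Replace(s)
}

func main() {
	userInput := "50%_discount"
	fmt.Println(escapeLike(userInput)) // 50\%\_discount
	// e.g. Users.Cols.Email.Like("%" + escapeLike(userInput) + "%")
}
```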
Please do not open a public GitHub issue for security reports. See SECURITY.md for the disclosure process.
gsql-dev/gsql intentionally has a narrow scope. Some items below are deliberate design decisions that we do not plan to add; others are simply not yet implemented.
- Magic eager loading. has_many relationships require the explicit two-query pattern shown in the JOIN section.
- Result-scanning helper. Use database/sql's rows.Scan(&u.ID, &u.Name, ...) directly. We removed Fetch[T] to keep the runtime reflection-free.
- DB-schema-aware column-name validation. The Go compiler cannot verify that db:"id" matches an actual database column. Only the database can — the same is true for any string-based query builder.
- Migration tooling, connection pooling, retry logic. These are database/sql driver concerns. Use a dedicated migration tool (e.g., goose, dbmate).
- Subqueries (in WHERE / FROM / SELECT)
- CTEs (WITH ... AS)
- Window functions
- Raw SQL escape hatch for things the builder doesn't model
Aggregate functions (COUNT, SUM, AVG, MAX, MIN, GROUP BY, HAVING) are also not yet implemented — but they are on the roadmap.
For unsupported features, fall back to plain db.QueryContext(ctx, "SELECT COUNT(*) FROM users WHERE ...", args...) until they ship.
Measured against major Go SQL libraries on Apple M1, Go 1.21+. Source: _benchmark/ (run make bench).
| Operation | gsql | bun | ent | bob | jet | gorm | squirrel | goqu |
|---|---|---|---|---|---|---|---|---|
| SelectSimple | 272 | 339 | 546 | 887 | 1044 | 1715 | 1525 | 1136 |
| SelectJoin | 647 | 399 | 1585 | 1922 | 1852 | 2225 | 2182 | 2173 |
| SelectComplex | 712 | 659 | 1244 | 2013 | 2125 | 2621 | 3682 | 2378 |
| InsertSingle | 422 | 547 | 690 | 939 | 987 | 3118 | 2353 | 1983 |
| InsertBulk (100 rows) | 12,674 | 20,732 | 25,125 | 48,564 | 43,090 | 90,273 | 74,145 | 94,893 |
| Update | 490 | 518 | 686 | 1483 | 1000 | 2563 | 2294 | 1522 |
| Delete | 318 | 306 | 455 | 1095 | 683 | 1403 | 1559 | 892 |
| Operation | gsql | bun | ent | gorm | squirrel |
|---|---|---|---|---|---|
| SelectSimple | 488 | 768 | 808 | 2873 | 1736 |
| SelectJoin | 872 | 992 | 1856 | 3401 | 2392 |
| InsertSingle | 648 | 1072 | 632 | 4146 | 2217 |
| InsertBulk | 35,568 | 23,616 | 31,691 | 84,775 | 83,540 |
| Update | 568 | 880 | 784 | 3658 | 2377 |
| Delete | 440 | 584 | 560 | 2624 | 1664 |
Takeaway: gsql trades blows with bun for the fastest query-building and lowest allocations among Go SQL builders, while running 3–10× faster than GORM and squirrel. (sqlx and sqlc are excluded from these tables — they emit static SQL strings rather than building queries at runtime, so their numbers are not comparable.)
- Go 1.21+
go get github.com/gsql-dev/gsql

Define your table columns as a Go struct with Col[T] fields and db struct tags:
package main
import qb "github.com/gsql-dev/gsql"
type UserColumns struct {
ID qb.Col[int64] `db:"id"`
Name qb.Col[string] `db:"name"`
Email qb.Col[string] `db:"email"`
Age qb.Col[int] `db:"age"`
}
var Users = qb.NewTable[UserColumns]("users")

// For reading rows from SELECT queries
type User struct {
ID int64
Name string
Email string
Age int
}

u := Users.Cols
q := qb.Select(u.ID, u.Name, u.Age).
From(Users).
Where(u.Age.Gt(18)).
OrderBy(u.Age.Desc()).
Limit(10)
// Execute via standard database/sql
sqlStr, args := q.Build()
rows, err := db.QueryContext(ctx, sqlStr, args...)
defer rows.Close()
var users []User
for rows.Next() {
var u User
if err := rows.Scan(&u.ID, &u.Name, &u.Age); err != nil {
return err
}
users = append(users, u)
}

gsql-dev/gsql is a query builder only — execution and scanning use the standard database/sql package. This keeps the library small, reflection-free at runtime, and fully compatible with any driver. For convenience, write your own typed fetch helpers per row type.
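One way to write such a typed fetch helper — a sketch, not part of the library. The `rowScanner` interface is chosen as the subset of *sql.Rows the helper needs, which also makes it testable without a live database; `collect` and `fakeRows` are hypothetical names.

```go
package main

import "fmt"

// rowScanner is the subset of *sql.Rows the helper needs.
// *sql.Rows satisfies it, so collect works with real query results.
type rowScanner interface {
	Next() bool
	Scan(dest ...any) error
	Err() error
}

// collect scans every row into T using a caller-supplied scan
// function — no reflection, you control the pointers.
func collect[T any](rows rowScanner, scan func(rs rowScanner, t *T) error) ([]T, error) {
	var out []T
	for rows.Next() {
		var t T
		if err := scan(rows, &t); err != nil {
			return nil, err
		}
		out = append(out, t)
	}
	return out, rows.Err()
}

// fakeRows demonstrates the helper without a driver.
type fakeRows struct {
	data [][]any
	i    int
}

func (f *fakeRows) Next() bool { f.i++; return f.i <= len(f.data) }
func (f *fakeRows) Scan(dest ...any) error {
	row := f.data[f.i-1]
	*dest[0].(*int64) = row[0].(int64)
	*dest[1].(*string) = row[1].(string)
	return nil
}
func (f *fakeRows) Err() error { return nil }

type User struct {
	ID   int64
	Name string
}

func main() {
	rows := &fakeRows{data: [][]any{{int64(1), "Alice"}, {int64(2), "Bob"}}}
	users, _ := collect(rows, func(rs rowScanner, u *User) error {
		return rs.Scan(&u.ID, &u.Name)
	})
	fmt.Println(users) // [{1 Alice} {2 Bob}]
}
```

In real code you would pass the `*sql.Rows` returned by db.QueryContext instead of `fakeRows`.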
Table[C] holds a Cols field of type C, where C is a struct whose fields are Col[T] typed columns:
type Table[C any] struct {
Cols C
tableName string
}
type Col[T any] struct {
table string
column string
}

Col[T] preserves the Go type of each column. Type-mismatched comparisons (e.g., comparing a Col[int] with a string) produce compile-time errors, not runtime panics.
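Why the compiler can enforce this is easy to see in a minimal stand-in (an illustration only — field names and the Condition shape are simplified, not the library's actual code):

```go
package main

import "fmt"

// Minimal stand-in showing how generics pin a condition's value
// type to the column's type.
type Col[T any] struct{ name string }

type Condition struct {
	SQL  string
	Args []any
}

// Eq only accepts a value of the column's own type T.
func (c Col[T]) Eq(v T) Condition {
	return Condition{SQL: c.name + " = ?", Args: []any{v}}
}

func main() {
	age := Col[int]{name: "users.age"}

	cond := age.Eq(18) // ok: int matches Col[int]
	fmt.Println(cond.SQL, cond.Args) // users.age = ? [18]

	// age.Eq("18") // compile error: cannot use "18" (untyped string) as int
}
```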
All execution methods accept a Querier interface, satisfied by both *sql.DB and *sql.Tx:
type Querier interface {
QueryContext(ctx context.Context, query string, args ...any) (*sql.Rows, error)
ExecContext(ctx context.Context, query string, args ...any) (sql.Result, error)
}

This means you can seamlessly use the same query code inside or outside transactions.
Example with *sql.Tx:
tx, err := db.BeginTx(ctx, nil)
if err != nil {
return err
}
defer tx.Rollback() // Rollback after Commit is a no-op
if err := qb.Update(Users).
Set(qb.Val(Users.Cols.Age, 31)).
Where(Users.Cols.ID.Eq(1)).
Exec(ctx, tx); err != nil {
return err
}
if err := qb.Insert(Users).
Set(
qb.Val(Users.Cols.Name, "Bob"),
qb.Val(Users.Cols.Email, "bob@example.com"),
qb.Val(Users.Cols.Age, 25),
).
Exec(ctx, tx); err != nil {
return err
}
return tx.Commit()

Every query type exposes a Build() method that returns the SQL string and arguments. This makes queries fully transparent and easy to debug:
q := qb.Select(Users.Cols.ID, Users.Cols.Name).
From(Users).
Where(Users.Cols.Age.Gt(18)).
OrderBy(Users.Cols.Name.Asc()).
Limit(10)
sql, args := q.Build()
fmt.Println(sql) // SELECT id, name FROM users WHERE users.age > ? ORDER BY users.name ASC LIMIT 10
fmt.Println(args) // [18]

No hidden JOINs. No extra SELECTs. No magic. What you build is what gets executed.
All condition methods are available on Col[T] and return a Condition interface:
| Method | SQL | Example |
|---|---|---|
| Eq(v T) | column = ? | Users.Cols.ID.Eq(int64(1)) |
| Neq(v T) | column != ? | Users.Cols.Name.Neq("admin") |
| Gt(v T) | column > ? | Users.Cols.Age.Gt(18) |
| Gte(v T) | column >= ? | Users.Cols.Age.Gte(21) |
| Lt(v T) | column < ? | Users.Cols.Age.Lt(65) |
| Lte(v T) | column <= ? | Users.Cols.Age.Lte(30) |
| In(vs []T) | column IN (?, ?, ...) | Users.Cols.ID.In([]int64{1, 2, 3}) |
| IsNull() | column IS NULL | Users.Cols.Email.IsNull() |
| IsNotNull() | column IS NOT NULL | Users.Cols.Email.IsNotNull() |
| Like(pattern string) | column LIKE ? | Users.Cols.Email.Like("%@example.com") |
| EqCol(other Col[T]) | col1 = col2 | Users.Cols.ID.EqCol(Posts.Cols.UserID) |
All conditions are parameterized — values are passed as arguments, never interpolated into the SQL string. This prevents SQL injection.
When In() receives an empty slice, it generates 1=0 instead of invalid SQL. This safely returns zero results without a database error:
q := qb.Select(Users.Cols.ID).
From(Users).
Where(Users.Cols.ID.In([]int64{}))
sql, _ := q.Build()
// → "SELECT id FROM users WHERE 1=0"

Users.Cols.Name.Asc()  // → "users.name ASC"
Users.Cols.Age.Desc()  // → "users.age DESC"
q.OrderBy(Users.Cols.Age.Desc(), Users.Cols.Name.Asc())

// Basic SELECT
q := qb.Select(Users.Cols.ID, Users.Cols.Name).
From(Users)
// With WHERE conditions (multiple conditions are AND-joined)
q := qb.Select(Users.Cols.ID, Users.Cols.Name).
From(Users).
Where(
Users.Cols.Age.Gte(20),
Users.Cols.Email.Like("%@example.com"),
)
// With ORDER BY, LIMIT, and OFFSET (pagination)
q := qb.Select(Users.Cols.ID, Users.Cols.Name, Users.Cols.Age).
From(Users).
OrderBy(Users.Cols.Age.Desc()).
Limit(20).
Offset(40)
// Build SQL, then execute via database/sql
sqlStr, args := q.Build()
rows, err := db.QueryContext(ctx, sqlStr, args...)

| Method | Description |
|---|---|
| Select(cols ...Column) | Start a SELECT query with the given columns |
| .From(table TableRef) | Specify the table to select from |
| .Where(conds ...Condition) | Add WHERE conditions (AND-joined) |
| .InnerJoin(table, on) | Add an INNER JOIN clause |
| .LeftJoin(table, on) | Add a LEFT JOIN clause |
| .OrderBy(orders ...OrderExpr) | Add ORDER BY expressions |
| .Limit(n int) | Set the LIMIT clause |
| .Offset(n int) | Set the OFFSET clause |
| .Build() | Generate SQL string and arguments |
Execute the query with db.QueryContext(ctx, sqlStr, args...) and scan rows yourself with rows.Scan(...). The library does not include a result-scanning helper — this keeps it reflection-free at runtime.
type PostColumns struct {
ID qb.Col[int64] `db:"id"`
UserID qb.Col[int64] `db:"user_id"`
Title qb.Col[string] `db:"title"`
}
var Posts = qb.NewTable[PostColumns]("posts")
// INNER JOIN — ideal for belongs_to relationships
q := qb.Select(Users.Cols.Name, Posts.Cols.Title).
From(Posts).
InnerJoin(Users, Users.Cols.ID.EqCol(Posts.Cols.UserID))
type UserPost struct {
Name string
Title string
}
sqlStr, args := q.Build()
rowsIter, err := db.QueryContext(ctx, sqlStr, args...)
defer rowsIter.Close()
var rows []UserPost
for rowsIter.Next() {
var r UserPost
if err := rowsIter.Scan(&r.Name, &r.Title); err != nil { return err }
rows = append(rows, r)
}
// LEFT JOIN
q := qb.Select(Users.Cols.Name, Posts.Cols.Title).
From(Users).
LeftJoin(Posts, Users.Cols.ID.EqCol(Posts.Cols.UserID))

gsql-dev/gsql intentionally does not provide magic for eager loading. For has_many relationships, use two explicit queries:
// Query 1: build users query and execute
q1 := qb.Select(Users.Cols.ID, Users.Cols.Name).From(Users)
sqlStr, args := q1.Build()
userRows, _ := db.QueryContext(ctx, sqlStr, args...)
// ... scan into []User and collect IDs ...
// Query 2: fetch related posts
q2 := qb.Select(Posts.Cols.UserID, Posts.Cols.Title).
From(Posts).
Where(Posts.Cols.UserID.In(userIDs))
sqlStr, args = q2.Build()
postRows, _ := db.QueryContext(ctx, sqlStr, args...)
// ... scan into []Post ...

// Single row
err := qb.Insert(Users).
Set(
qb.Val(Users.Cols.Name, "Alice"),
qb.Val(Users.Cols.Email, "alice@example.com"),
qb.Val(Users.Cols.Age, 30),
).
Exec(ctx, db)
// Multiple rows (bulk insert) — chained Row(...) calls
err := qb.BulkInsert(Users).
Row(qb.Val(Users.Cols.Name, "Bob"), qb.Val(Users.Cols.Email, "bob@example.com"), qb.Val(Users.Cols.Age, 25)).
Row(qb.Val(Users.Cols.Name, "Charlie"), qb.Val(Users.Cols.Email, "charlie@example.com"), qb.Val(Users.Cols.Age, 35)).
Exec(ctx, db)
// Chunked bulk insert — splits into multiple INSERT statements
err := qb.BulkInsert(Users).
Row(/* ... */).
Row(/* ... many rows ... */).
ChunkSize(1000).
Exec(ctx, db)

| Method | Description |
|---|---|
| Insert[C](table) | Start a single-row INSERT |
| BulkInsert[C](table) | Start a multi-row INSERT |
| .Set(clauses ...SetClause) | Add column-value pairs (use Val/ValIf) |
| .Row(clauses ...SetClause) | Add a row to bulk insert |
| .ChunkSize(n int) | Maximum rows per INSERT statement |
| .OnConflict(cols ...Column) | Specify conflict columns for upsert |
| .DoUpdate(cols ...Column) | Columns to update on conflict |
| .DoNothing() | Ignore conflicts |
| .Build() / .BuildAll() | Generate SQL (single / chunked) |
| .Exec(ctx, db) | Execute the INSERT (handles chunking automatically) |
When inserting large datasets, use ChunkSize() to stay within database placeholder limits:
- MySQL / PostgreSQL placeholder limit: 65,535
- Safe guideline: rows × columns < 50,000
- Example: 5-column table → ChunkSize(10000) is safe
BuildAll() returns a slice of SQL/args pairs, one per chunk. Exec() executes them all sequentially.
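The arithmetic behind a safe ChunkSize can be sketched as a small helper (hypothetical — `maxChunkSize` is not a library function, just the rows × columns budget from the guideline above):

```go
package main

import "fmt"

// maxChunkSize returns the largest rows-per-INSERT that keeps
// rows × columns within the placeholder budget. Both MySQL prepared
// statements and PostgreSQL's extended protocol cap placeholders at
// 65,535; the README's 50,000 guideline leaves headroom for any
// WHERE/upsert parameters.
func maxChunkSize(columns, placeholderBudget int) int {
	if columns <= 0 {
		return 0
	}
	return placeholderBudget / columns
}

func main() {
	// 5-column table with the conservative 50,000 budget.
	fmt.Println(maxChunkSize(5, 50000)) // 10000 → ChunkSize(10000)
}
```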
// Update on conflict
err := qb.Insert(Users).
Set(
qb.Val(Users.Cols.Name, "Alice"),
qb.Val(Users.Cols.Email, "alice@example.com"),
qb.Val(Users.Cols.Age, 99),
).
OnConflict(Users.Cols.Email).
DoUpdate(Users.Cols.Name, Users.Cols.Age).
Exec(ctx, db)
// PostgreSQL: INSERT INTO users (...) VALUES (...) ON CONFLICT (email) DO UPDATE SET name = EXCLUDED.name, age = EXCLUDED.age
// MySQL: INSERT INTO users (...) VALUES (...) ON DUPLICATE KEY UPDATE name = VALUES(name), age = VALUES(age)
// Ignore on conflict
err := qb.Insert(Users).
Set(
qb.Val(Users.Cols.Name, "Alice"),
qb.Val(Users.Cols.Email, "alice@example.com"),
qb.Val(Users.Cols.Age, 99),
).
OnConflict(Users.Cols.Email).
DoNothing().
Exec(ctx, db)
// PostgreSQL: INSERT INTO users (...) VALUES (...) ON CONFLICT (email) DO NOTHING
// MySQL: INSERT IGNORE INTO users (...) VALUES (...)

err := qb.Update(Users).
Set(qb.Val(Users.Cols.Name, "New Name")).
Set(qb.Val(Users.Cols.Age, 31)).
Where(Users.Cols.ID.Eq(int64(1))).
Exec(ctx, db)
// → UPDATE users SET name = ?, age = ? WHERE users.id = ?
// → args: ["New Name", 31, 1]

Val is a generic function that creates a type-safe column-value pair — the value type must match the column type.
This is the recommended approach for partial updates. It solves Go's zero-value problem where you can't distinguish "set to zero" from "don't update":
optName := qb.Set("Alice Updated") // Will be included in SET
optEmail := qb.Set("new@example.com") // Will be included in SET
optAge := qb.Unset[int]() // Will NOT be included — age stays unchanged
err := qb.Update(Users).
Set(qb.ValIf(Users.Cols.Name, optName)).
Set(qb.ValIf(Users.Cols.Email, optEmail)).
Set(qb.ValIf(Users.Cols.Age, optAge)).
Where(Users.Cols.ID.Eq(int64(1))).
Exec(ctx, db)
// → UPDATE users SET name = ?, email = ? WHERE users.id = ?
// → args: ["Alice Updated", "new@example.com", 1]
// Note: age is NOT in the SET clause — it remains at its current value

Optional[T] is the wrapper used by ValIf. It explicitly distinguishes between "set this field" and "don't touch this field" — solving Go's fundamental zero-value ambiguity in partial updates.
qb.Set[T](v T) Optional[T] // Mark as "set to this value"
qb.Unset[T]() Optional[T] // Mark as "do not update"
o.IsSet() bool // Returns true if Set was used
o.Value() T           // Returns the underlying value

| Expression | IsSet() | Included in UPDATE? |
|---|---|---|
| qb.Set("hello") | true | Yes — SET name = "hello" |
| qb.Set("") | true | Yes — SET name = "" (zero-value is intentional) |
| qb.Set(0) | true | Yes — SET age = 0 (zero-value is intentional) |
| qb.Unset[string]() | false | No — column is excluded from SET clause |
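Optional[T] can be pictured with a minimal implementation — a sketch of the documented Set/Unset/IsSet/Value behavior, not the library's source:

```go
package main

import "fmt"

// Optional distinguishes "set to v" from "leave untouched":
// the zero value of Optional[T] is Unset, so a zero-valued T
// can still be an intentional update.
type Optional[T any] struct {
	value T
	set   bool
}

func Set[T any](v T) Optional[T] { return Optional[T]{value: v, set: true} }
func Unset[T any]() Optional[T]  { return Optional[T]{} }

func (o Optional[T]) IsSet() bool { return o.set }
func (o Optional[T]) Value() T    { return o.value }

func main() {
	name := Set("Alice") // included in SET
	age := Unset[int]()  // excluded from SET

	fmt.Println(name.IsSet(), name.Value()) // true Alice
	fmt.Println(age.IsSet(), age.Value())   // false 0
}
```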
| Method | Description |
|---|---|
| Update[C](table Table[C]) | Start an UPDATE query for the given table |
| .Set(clauses ...SetClause) | Set columns using Val(col, value) pairs (type-safe) |
| .Where(conds ...Condition) | Add WHERE conditions (AND-joined) |
| .Build() | Generate SQL string and arguments |
| .Exec(ctx, db) | Execute the UPDATE |
// Delete with WHERE condition
err := qb.Delete(Users).
Where(Users.Cols.ID.Eq(int64(1))).
Exec(ctx, db)
// → DELETE FROM users WHERE users.id = ?
// Delete with multiple conditions
err := qb.Delete(Users).
Where(
Users.Cols.Age.Lt(18),
Users.Cols.Email.IsNull(),
).
Exec(ctx, db)

| Method | Description |
|---|---|
| Delete[C](table Table[C]) | Start a DELETE query for the given table |
| .Where(conds ...Condition) | Add WHERE conditions (AND-joined) |
| .Build() | Generate SQL string and arguments |
| .Exec(ctx, db) | Execute the DELETE |
gsql-dev/gsql uses a Dialect interface to handle database-specific SQL generation:
type Dialect interface {
Placeholder(n int) string // "?" for MySQL, "$1" for PostgreSQL
BuildUpsert(q *InsertQuery) string // Database-specific upsert syntax
}

| Dialect | Placeholder | Upsert Syntax |
|---|---|---|
| dialect.MySQL | ? | ON DUPLICATE KEY UPDATE / INSERT IGNORE |
| dialect.Postgres | $1, $2, ... | ON CONFLICT (...) DO UPDATE SET / DO NOTHING |
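The Placeholder(n) difference is easy to sketch (illustrative functions only — the real Dialect implementations live in dialect/mysql.go and dialect/postgres.go):

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// mysqlPlaceholder ignores the position: every parameter is "?".
func mysqlPlaceholder(n int) string { return "?" }

// postgresPlaceholder numbers each parameter: $1, $2, ...
func postgresPlaceholder(n int) string { return "$" + strconv.Itoa(n) }

// placeholders renders n parameters for an IN (...) or VALUES (...) list.
func placeholders(n int, f func(int) string) string {
	parts := make([]string, n)
	for i := range parts {
		parts[i] = f(i + 1) // 1-indexed, as $1, $2, ... requires
	}
	return strings.Join(parts, ", ")
}

func main() {
	fmt.Println(placeholders(3, mysqlPlaceholder))    // ?, ?, ?
	fmt.Println(placeholders(3, postgresPlaceholder)) // $1, $2, $3
}
```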
PostgreSQL:
INSERT INTO users (name, email) VALUES ($1, $2)
ON CONFLICT (email) DO UPDATE SET name = EXCLUDED.name

MySQL:
INSERT INTO users (name, email) VALUES (?, ?)
ON DUPLICATE KEY UPDATE name = VALUES(name)

Implement the Logger interface for simple query logging:
type Logger interface {
Log(ctx context.Context, query string, args []any, duration time.Duration)
}

Use Hook functions for more advanced middleware (tracing, metrics, etc.):
type Hook func(ctx context.Context, query string, args []any, next func() error) error

Hooks execute in a nested chain:
Hook1.before → Hook2.before → Hook3.before
→ SQL execution
Hook3.after → Hook2.after → Hook1.after
- Works with any database/sql compatible driver
- MySQL — fully supported with dialect
- PostgreSQL — fully supported with dialect
- SQLite — works with standard database/sql interface
- Transaction support — *sql.Tx satisfies the Querier interface
gsql/
├── table.go # Table[C], Col[T], NewTable, NewCol, Column interface
├── condition.go # WHERE conditions (Eq, Gt, In, Like, etc.) and OrderExpr
├── select.go # SELECT query builder, Querier interface
├── insert.go # INSERT query builder with upsert and chunking
├── update.go # UPDATE query builder with Set (Val/ValIf)
├── delete.go # DELETE query builder
├── optional.go # Optional[T] type (Set/Unset)
├── dialect.go # Dialect interface
├── hook.go # Logger interface, Hook middleware, chainHooks
├── dialect/
│ ├── postgres.go # PostgreSQL dialect implementation
│ └── mysql.go # MySQL dialect implementation
├── _example/ # Integration tests against a real MySQL database
└── _benchmark/ # Performance benchmarks against other libraries
go test ./...

cd _example
docker compose up -d
go test -v ./...
docker compose down

cd _benchmark
make bench

See the project root for license information.