resolver.go
package graph

//go:generate go run github.com/99designs/gqlgen@latest generate

import (
	"github.com/jackc/pgx/v5/pgxpool"
	"github.com/lucagez/qron/sqlc"
)

// RESUME FROM HERE! <--
// [✅] implement timeout for automatic clearing of job
// [✅] implement `start_at` e.g. every week starting from Monday
// [✅] implement `aquired_at` e.g. useful for keeping track of jobs that failed silently
// [] how to add subscriptions (fanout)?
// [] implement client interface (local / remote client)
// [] implement remote client (use gqlgenc for an autogenerated one)
// [] implement `tinyd`, a tiny daemon that leverages the remote client and replays messages over HTTP (or as a Docker container)
// [✅] flush remaining jobs after cancel (use separate context for flush and fetch)
// [✅] add test for flushing remaining in-flight job after canceling fetch
// [] implement idempotency for client
// [✅] implement `@asap` operator
// [✅] implement UNIQUE (col1, col2) so as to have scoped <name>, <owner>
// [✅] implement UPSERT if <name>, <owner> collides -> implement idempotency in client? read + update
// [] add test cases for new mutations
// [] use test packages to make binary smaller
// [✅] benchmarks
// [✅] add batch create jobs. e.g. create 1000 jobs in one go
// [] create examples
// [] add partitioning to jobs table
// [] rename `FAILURE` to `FAILED`
// [] rename `TinyJob` to `Job`
// [] Job deduplication could use a time window instead of being absolute; this can also be solved client side,
//    since hashes can include a time bucket (e.g. by minute or by hour); see the dedupScope sketch below
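
// The sketch below is not part of the original file: it is a minimal,
// hypothetical illustration of the time-bucketed deduplication idea from the
// TODO item above. The names (dedupScope, dedupScopeFor) and the bucketing
// rule are assumptions, not an existing qron API.
type dedupScope struct {
	Name   string
	Owner  string
	Bucket int64 // e.g. unixSeconds / 3600 for an hourly dedup window
}

// dedupScopeFor derives the deduplication scope for a job submitted at
// unixSeconds, using a window of windowSeconds (60 for per-minute, 3600 for
// per-hour dedup). Two submissions count as duplicates only when name, owner,
// and bucket all match, so identical jobs stop colliding once the window
// rolls over.
func dedupScopeFor(name, owner string, unixSeconds, windowSeconds int64) dedupScope {
	return dedupScope{Name: name, Owner: owner, Bucket: unixSeconds / windowSeconds}
}
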
// Resolver is the root gqlgen resolver; it holds the shared dependencies
// (the sqlc-generated queries and the pgx connection pool) used by the
// generated GraphQL resolvers.
type Resolver struct {
	Queries *sqlc.Queries
	DB      *pgxpool.Pool
}
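
// NewResolver is a hypothetical convenience constructor, not part of the
// original file: it sketches how the resolver is typically wired, with a
// single pgxpool.Pool backing both the sqlc-generated Queries and raw pool
// access. It assumes the sqlc package exposes the standard generated New
// constructor, which accepts anything implementing its DBTX interface
// (the pgx pool does).
func NewResolver(db *pgxpool.Pool) *Resolver {
	return &Resolver{
		Queries: sqlc.New(db),
		DB:      db,
	}
}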