chihaya requires Go >= 1.21 and MariaDB >= 10.3.3.

```sh
go get
go build -v -o .bin/ ./cmd/...
```
Example systemd unit file:

```ini
[Unit]
Description=chihaya
After=network.target mariadb.service

[Service]
WorkingDirectory=/opt/chihaya
ExecStart=/opt/chihaya/chihaya
RestartSec=30s
Restart=always
User=chihaya

[Install]
WantedBy=default.target
```
Alternatively, you can build and use a Docker container instead:

```sh
docker build . -t chihaya
docker run -d --restart=always --user 1001:1001 --network host --log-driver local -v ${PWD}:/app chihaya
```
The build process outputs several binary files. Each binary has its own flags; run it with `-h` or `--help` for detailed usage information.

- `chihaya` - the tracker itself
- `cc` - utility for manipulating the cache
- `databencode` - utility for encoding and decoding between JSON and Bencode
Chihaya does not support keep-alive or TLS and is designed to run behind a reverse proxy (such as nginx) that can provide these features.

Using compression (such as gzip) is discouraged: responses are usually quite small (especially when `compact` is requested), so compressing them adds unnecessary overhead for zero gain.
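To make the reverse-proxy setup concrete, here is a minimal nginx sketch. The server name, certificate paths, and upstream port are assumptions (the port matches the default `addr` of `:34000` from the config below); adapt them to your deployment:

```nginx
server {
    listen 443 ssl;
    server_name tracker.example.com;          # assumed hostname

    ssl_certificate     /etc/ssl/tracker.crt; # assumed paths
    ssl_certificate_key /etc/ssl/tracker.key;

    location / {
        proxy_pass http://127.0.0.1:34000;        # chihaya's default addr
        proxy_set_header X-Real-Ip $remote_addr;  # matches proxy_header below
        gzip off;                                 # compression is discouraged
    }
}
```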
Configuration is done in `config.json`, which you'll need to create with the following format:
```json
{
  "database": {
    "dsn": "chihaya:@tcp(127.0.0.1:3306)/chihaya",
    "deadlock_pause": 1,
    "deadlock_retries": 5
  },
  "channels": {
    "torrent": 5000,
    "user": 5000,
    "transfer_history": 5000,
    "transfer_ips": 5000,
    "snatch": 25
  },
  "intervals": {
    "announce": 1800,
    "min_announce": 900,
    "peer_inactivity": 3900,
    "announce_drift": 300,
    "scrape": 900,
    "database_reload": 45,
    "database_serialize": 68,
    "purge_inactive_peers": 120,
    "flush": 3
  },
  "http": {
    "addr": ":34000",
    "admin_token": "",
    "proxy_header": "",
    "timeout": {
      "read": 300,
      "write": 500,
      "idle": 30
    }
  },
  "announce": {
    "strict_port": false,
    "numwant": 25,
    "max_numwant": 50
  },
  "record": false,
  "scrape": true,
  "log_flushes": true
}
```
- `database`
  - `dsn` - data source name at which to find the database
  - `deadlock_pause` - time in seconds to wait between retries on deadlock; ramps up linearly with each attempt from this value
  - `deadlock_retries` - how many times to retry on deadlock
- `channels` - each channel holds raw data for injection into an SQL statement on flush
  - `torrent` - maximum size of the channel holding changes to the `torrents` table
  - `user` - maximum size of the channel holding changes to the `users_main` table
  - `transfer_history` - maximum size of the channel holding changes to `transfer_history`
  - `transfer_ips` - maximum size of the channel holding changes to `transfer_ips`
  - `snatch` - maximum size of the channel holding snatches for `transfer_history`
- `intervals` - all values are in seconds
  - `announce` - default announce `interval` given to clients
  - `min_announce` - minimum `min_interval` between announces that clients should respect
  - `peer_inactivity` - time after which a peer is considered dead; recommended to be `(min_announce * 2) + (announce_drift * 2)`
  - `announce_drift` - maximum announce drift to incorporate in the default `interval` sent to clients
  - `scrape` - default scrape `interval` given to clients
  - `database_reload` - time between reloads of user and torrent data from the database
  - `database_serialize` - time between database serializations to cache
  - `purge_inactive_peers` - time between flushes of peers older than `peer_inactivity` from the database and memory
  - `flush` - time between database flushes when the channel is less than 50% used
- `http` - HTTP server configuration
  - `addr` - address on which to listen for requests
  - `admin_token` - administrative token used in the `Authorization` header to access advanced Prometheus statistics
  - `proxy_header` - header name to look for the user's real IP address, for example `X-Real-Ip`
  - `timeout`
    - `read` - timeout in milliseconds for reading a request
    - `write` - timeout in milliseconds for writing a response (per write operation)
    - `idle` - how long (in seconds) to keep a connection open for keep-alive requests
- `announce`
  - `strict_port` - if enabled, announces where the client advertises a port outside the range `1024-65535` will fail
  - `numwant` - default number of peers sent on announce when not explicitly specified by the client
  - `max_numwant` - maximum number of peers the tracker will send per announce, even if the client requests more
- `record` - enables or disables the JSON recorder of announces
- `scrape` - enables or disables the `/scrape` endpoint, which allows clients to get peer counts without sending an announce
- `log_flushes` - whether to log all database flushes performed
If `record` is true, chihaya will save all successful announce events to files under the `events` directory. The files are named `events_YYYY-MM-DDTHH.csv` and are split hourly for easier analysis.
The supported database schema can be found in `database/schema.sql`. The example data in the fixtures can be consulted for additional help.