Just what the world needs: another IRC bot
bugs
.gitignore
.mailmap
AUTHORS
GNUmakefile
README.censor-nasty-words
README.irc-servers
README.md
README.xlate.language-codes
TODO.md
analyze-quotes.rkt
clearenv.rkt
cutover.txt
el-buggo
elis-log-parsing-ideas.rkt
eval-trouble
freenode-main.rkt
get-big-log.sh
git-version.rkt
http.rkt
info.rkt
irc-process-line.rkt
iserver.rkt
lexer.rkt
line-structure
lobster-skeleton.sql
loop.rkt
memory-notes.org
quotes
quotes.rkt
re.rkt
reloadable.rkt
rudybot.conf
sandboxes.rkt
search.rkt
servers.rkt
sighting-test.rkt
sketch.rkt
sounds
spelled-out-time.rkt
tinyurl.rkt
update-sightings.rkt
userinfo.rkt
utils.rkt
utterance.rkt
vars.rkt
xlate.rkt
zdate.rkt

README.md

If you're just trying the bot out, start it via `racket freenode-main.rkt`. If you want it to run continuously, and happen to have upstart available (which in practice means you're running Ubuntu), you can copy `rudybot.conf` to `/etc/init` and then run `# start rudybot`.
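
In concrete terms, something like this (the `sudo` invocations are an assumption; the original just shows a root `#` prompt):

```sh
# Just trying it out:
racket freenode-main.rkt

# Or install it as an upstart service on Ubuntu:
sudo cp rudybot.conf /etc/init
sudo start rudybot
```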

Getting an error about `rackunit` not being available? That can be caused by using the `racket-textual` package instead of the full `racket` package.
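
If reinstalling the full distribution isn't convenient, installing the missing package directly may also work (assuming your Racket is new enough to ship `raco pkg`):

```sh
$ raco pkg install rackunit
```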

Run the unit tests like this:

```sh
$ raco test -x .
```

Run the integration tests by first creating `corpus.db` (alas, I can't recall the steps at the moment), then:

```sh
$ racket servers.rkt
```

Some specs:

* RFC 1459 (the original Internet Relay Chat protocol)
* RFC 2812 (the IRC client protocol)

## Backups

I just started backing up the "big-log", which holds a more-or-less raw transcript of all communication between the bot and the IRC server. (The file `corpus.db` is a SQLite db holding essentially the same data; it can probably be recreated from the log via some simple hacking.)

I do the backups with an hourly cron job that runs this command:

```sh
aws s3 cp --quiet /mnt2/rudybot/big-log s3://rudybot-data-backups --region us-west-1
```
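
For reference, the crontab entry would look something like this (the top-of-the-hour schedule and the absence of a wrapper script are assumptions):

```
# min hour dom mon dow  command
0 * * * * aws s3 cp --quiet /mnt2/rudybot/big-log s3://rudybot-data-backups --region us-west-1
```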

That's insanely inefficient, since it copies the same file over and over (the file doesn't grow all that much in an hour), but perhaps it's cheap enough. We'll see.