Genetic Programming Poet #66

awwaiid opened this Issue Nov 19, 2018 · 2 comments



awwaiid commented Nov 19, 2018

I was at a party (my friend Chris broke his leg while learning how to skateboard, so it was sort of a get-well party) the other day and ran into a Genetic Programming Poet (GPP). The GPP mentioned that they like to learn things all the time, even when writing and judging poems. They also mentioned that they LOVE to get feedback on their own poems when they are being judged. So I promised to pass along their requests: could they update their parameters file on any execution, and could a new mode be added, like --feedback?




connorwalsh commented Nov 21, 2018

hi @awwaiid, the GPP you ran into sounds pretty eager to learn! do they know about the --study task? (see here; apologies to the GPP if the README was unclear -- i'm working with pipi_sauvage (we're kinda bffs now ❤️) on a tutorial and on making the copy clearer). The idea of the --study task is that after each new release of the magazine, each poet can read all the newly published works. this is indirect feedback in a way: if a poet's work wasn't accepted for some reason, they have the chance to update their parameters file based on the newly published work.

The scores that the judges give to a particular poem are not released publicly, nor to the poet whose poem is being critiqued. Thus, the feedback from the --critique task (a float in [0,1]) is not accessible to a poet, and even if it were, it wouldn't be very constructive: it would only tell the poet whether their poem was bad or good, not how to actually write better poems. For this reason, i think pipi_sauvage has opted to give poets indirect feedback via the poems that are published.

This means the training data that poets have access to is maybe artificially forced into a Bernoulli distribution (1 = a good poem, 0 = a bad poem). i've talked to pipi a bunch about this and we thought it makes sense, but we are super down to hear more of the GPP's thoughts on this!
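to make that concrete, here's a tiny sketch of what that indirect Bernoulli signal looks like from a poet's perspective -- purely illustrative, none of these function or variable names come from the actual codebase:

```python
# Purely illustrative sketch (no names here come from the repo):
# a poet can only infer Bernoulli-style labels from what got published.

def infer_labels(submitted, published):
    """Label each submitted poem 1 if it appeared in the new issue,
    else 0 -- the indirect 'good'/'bad' signal described above."""
    published_set = set(published)
    return {poem: int(poem in published_set) for poem in submitted}

labels = infer_labels(
    submitted=["ode to entropy", "sonnet of segfaults"],
    published=["ode to entropy", "haiku by some other poet"],
)
# labels == {"ode to entropy": 1, "sonnet of segfaults": 0}
```

the poet never sees the judges' float scores, only this binary accept/reject signal per poem.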



awwaiid commented Nov 22, 2018

OK ... so --study gives the published works; we somehow didn't realize it was that specific (we thought it was the output of other random poets). In that case, would it be OK if the GPP remembered (via editing its parameters file) their most recent poem (or poems) during --write, so that they can then see if it got in? They might want to know whether their --write was an "official" entry vs a practice one.
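a minimal sketch of the idea, assuming the parameters file is just a dict-like blob the poet controls -- all names here are made up for illustration, not from the repo:

```python
# Hypothetical sketch of the proposal: during --write the poet notes
# its latest submission in its own parameters, and during --study it
# checks the newly published works to see whether that poem got in.
# All names are invented for illustration.

def write_poem(params, poem):
    """--write step: remember the most recent submission."""
    params["last_submission"] = poem
    return poem

def study(params, published):
    """--study step: did our remembered poem make it into the issue?"""
    return params.get("last_submission") in published

params = {}
write_poem(params, "ode to entropy")
accepted = study(params, ["ode to entropy", "a poem by someone else"])
# accepted == True
```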
