Better CLI options #1

Open
DEGoodmanWilson opened this issue May 31, 2018 · 2 comments

@DEGoodmanWilson
Owner

commented May 31, 2018

  • Options for mana curve
  • Options for card type distribution
  • Options for weighting various factors' importance (e.g., should we pay closer attention to creature power? Or ignore creatures entirely? Does the color distribution really matter?)
  • Options for deck format (Standard, Commander, etc.)
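
A minimal sketch of how these options might be surfaced on the command line, assuming Python and argparse; every flag name and default below is hypothetical, not the project's actual interface:

```python
# Hypothetical CLI sketch for the options listed above; flag names and
# defaults are illustrative only, not the project's actual interface.
import argparse


def build_parser() -> argparse.ArgumentParser:
    parser = argparse.ArgumentParser(description="Evolve a Magic deck")
    # Mana curve: desired counts of cards at each converted mana cost
    parser.add_argument("--mana-curve", type=int, nargs="+", metavar="N",
                        help="target number of cards at CMC 1, 2, 3, ...")
    # Card type distribution, e.g. creature=24,land=36
    parser.add_argument("--type-distribution", type=str,
                        help="comma-separated type=count pairs, e.g. creature=24,land=36")
    # Weighting of factors in the evaluation
    parser.add_argument("--weight-creature-power", type=float, default=1.0,
                        help="importance of creature power (0 to ignore creatures)")
    parser.add_argument("--weight-color-distribution", type=float, default=1.0,
                        help="importance of the color distribution")
    # Deck format
    parser.add_argument("--format", choices=["standard", "modern", "commander"],
                        default="standard", help="deck format to build for")
    return parser


if __name__ == "__main__":
    args = build_parser().parse_args()
    print(args)
```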
@cgapperi

commented Jan 12, 2019

Tonight I was driving from Chicago to Iowa City, listening to a presentation on TensorFlow. Don't ask me why my head went to using ML to optimize a deck. Probably because I am so tired of my sons laughing at "Papa's decks".

The biggest question, I think, is how to create the training data, right? What constitutes a 'good' deck? I think in some regard you are onto something; we have to start somewhere. I like the concept of setting an expectation and then testing the Euclidean distance of the training decks against it. You have really hit on some of the complexities of the features to include in the training data, but I think the real meat is in the labels. How do we know if a deck is playable?

I was brainstorming on this and came up with a few models, but recognized that their simplicity really didn't scratch the itch. So, what if we used the training data to build some decks and then played those decks against each other in another stage of the pipeline? Then the ML would start to learn what a good deck is, no? Or maybe even played the new decks against the decks I own that I have already established as playable?

I am completely new to ML programming, but would be interested in at least bouncing models off the wall with you.
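
One way to read the "set an expectation, then test the Euclidean distance" idea is to describe each deck as a small feature vector and measure how far it sits from a target profile. A minimal sketch, assuming Python; the feature names and target values are made up purely for illustration:

```python
import math

# Hypothetical target profile: the features and values are illustrative only.
TARGET_PROFILE = {
    "avg_cmc": 2.8,         # average converted mana cost
    "creature_ratio": 0.4,  # fraction of the deck that is creatures
    "land_ratio": 0.4,      # fraction of the deck that is lands
}


def deck_features(deck: list[dict]) -> dict:
    """Reduce a deck (a list of card dicts) to the features above."""
    n = len(deck)
    return {
        "avg_cmc": sum(card.get("cmc", 0) for card in deck) / n,
        "creature_ratio": sum("Creature" in card.get("type", "") for card in deck) / n,
        "land_ratio": sum("Land" in card.get("type", "") for card in deck) / n,
    }


def distance_to_target(deck: list[dict]) -> float:
    """Euclidean distance between a deck's features and the target profile."""
    feats = deck_features(deck)
    return math.sqrt(sum((feats[k] - TARGET_PROFILE[k]) ** 2 for k in TARGET_PROFILE))
```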

@DEGoodmanWilson

Owner Author

commented Jan 15, 2019

It's a huge open question, isn't it? Because I'm using a GA to evolve decks, we need to be able to evaluate millions of decks quickly. My most recent efforts are around doing two things:

  • Classifying cards into one or more categories based on their rules text
  • Looking at popular / winning decks from decklists online to find associations between card categories. I.e., discovering things like (to make up an example) that winning decks which include flyers also tend to include counter-spells. Then using those associations to evaluate candidate decks (Ah, this deck has flyers, but does it also have counter-spells? Yes? +10 points!). A rough sketch of that idea is below.
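
As a sketch of that two-step idea, assuming Python; the keyword-to-category mapping, the shape of the decklists, and the +10-style bonus are all illustrative stand-ins, not what the branch actually does:

```python
from collections import Counter
from itertools import combinations

# Step 1: classify cards into categories from their rules text.
# The keyword -> category mapping here is purely illustrative.
CATEGORY_KEYWORDS = {
    "flyer": ["flying"],
    "counterspell": ["counter target spell"],
    "removal": ["destroy target creature", "exile target creature"],
}


def card_categories(rules_text: str) -> set[str]:
    """Return every category whose keywords appear in the card's rules text."""
    text = rules_text.lower()
    return {cat for cat, keywords in CATEGORY_KEYWORDS.items()
            if any(kw in text for kw in keywords)}


# Step 2: mine category co-occurrences from winning decklists, then use them
# to score candidate decks.
def mine_associations(winning_decks: list[list[str]]) -> Counter:
    """Count how often each pair of categories appears together in a winning deck."""
    pair_counts = Counter()
    for deck in winning_decks:
        cats = set().union(*(card_categories(text) for text in deck))
        pair_counts.update(combinations(sorted(cats), 2))
    return pair_counts


def score_deck(deck: list[str], associations: Counter, bonus: int = 10) -> int:
    """Award a bonus for every mined category pair the candidate deck satisfies."""
    cats = set().union(*(card_categories(text) for text in deck))
    return sum(bonus for pair in associations
               if pair[0] in cats and pair[1] in cats)
```

In the GA loop, something like `score_deck` would be the cheap fitness term that gets evaluated for each of the millions of candidate decks.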

This work is going on in a branch, but I don't recall which one. TBH the repo is a bit of a mess, and I haven't touched this in some months—maybe almost a year now. I'll take some time to consolidate my work, and see if I can't put together a reasonable set of introductory documents!

Thanks for your interest! Would love to have more folks on hand helping out 😄
