@joeholley released this on 07 Dec 03:06

v0.2.0 (alpha)

This is a pretty large update. Custom MMFs or evaluators from 0.1.0 may need some tweaking to work with this version. Some Backend API function arguments have changed. Please join the Slack channel if you need help (Signup link)!

v0.2.0 focused on adding additional functionality to Backend API calls and on reducing the amount of boilerplate code required to make a custom Matchmaking Function. For this, a new internal API for use by MMFs called the Matchmaking Logic API (MMLogic API) has been added. Many of the core components and examples had to be updated to use the new Backend API arguments and the modules to support them, so we recommend you rebuild and redeploy all the components to use v0.2.0.

Release notes

  • MMLogic API is now available. Deploy it to kubernetes using the appropriate JSON file and check out the gRPC API specification to see how to use it. To write a client against this API, you'll need to compile the protobuf files to your language of choice. There is an associated cloudbuild.yaml file and Dockerfile for it in the root directory.
  • When you use the MMLogic API to filter players into pools, it will attempt to report back the number of players that matched the filters and how long the filters took to query state storage.
  • An example MMF using it has been written in Python3. There is an associated cloudbuild.yaml file and Dockerfile for it in the root directory. By default the example backend client is now configured to use this MMF, so make sure you have it available before you try to run the latest backend client.
  • An example MMF using it has been contributed by Ilya Hrankouski in PHP (thanks!).
  • The example golang MMF has been updated to use the latest data schemas for MatchObjects, and renamed to manual-simple to denote that it is manually manipulating Redis, not using the MMLogic API.
  • The API specs have been split into separate files per API and the protobuf messages are in a separate file. Things were renamed slightly as a result, and you will need to update your API clients. The Frontend API hasn't had its messages moved to the shared messages file yet, but this will happen in an upcoming version.
  • The message model for using the Backend API has changed slightly - for calls that make MatchObjects, the expectation is that you will provide a MatchObject with a few fields populated, and it will then be shuttled along through state storage to your MMF and back out again, with various processes 'filling in the blanks' of your MatchObject, which is then returned to your code calling the Backend API. Read the gRPC API specification for more information. A hedged Go sketch of this flow appears after this list.
  • As part of this, compiled protobuf golang modules now live in the internal/pb directory. There's a handy bash script for compiling them from the api/protobuf-spec directory into this new internal/pb directory for development in your local golang environment if you need it.
  • As part of this Backend API message shift and the advent of the MMLogic API, 'player pools' and 'rosters' are now first-class data structures in MatchObjects for those who wish to use them. You can ignore them if you like, but if you want to use some of the MMLogic API calls to automate tasks for you - things like filtering a pool of players according to attributes or adding all the players in your rosters to the ignorelist so other MMFs don't try to grab them - you'll need to put your data into the protobuf messages so Open Match knows how to read them. The sample backend client test profile JSON has been updated to use this format if you want to see an example.
  • Rosters were formerly space-delimited lists of player IDs. They are now first-class repeated protobuf message fields in the Roster message format. That means that in most languages, you can access the roster as a list of players using your native language data structures (more info can be found in the guide for using protocol buffers in your language of choice). If you don't care about the new fields or the new functionality, you can just leave every field other than the player ID unset. A small Go sketch of this appears after this list.
  • Open Match is transitioning to using protocol buffer messages as its internal data format. There is now a Redis state storage golang module for marshaling and unmarshaling MatchObject messages to and from Redis. The code isn't very clean right now, but it will be improved over the next couple of releases.
  • Ignorelists now exist, and have a Redis state storage golang module for CRUD access. Currently three ignorelists are defined in the config file with their respective parameters. These are implemented as Sorted Sets in Redis (see the sorted-set sketch after this list).
  • For those who only want to stand up Open Match and aren't interested in individually tweaking the required kubernetes resources, there are now three YAML files that can be used to install Redis, install Open Match, and (optionally) install Prometheus. You'll still need the sed instructions from the Developer Guide to substitute in the name of your Docker container registry.
  • A super-simple module has been created for doing intersections, unions, and differences of lists of player IDs. It lives in internal/set/set.go. A rough sketch of these helpers appears after this list.
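
As a concrete illustration of the Backend API message flow described above, here is a minimal sketch in Go. The MatchObject fields and the CreateMatch signature are simplified stand-ins for the generated code in internal/pb, not the real gRPC surface; check the gRPC API specification for the actual definitions.

```go
// backend_flow_sketch.go — illustrative only; types and signatures are
// stand-ins for the generated internal/pb code.
package main

import (
	"context"
	"fmt"
)

// Stand-in for the MatchObject protobuf message.
type MatchObject struct {
	Id         string
	Properties string   // your profile JSON goes here
	Rosters    []string // filled in on the way back through your MMF
	Error      string
}

// Stand-in for the Backend API gRPC client.
type backendClient interface {
	CreateMatch(ctx context.Context, in *MatchObject) (*MatchObject, error)
}

// requestMatch populates only a couple of fields; Open Match shuttles the
// object through state storage and your MMF, filling in the blanks.
func requestMatch(ctx context.Context, be backendClient) (*MatchObject, error) {
	req := &MatchObject{
		Id:         "profile.simple-1v1",
		Properties: `{"properties": {"mode": "demo"}}`,
	}
	return be.CreateMatch(ctx, req)
}

// fakeBackend lets the sketch run without a live Open Match deployment.
type fakeBackend struct{}

func (fakeBackend) CreateMatch(ctx context.Context, in *MatchObject) (*MatchObject, error) {
	out := *in
	out.Rosters = []string{"roster filled in by the MMF"}
	return &out, nil
}

func main() {
	result, err := requestMatch(context.Background(), fakeBackend{})
	if err != nil {
		panic(err)
	}
	fmt.Println("rosters:", result.Rosters, "error field:", result.Error)
}
```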
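
Here is a small sketch of what the new repeated-message rosters look like from Go. The Player and Roster types below are stand-ins for the generated protobuf structs in internal/pb; the exact field names may differ.

```go
// roster_sketch.go — illustrative stand-ins for the generated Roster/Player
// protobuf messages; only the player ID is required if you ignore the rest.
package main

import "fmt"

type Player struct {
	Id         string
	Properties string
}

type Roster struct {
	Name    string
	Players []*Player
}

func main() {
	roster := &Roster{
		Name: "red-team",
		Players: []*Player{
			{Id: "player-1"}, // other fields can stay unset
			{Id: "player-2"},
		},
	}

	// Rosters are now native lists rather than space-delimited strings.
	for _, p := range roster.Players {
		fmt.Println(p.Id)
	}
}
```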
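
If you're curious how the ignorelist Sorted Sets behave, you can poke at them directly from Go. This sketch uses the redigo client with an illustrative key name and expiry window; it is not the API of the state storage module itself, and the values are not the ones from the config file.

```go
// ignorelist_sketch.go — exploratory example only; key name and TTL are made up.
package main

import (
	"time"

	"github.com/gomodule/redigo/redis"
)

func main() {
	conn, err := redis.Dial("tcp", "localhost:6379")
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	now := time.Now().Unix()

	// Add a player, using the current unix time as the sorted-set score.
	if _, err := conn.Do("ZADD", "ignorelist:proposed", now, "player-1234"); err != nil {
		panic(err)
	}

	// Expire entries older than 800 seconds by removing a score range.
	if _, err := conn.Do("ZREMRANGEBYSCORE", "ignorelist:proposed", 0, now-800); err != nil {
		panic(err)
	}
}
```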
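
Finally, a rough sketch of the kind of set helpers the new internal/set/set.go module provides. The function names and signatures below are illustrative, not the module's actual exports.

```go
// setops_sketch.go — illustrative player-ID set helpers; not the internal/set API.
package main

import "fmt"

// union returns every ID that appears in a or b, without duplicates.
func union(a, b []string) []string {
	seen := map[string]bool{}
	var out []string
	for _, id := range a {
		if !seen[id] {
			seen[id] = true
			out = append(out, id)
		}
	}
	for _, id := range b {
		if !seen[id] {
			seen[id] = true
			out = append(out, id)
		}
	}
	return out
}

// intersection returns IDs present in both a and b.
func intersection(a, b []string) []string {
	inA := map[string]bool{}
	for _, id := range a {
		inA[id] = true
	}
	seen := map[string]bool{}
	var out []string
	for _, id := range b {
		if inA[id] && !seen[id] {
			seen[id] = true
			out = append(out, id)
		}
	}
	return out
}

// difference returns IDs present in a but not in b.
func difference(a, b []string) []string {
	inB := map[string]bool{}
	for _, id := range b {
		inB[id] = true
	}
	var out []string
	for _, id := range a {
		if !inB[id] {
			out = append(out, id)
		}
	}
	return out
}

func main() {
	a := []string{"p1", "p2", "p3"}
	b := []string{"p2", "p4"}
	fmt.Println(union(a, b))        // [p1 p2 p3 p4]
	fmt.Println(intersection(a, b)) // [p2]
	fmt.Println(difference(a, b))   // [p1 p3]
}
```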

Roadmap

  • It has become clear from talking to multiple users that the software they write to talk to the Backend API needs a name. 'Backend API Client' is technically correct, but given how many APIs are in Open Match and the overwhelming use of 'Client' to refer to a Game Client in the industry, we're currently calling this a 'Director': its primary purpose is to 'direct' which profiles are sent to the backend and to 'direct' the resulting MatchObjects to game servers. Further discussion and suggestions are welcome.
  • We'll be entering the design stage on longer-running MMFs before the end of the year. We'll put a proposal together on the GitHub repo as a request for comments, so please keep an eye out for that.
  • Match profiles providing multiple MMFs to run is no longer planned. Just send multiple copies of the profile with different MMFs specified via the Backend API.
  • Redis Sentinel will likely not be supported. Instead, replicated instances and HAProxy may be the HA solution of choice. There's an outstanding issue to investigate this and implement it if it fills our needs; feel free to contribute!