---
title: "Frequently Asked Questions"
linkTitle: "FAQ"
weight: 400
date: 2020-04-06
---
Agones creates a backing Pod with the appropriate configuration parameters for each GameServer that is configured in the cluster. The Pod and the GameServer share the same name, if you are ever looking to match one to the other.
Yes.
Agones is inherently un-opinionated about the lifecycle of your game. When you call [SDK.Allocate()]({{< ref "/docs/Guides/Client SDKs/_index.md#allocate" >}}) you are protecting that GameServer instance from being scaled down for the duration of the Allocation. Typically, you would run one game session within a single allocation. However, you could allocate, and run N sessions on a single GameServer, and then de-allocate/shutdown at a later time.
If you wish to return an Allocated GameServer to the Ready state, you can use the [SDK.Ready()]({{< ref "/docs/Guides/Client SDKs/_index.md#ready" >}}) command whenever it makes sense for your GameServer to return to the pool of potentially Allocatable and/or scaled down GameServers.
Have a look at the integration pattern ["Reusing Allocated GameServers for more than one game session"]({{% ref "/docs/Integration Patterns/reusing-gameservers.md" %}}) for more details.
- Integrate your game server binary with the [Agones SDK]({{< ref "/docs/Guides/Client SDKs/_index.md" >}}), calling the appropriate [lifecycle event]({{< ref "/docs/Guides/Client SDKs/_index.md#lifecycle-management" >}}) hooks.
- Containerize your game server binary with Docker.
- Publish your Docker image in a container registry/repository.
- Create a [gameserver.yaml]({{< ref "/docs/Reference/gameserver.md" >}}) file for your container image.
- Test your gameserver.yaml file.
- Consider utilizing [Fleets]({{< ref "/docs/Reference/fleet.md" >}}) and [Autoscalers]({{< ref "/docs/Reference/fleetautoscaler.md" >}}) for deploying at scale.
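The steps above start from a gameserver.yaml file; a minimal sketch is shown below. The resource and field names follow the [GameServer reference]({{< ref "/docs/Reference/gameserver.md" >}}), but the image name and port values are placeholders for your own game server.

```yaml
# A minimal GameServer specification (sketch).
# The image and port values are placeholders for your own game server.
apiVersion: "agones.dev/v1"
kind: GameServer
metadata:
  name: my-game-server
spec:
  ports:
    - name: default
      containerPort: 7654   # the port your game server binary listens on
      protocol: UDP
  template:
    spec:
      containers:
        - name: my-game-server
          image: gcr.io/my-project/my-game-server:0.1
```

You can test it by applying the file with kubectl and watching the GameServer move to the Ready state.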
- In-Engine
- Integrate the SDK directly with the dedicated game server, such that it is part of the same codebase.
- Sidecar
- Use a Kubernetes sidecar pattern to run the SDK in a separate process that runs alongside your game server binary, and can share the disk and network namespace. This game server binary could expose its own API, or write to a shared file, that the sidecar process integrates with, and can then communicate back to Agones through the SDK.
- Wrapper
- Write a process that wraps the game server binary, and intercepts aspects such as the foreground log output, and use that information to react and communicate with Agones appropriately. This can be particularly useful for legacy game servers or game server binaries wherein you do not have access to the original source. You can see this in both the {{< ghlink href="examples/xonotic" >}}Xonotic{{< /ghlink >}} and {{< ghlink href="examples/supertuxkart" >}}SuperTuxKart{{< /ghlink >}} examples.
Either utilise the [REST API]({{< ref "/docs/Guides/Client SDKs/rest.md" >}}), which can be [generated from the Swagger specification]({{< ref "/docs/Guides/Client SDKs/rest.md#generating-clients" >}}), or [generate your own gRPC client from the proto file]({{< ref "/docs/Guides/Client SDKs/_index.md" >}}).
Game Server SDKs are a thin wrapper around either REST or gRPC clients, depending on language or platform, and can be used as examples.
A GameServerAllocation has a [spec.metadata section]({{< ref "/docs/Reference/gameserverallocation.md" >}}) that will apply any configured Labels and/or Annotations to a requested GameServer at Allocation time.
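For example, a GameServerAllocation could apply gameplay metadata like the following sketch; the Fleet name, label, and annotation values are illustrative, and the schema follows the [GameServerAllocation reference]({{< ref "/docs/Reference/gameserverallocation.md" >}}).

```yaml
# A sketch of a GameServerAllocation that applies metadata to the
# allocated GameServer; the label and annotation values are examples.
apiVersion: "allocation.agones.dev/v1"
kind: GameServerAllocation
spec:
  required:
    matchLabels:
      agones.dev/fleet: my-fleet   # allocate from this hypothetical Fleet
  metadata:
    labels:
      mode: deathmatch
    annotations:
      map: garden22
```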
The game server binary can watch for the state change to Allocated, as well as changes to the GameServer metadata, through [SDK.WatchGameServer()]({{< ref "/docs/Guides/Client SDKs/_index.md#watchgameserverfunctiongameserver" >}}).
Combining these two features allows you to pass information such as map data, gameplay metadata and more to a game server binary at Allocation time, through Agones functionality.
Do note that if you wish to have either the labels or annotations on the GameServer that are set via a GameServerAllocation be editable by the game server binary with the Agones SDK, the label key will need to be prefixed with agones.dev/sdk-.
See [SDK.SetLabel()]({{< ref "/docs/Guides/Client SDKs/_index.md#setlabelkey-value" >}}) and [SDK.SetAnnotation()]({{< ref "/docs/Guides/Client SDKs/_index.md#setannotationkey-value" >}}) for more information.
The Agones game server SDK allows you to set custom Labels and Annotations through the [SDK.SetLabel()]({{< ref "/docs/Guides/Client SDKs/_index.md#setlabelkey-value" >}}) and [SDK.SetAnnotation()]({{< ref "/docs/Guides/Client SDKs/_index.md#setannotationkey-value" >}}) functionality respectively.
This information is then queryable via the [Kubernetes API]({{< ref "/docs/Guides/access-api.md" >}}), and can be used for game specific, custom integrations.
If my game server requires more states than what Agones provides (e.g. Ready, Allocated, Shutdown, etc.), can I add my own?
If you want to track custom game server states, then you can utilise the game server client SDK [SDK.SetLabel()]({{< ref "/docs/Guides/Client SDKs/_index.md#setlabelkey-value" >}}) and [SDK.SetAnnotation()]({{< ref "/docs/Guides/Client SDKs/_index.md#setannotationkey-value" >}}) functionality to expose these custom states to outside systems via your own labels and annotations.
This information is then queryable via the [Kubernetes API]({{< ref "/docs/Guides/access-api.md" >}}), and can be used for game specific state integrations with systems like matchmakers and more.
Custom labels could also potentially be utilised with [GameServerAllocation required and/or preferred label selectors]({{< ref "/docs/Reference/gameserverallocation.md" >}}), to further refine Ready GameServer selection on Allocation.
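As a sketch, an Allocation could require a custom label set by the game server through the SDK (hence the agones.dev/sdk- prefix), while preferring another; the keys and values here are purely illustrative.

```yaml
# Allocation refined by custom labels set via the game server SDK.
# The label keys and values below are hypothetical examples.
apiVersion: "allocation.agones.dev/v1"
kind: GameServerAllocation
spec:
  required:
    matchLabels:
      agones.dev/sdk-gs-session-ready: "true"   # must match
  preferred:
    - matchLabels:
        agones.dev/sdk-region: europe-west1     # matched if possible
```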
The answer to this question is "it depends" 😁.
As a rule of thumb, we recommend clusters no larger than 500 nodes, based on production workloads.
That being said, this is highly dependent on Kubernetes hosting platform, control plane resources, node resources, requirements of your game server, game server session length, node spin up time, etc, and therefore you should run your own load tests against your hosting provider to determine the optimal cluster size for your game.
We recommend running multiple clusters for your production GameServer workloads, to spread the load and provide extra redundancy across your entire game server fleet.
Each GameServer inherits the IP address of the Node on which it resides. If an ExternalIP address can be found on the Node (which it should be, if it's a publicly addressable Node), that is utilised; otherwise it falls back to using the InternalIP address.
[You can make this available by using the feature flag.]({{< ref "/docs/Guides/feature-stages.md" >}})
Agones uses an IP address as the game server address by default.
This works fine in most cases, but can be a problem if your game server and game client are running on different IP protocols: e.g. the game server is connected only to the IPv4 network, and the game client is connected only to the IPv6 network.
When this feature is enabled, Agones will preferentially use the External DNS of the Node on which the GameServer Pod is running.
Since the game client can get the domain name instead of the IP address, it will be able to communicate with the game server via DNS64 and NAT64.
Traffic is routed to the GameServer Container utilising the hostPort field on a [Pod's Container specification]({{< k8s-api href="#containerport-v1-core" >}}).
This opens a port on the host Node and routes traffic to the container via iptables or ipvs, depending on host provider and/or network overlay.
In worst case scenarios this routing can add an extra 0.5ms latency to UDP packets, but that is extremely rare.
The decision was made not to use hostNetwork, as the benefits of having isolated network namespaces between game server processes give us the ability to run sidecar containers, and provide an extra layer of security to each game server process.
We routinely see users running container images that are multiple GB in size.
The only downside to larger images is that they can take longer to load for the first time on a Kubernetes node, but this can be managed by your [Fleet]({{< ref "/docs/Reference/fleet.md" >}}) and [Fleet Autoscaling]({{< ref "/docs/Reference/fleetautoscaler.md" >}}) configuration to ensure this load time is taken into account when a new Node first loads the container.
When running Agones on GKE, we have verified that an Agones cluster can start up to 10,000 GameServer instances per minute (not including node creation).
This number could vary depending on the underlying scaling capabilities of your cloud provider, Kubernetes cluster configuration, and your GameServer Ready startup time, and therefore we recommend you always run your own load tests for your specific game and game server containers.
As of Kubernetes 1.14, Windows Container support has been released as GA.
That being said, Agones has yet to be tested with Windows Nodes, and work on this feature has not been started.
If you are interested in this feature and/or in contributing, please add a comment to the "Running Windows game server" ticket.
Yes! There are several! Check out both our [official]({{% ref "/docs/Examples/_index.md#integration-with-open-match" %}}) and [third party]({{% ref "/docs/Third Party Content/examples.md#integration-with-open-match" %}}) examples!