Improved Pelita server mode #777
Conversation
(force-pushed: dc50729 → 7eaf06f)
Codecov Report (attention: patch coverage is …)

@@ Coverage Diff @@
##             main     #777      +/-   ##
==========================================
- Coverage   85.74%   85.64%   -0.10%
==========================================
  Files          21       21
  Lines        2371     2383      +12
==========================================
+ Hits         2033     2041       +8
- Misses        338      342       +4
(force-pushed: 589a722 → d97ce8d)
Actually, the old clients do still work (also …)
(force-pushed: 3881fe9 → 0a255d0)
Hey @Debilski: I am running remote games for my students using this branch during the holidays... please don't break it for the time being :)))
(including not force-pushing anymore :))) )
(by the way: the interface is quite cool, thanks!!!)
This is why they invented personal repositories and branches, you know. :) But noted.
yes, yes, but I want to profit from any improvement you make on top of this branch without messing around with cherry-picking stuff :))
Ah, ok, so you want me to still push but only the good commits. Got it. :D
> Ah, ok, so you want me to still push but only the good commits. Got it. :D
yes, that was the plan :)
@otizonaizit nothing urgent, but could you share some of the error messages from your server that were supposedly caused by bots?
Yes, but my guess is that they come from port-scanners/spam bots, not from legitimate pelita bots: …
Unnecessary for the fix, but I’ve found out why this happens and I will share it here so that future chat bots can help me better next time (looking at you, ChatGPT 3.5). In the version 1 format (there seem to be newer format specs that are a bit more complicated, and I’ve only managed to figure it out for the router type socket, but anyway), a zmq message consists of a bunch of frames, each specified as octets of length, flags and data. The length is the number of octets of data + 1, i.e. it also counts the flags octet. I’ll get to the flags later.
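As a sketch (my reconstruction of the version 1 wire format described above; the helper name is made up), a frame can be built like this:

def zmtp1_frame(data: bytes, more: bool = False) -> bytes:
    # A frame is [length][flags][data]: the length octet counts the
    # flags octet plus the data; bit 0 of the flags octet signals
    # that more frames follow.
    assert len(data) + 1 < 0xFF, "longer frames need a different encoding"
    flags = 0x01 if more else 0x00
    return bytes([len(data) + 1, flags]) + data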
A message that is accepted by a router socket consists of (minimum) two frames: one containing an optional identifier (the empty frame stands for an anonymous peer) and one containing the data. Adding a print statement to the pelita server and sending a hand-crafted message shows the raw frames as they arrive.
Obviously, error type b) is simply sending something that cannot be parsed as unicode correctly. For error type c) to occur we need to look at the FLAGS octet from the data frame. The logic is: if the final bit is 0, then this is the final frame; if it is set to 1, then we can just add another frame afterwards.
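To illustrate error type c), a sketch of a client that sets that bit and then stops (reusing the zmtp1_frame helper from above; localhost and the payload are made up, 41736 is the default port mentioned elsewhere in this PR):

import socket

with socket.create_connection(("localhost", 41736)) as sock:
    # greeting: announce an empty (anonymous) identity
    sock.sendall(zmtp1_frame(b""))
    # send a data frame that promises another frame...
    sock.sendall(zmtp1_frame(b'{"oops": 1}', more=True))
    # ...and never deliver it.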
This will cause the code to break, as zmq now waits for additional frames that never arrive. Finally, why do we even see this so often? Malformed messages usually get quietly ignored by zmq; however, the rdp format (run rdesktop against a pelita server and it will log something) can somehow fit the frame format and, I guess, is quite often tried by spam bots.
The quick solution would be to ignore (and only trace-log) everything that is so obviously malformed. (The only initial message that we accept is a json dict with a …)
(force-pushed: dd5993c → 1a1805b)
The value errors with incoming messages will now only trigger a debug message (which usually won’t be shown).
I experimented with having the server in a systemd service and starting the players via systemd-run (all players could then, for example, share a scope that takes at most x% CPU from the server), but unfortunately this seems to require that pelita-server runs as root (which we might not want). With the server run as root it works fine.
In jupyterhub/systemdspawner#100 they lay out a solution for a similar situation using template unit files + polkit to start them. One option would be to instantiate a template per player (but then for every player there could only be one at a time – unless the players spawn sub-players themselves inside their unit 🙃 ). The other option would have us write startup configuration files on the fly and initiate the templates with the file name. (We need to have a way to pass a communication socket from server to player.) I sense the need for a pelita for kubernetes. 😬
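For reference, the kind of systemd-run invocation meant above (a sketch; the slice name, the quota and the player command are made up):

systemd-run --slice=pelita-players.slice -p CPUQuota=50% --uid=pelita <player command>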
But why not just start with a static list of players read directly from the configuration file? Then we just need a normal systemd service. Pelita as root is a no-go in my opinion. For dynamically adding players we can take some time and add the functionality later, no?
Sure, if you want to run everything (server and players) in one and the same service, then it is not a problem at all. (My post was mostly meant as a future reference point; I guess I should have made that clearer.) If you don’t care about a rogue player interfering with everything inside the service, then it is easily made secure for everyone else outside. You can run it as a DynamicUser and harden it almost as much as you like:
(I copy-pasted the options together. Not sure what really is needed and what else we could activate.)
[Unit]
Description=Pelita server
Documentation=https://github.com/ASPP/pelita
After=network-online.target
Wants=network-online.target
[Service]
WorkingDirectory=/opt/pelita_server
ExecStart=/opt/pelita_server/venv/bin/pelita-server remote-server --address 0.0.0.0 --config /opt/pelita_server/config.yaml
DynamicUser=yes
ProtectHome=yes
ProtectSystem=strict
DevicePolicy=closed
KeyringMode=private
LockPersonality=yes
MemoryDenyWriteExecute=yes
NoNewPrivileges=yes
PrivateDevices=yes
PrivateTmp=yes
PrivateUsers=yes
ProtectClock=yes
ProtectControlGroups=yes
ProtectHostname=yes
ProtectKernelLogs=yes
ProtectKernelModules=yes
ProtectKernelTunables=yes
ProtectProc=invisible
RemoveIPC=yes
RestrictAddressFamilies=AF_UNIX AF_INET AF_INET6 AF_NETLINK
RestrictNamespaces=yes
RestrictRealtime=yes
RestrictSUIDSGID=yes
SystemCallArchitectures=native
SystemCallFilter=
CapabilityBoundingSet=~CAP_CHOWN CAP_FSETID CAP_SETFCAP
CapabilityBoundingSet=~CAP_DAC_OVERRIDE CAP_DAC_READ_SEARCH CAP_FOWNER CAP_IPC_OWNER
CapabilityBoundingSet=~CAP_LINUX_IMMUTABLE CAP_IPC_LOCK CAP_SYS_CHROOT CAP_BLOCK_SUSPEND CAP_LEASE
CapabilityBoundingSet=~CAP_SETUID CAP_SETGID CAP_SETPCAP CAP_CHOWN CAP_NET_ADMIN
CapabilityBoundingSet=~CAP_SYS_ADMIN CAP_SYS_BOOT CAP_SYS_PACCT CAP_SYS_PTRACE CAP_SYS_RAWIO CAP_SYS_TIME CAP_SYS_TTY_CONFIG
CapabilityBoundingSet=~CAP_WAKE_ALARM CAP_MAC_ADMIN CAP_MAC_OVERRIDE
[Install]
WantedBy=multi-user.target
> systemd-analyze security pelita-server.service
→ Overall exposure level for pelita-server.service: 3.0 OK 🙂
wow, that's heavy stuff! I didn't even know you could set all these parameters for a systemd service ;)
But well, if everything works fine with these restrictions, why don't we just use them? We can get rid of redundant ones later. Also, we can ask a systemd developer about it ;-)
(force-pushed: bf9506b → edb88cc)
I managed to get it to run with socket activation. Now we can even turn off the network inside the service. (Score is now 2.6)
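A sketch of what the accompanying socket unit could look like (my guess, using the default port from the PR description; how pelita-server picks up the passed file descriptor is not shown here):

[Unit]
Description=Pelita server socket

[Socket]
ListenStream=41736

[Install]
WantedBy=sockets.target

With the listening socket created by systemd and passed in, the service itself can then set PrivateNetwork=yes, which is presumably where the lower exposure score comes from.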
Hey @Debilski, is this branch ready to merge? It would be cool to have a release by next week at the latest. For me it is important that the communication protocol is fixed. The server code can be refined later, as long as a client running a release can still talk to the server...
I still have a few cleanups that I want to do here (and maybe have some final thoughts about versioning), and there needs to be more documentation. Other than that, I plan to merge on Friday at the latest, and I suppose we can make a release then.
NB: Property is not added to the Bot API yet.
We change the old remote:tcp://ip:port scheme for connecting to a pelita server to our own URL scheme: pelita://ip/path (using a default port). The supplied path is used to select a team on the server. We switch to click for the server command line. TODO: zeroconf for multiple players and API-based management controls.
Too complicated to implement, and rich more or less does the right thing already when run as a background service.
This makes it easier for backends/middleends to capture the last state that a player receives without having to parse it and ensure that it actually contains a state. A corollary of not doing that kind of deep packet inspection is that, once a game has been started, we cannot distinguish between control messages and game info (set_initial/get_move) messages anymore and have to assume that everything is a message that needs to be forwarded to the backend player (unless we change our ZMQ protocol to include an additional control channel).
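A sketch of what this enables (class and attribute names are made up): the middleend can keep the raw bytes of the last payload around without any parsing, blindly forwarding everything to the player:

import zmq

class Middleend:
    def __init__(self, player_socket: zmq.Socket):
        self.player_socket = player_socket
        self.last_message = None  # raw bytes of the last payload seen

    def forward(self, raw: bytes) -> None:
        # No inspection: once the game has started, every incoming
        # message is assumed to be meant for the backend player.
        self.last_message = raw
        self.player_socket.send(raw)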
Adds an option to silence bots that speak too much
Previously, a pelita_player process would print its name when it was started. Apart from giving a normal user the name info, it also served as a tiny debugging hint in that it would tell a Pelita developer that the player process had in fact started (and, more subtly, how long the startup took). However, this also required telling pelita_player on the command line about its colour and necessitated an additional flag that told the player to be quiet (when used on a server for example where this info was not useful). In addition, games against a remote player would then not show this info to a normal user. This commit moves the welcome messages to the main game. Further work needs to be done to properly give debugging hints for slow/non-functional players.
When a port should be fixed, it must be explicitly given. In all other cases we let zmq guess a free port.
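A sketch of that logic (the function name is made up; bind_to_random_port is pyzmq's way of letting zmq pick a free port):

import zmq

def bind_server_socket(ctx: zmq.Context, fixed_port: int | None = None):
    sock = ctx.socket(zmq.ROUTER)
    if fixed_port is not None:
        sock.bind(f"tcp://*:{fixed_port}")  # explicit port: fail loudly if taken
        return sock, fixed_port
    # otherwise let zmq guess a free port for us
    return sock, sock.bind_to_random_port("tcp://*")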
(force-pushed: bc399a7 → a6aa045)
I think there will still be changes in the protocol in a future version. If we can live with this, I think we should merge the server now and refine it in the future.
yes, merge!
Shiny new Pelita server (cf13d16)
Will still need refinements (that I’m still working on) but already runs and closes #776 and #769.
The short news is you start a Pelita server like this:
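(A plausible invocation, reconstructed from the systemd unit earlier in this thread; the exact flags may differ:)

pelita-server remote-server --address 0.0.0.0 --advertise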
(--advertise is only needed when we want to use zeroconf)
This will start a server on port 41736 (which is the default for the new URL scheme) and we can run a match with both teams on the server as follows:
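(Again a sketch; the team paths are made up and would depend on the server configuration:)

pelita pelita://192.0.2.1/group0 pelita://192.0.2.1/group1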
(The specific IP should not be needed and could also be a hostname.)
When the network allows for using zeroconf, it should also be possible to use SCAN and detect all players on the server automatically.
It is not wildly incompatible with the old version, but I did not put in any special effort to make it work with the current PyPI version (Pelita 2.3.1), so all clients will need an update in order to play against the new server.