
100% cpu in mainthread due to not closing properly? (channel.connected == False) #418

Open

djay opened this issue Sep 11, 2023 · 94 comments · May be fixed by #419 or #435

Comments

@djay

djay commented Sep 11, 2023

Following on from debugging in this issue - collective/haufe.requestmonitoring#15

What we see is waitress switching to 100% CPU and staying there. It happens randomly in production (within a week) and we haven't traced it back to a particular request.

Using a sampling profiler on waitress with 2 threads (in prod), we identified the thread using the CPU as the main thread (top -H); this is the profile. Note that since this is prod there are other requests, so not all activity is related to the looping that causes this bug.

 Austin  TUI   Wall Time Profile                                                                                             CPU  99% ▇█▇█▇▇▇▇   MEM 263M ████████    5/5
   _________   Command /app/bin/python3.9 /app/parts/instance/bin/interpreter /app/eggs/Zope-5.6-py3.9.egg/Zope2/Startup/serve.py /app/parts/instance/etc/wsgi.ini
   ⎝__⎠ ⎝__⎠   Python 3.9.0     PID 3351466     PID:TID 3351466:11          
               Samples 1451365  ⏲️   3'16"       Threshold 0%   
  OWN    TOTAL    %OWN   %TOTAL  FUNCTION                                                                                                                                     
  00"    3'16"     0.0%  100.0%  └─ <module> (/app/parts/instance/bin/interpreter:326)                                                                                       ▒
  00"    3'16"     0.0%  100.0%     └─ <module> (/app/eggs/Zope-5.6-py3.9.egg/Zope2/Startup/serve.py:255)                                                                    ▒
  00"    3'16"     0.0%  100.0%        └─ main (/app/eggs/Zope-5.6-py3.9.egg/Zope2/Startup/serve.py:251)                                                                     ▒
  00"    3'16"     0.0%  100.0%           └─ run (/app/eggs/Zope-5.6-py3.9.egg/Zope2/Startup/serve.py:217)                                                                   │
  00"    3'16"     0.0%  100.0%              └─ serve (/app/eggs/Zope-5.6-py3.9.egg/Zope2/Startup/serve.py:203)                                                              │
  00"    3'16"     0.0%  100.0%                 └─ serve (/app/eggs/plone.recipe.zope2instance-6.11.0-py3.9.egg/plone/recipe/zope2instance/ctl.py:942)                       │
  00"    3'16"     0.0%  100.0%                    └─ serve_paste (/app/eggs/plone.recipe.zope2instance-6.11.0-py3.9.egg/plone/recipe/zope2instance/ctl.py:917)              │
  00"    3'16"     0.0%  100.0%                       └─ serve (/app/eggs/waitress-2.1.2-py3.9.egg/waitress/__init__.py:19)                                                  │
  00"    3'16"     0.0%  100.0%                          └─ run (/app/eggs/waitress-2.1.2-py3.9.egg/waitress/server.py:322)                                                  │
  05"    3'16"     2.5%   99.9%                             ├─ loop (/app/eggs/waitress-2.1.2-py3.9.egg/waitress/wasyncore.py:245)                                           │
  36"     44"     18.3%   22.4%                             │  ├─ poll (/app/eggs/waitress-2.1.2-py3.9.egg/waitress/wasyncore.py:158)                                        │
  05"     05"      2.4%    2.4%                             │  │  ├─ readable (/app/eggs/waitress-2.1.2-py3.9.egg/waitress/server.py:290)                                    │
  01"     02"      0.4%    0.9%                             │  │  ├─ write (/app/eggs/waitress-2.1.2-py3.9.egg/waitress/wasyncore.py:117)                                    │
  01"     01"      0.4%    0.5%                             │  │  │  ├─ handle_write_event (/app/eggs/waitress-2.1.2-py3.9.egg/waitress/wasyncore.py:517)                    │
  00"     00"      0.0%    0.0%                             │  │  │  │  ├─ handle_write (/app/eggs/waitress-2.1.2-py3.9.egg/waitress/channel.py:98)                          │
  00"     00"      0.0%    0.0%                             │  │  │  │  └─ handle_write (/app/eggs/waitress-2.1.2-py3.9.egg/waitress/channel.py:95)                          │
  00"     00"      0.0%    0.0%                             │  │  │  ├─ handle_write_event (/app/eggs/waitress-2.1.2-py3.9.egg/waitress/wasyncore.py:514)                    │
  00"     00"      0.0%    0.0%                             │  │  │  ├─ handle_write_event (/app/eggs/waitress-2.1.2-py3.9.egg/waitress/wasyncore.py:515)                    │
  00"     00"      0.0%    0.0%                             │  │  │  │  └─ handle_write (/app/eggs/waitress-2.1.2-py3.9.egg/waitress/channel.py:98)                          │
  00"     00"      0.0%    0.0%                             │  │  │  ├─ poll (/app/eggs/waitress-2.1.2-py3.9.egg/waitress/wasyncore.py:150)                                  │
  00"     00"      0.0%    0.0%                             │  │  │  │  └─ handle_write (/app/eggs/waitress-2.1.2-py3.9.egg/waitress/channel.py:98)                          │
  00"     00"      0.0%    0.0%                             │  │  │  └─ handle_write_event (/app/eggs/waitress-2.1.2-py3.9.egg/waitress/wasyncore.py:509)                    │
  00"     00"      0.0%    0.0%                             │  │  │     └─ handle_write (/app/eggs/waitress-2.1.2-py3.9.egg/waitress/channel.py:98)                          │
  00"     00"      0.0%    0.1%                             │  │  ├─ write (/app/eggs/waitress-2.1.2-py3.9.egg/waitress/wasyncore.py:113)                                    │
  00"     00"      0.1%    0.1%                             │  │  │  ├─ handle_write_event (/app/eggs/waitress-2.1.2-py3.9.egg/waitress/wasyncore.py:517)                    │
  00"     00"      0.0%    0.0%                             │  │  │  │  └─ handle_write (/app/eggs/waitress-2.1.2-py3.9.egg/waitress/channel.py:98)                          │
  00"     00"      0.0%    0.0%                             │  │  │  └─ handle_write_event (/app/eggs/waitress-2.1.2-py3.9.egg/waitress/wasyncore.py:509)                    │
  00"     00"      0.1%    0.1%                             │  │  ├─ readable (/app/eggs/waitress-2.1.2-py3.9.egg/waitress/channel.py:154)   

From the profiling it looks like the channel is writable but channel.connected == False.
So it goes into a loop without writing or closing, since it never actually does anything to the socket.
https://github.com/Pylons/waitress/blob/main/src/waitress/channel.py#L98
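For reference, the guard at that line in waitress 2.1.2 is essentially the following (it is quoted again further down this thread together with the proposed fix):

def handle_write(self):
    # Precondition: there's data in the out buffer to be sent, or
    # there's a pending will_close request

    if not self.connected:
        # we dont want to close the channel twice
        return

writable() keeps answering True while there is pending output or a pending close, but handle_write() bails out because connected is False, so nothing is ever flushed or closed and select() keeps reporting the socket as writable.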

EDIT: My suspicion is that what started this was a client that shut down (half) very quickly after a connect, and that this happened before the dispatcher finished being set up. That causes getpeername to fail with EINVAL and connected = False.

try:
    self.addr = sock.getpeername()
except OSError as err:
    if err.args[0] in (ENOTCONN, EINVAL):
        # To handle the case where we got an unconnected
        # socket.
        self.connected = False
    else:
        # The socket is broken in some unknown way, alert
        # the user and remove it from the map (to prevent
        # polling of broken sockets).
        self.del_channel(map)
        raise
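For illustration, the suspected client behaviour would look roughly like this (a hedged sketch: host, port and the exact shutdown direction are assumptions, and hitting the race also requires the server to be slow enough that the channel has not been constructed yet):

# Hypothetical client sketch of the suspected trigger, not a reliable reproducer.
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.connect(("127.0.0.1", 8080))  # assumed waitress address
# Half-close immediately after the connect. If this lands before the server's
# dispatcher calls getpeername(), that call can fail with EINVAL/ENOTCONN and
# leave connected = False on a socket that select() still reports as writable.
sock.shutdown(socket.SHUT_RD)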

Could be same issue as #411 but hard to tell.

One fix in #419 but could be better ways?

@djay
Author

djay commented Sep 11, 2023

@d-maurer

The error is where self.connected was set to False.
There, it should have been ensured that the corresponding "fileno"
is removed from socket_map and that it will not be put there again
(as long as self.connected remains False).

Something exceptional must have brought waitress into this state
(otherwise, we would have lots of 100% CPU usage reports).
I assume that some bad client has used the system call shutdown
to close only part of the socket connection and that waitress does not
anticipate something like that.

Waitress does seem to close properly if a shutdown is received (empty data),
see https://github.com/Pylons/waitress/blob/main/src/waitress/wasyncore.py#L449

So I have to keep looking for a way connected can be False while the channel is still trying to write.
Yes, it is most likely bad actors. We get hit by this a lot in our line of business.

@d-maurer

d-maurer commented Sep 11, 2023 via email

@d-maurer

d-maurer commented Sep 11, 2023 via email

@djay
Author

djay commented Sep 11, 2023

@d-maurer yes it could work to insert a self.del_channel(self) in

@djay
Author

djay commented Sep 11, 2023

@d-maurer I created a PR; it passes the current tests and they hit that line, but it's hard to know how to make a test for this scenario...

#419

@d-maurer

d-maurer commented Sep 11, 2023 via email

@djay
Author

djay commented Sep 12, 2023

@mcdonc another solution, instead of #419, might be the one below. Is that preferable?

def poll(timeout=0.0, map=None):
    if map is None:  # pragma: no cover
        map = socket_map
    if map:
        r = []
        w = []
        e = []
        for fd, obj in list(map.items()):  # list() call FBO py3
            # prevent getting into a loop for sockets disconnected but not properly closed.
            if obj.check_client_disconnected():
                obj.del_channel()
                continue
            # ... rest of poll() unchanged

Perhaps you have a better idea of how it could have got into this knot, and of the best way to test it?

@djay
Author

djay commented Sep 12, 2023

@mcdonc one code path that could perhaps lead to this is

# the user and remove it from the map (to prevent

since connecting == False there as well, there doesn't seem to be a way for it to write data out or close?

EDIT: one scenario could be that the client half-disconnected very quickly, before the dispatcher was set up, so getpeername fails? But somehow the socket can still be written to?

@djay
Author

djay commented Sep 12, 2023

Looks like it is possible for a connection that's been broken before getpeername to then show no error in select, in the case where there is nothing to read (since a read would result in a close): https://stackoverflow.com/questions/13257047/python-select-does-not-catch-broken-socket. Not sure how it has something to write in that case. Maybe a shutdown for read only, done very quickly?

EDIT: https://man7.org/linux/man-pages/man3/getpeername.3p.html

"EINVAL The socket has been shut down." <- so a shutdown for read done very quickly looks like it could create this tight loop.

@djay
Author

djay commented Sep 12, 2023

Or somehow the getpeername is invalid and that results in an OSError, and there is nothing to read but something to write. But I'm not sure whether that results in EINVAL or not.

@d-maurer

d-maurer commented Sep 12, 2023 via email

@d-maurer

d-maurer commented Sep 12, 2023 via email

@djay
Author

djay commented Sep 12, 2023

@d-maurer I'm fairly sure I have one solid explanation of how this could occur.

Outlined in this test - https://github.com/Pylons/waitress/pull/419/files#diff-5938662f28fcbb376792258701d0b6c21ec8a1232dada6ad2ca0ea97d4043d96R775

NOTE: I haven't worked out a way to detect the looping in a test yet. So the assert at the end is not correct.

It is as you say. There is a shutdown of the read side only, but this is a race condition: it has to happen before the dispatcher is created, so right after the connect. I've confirmed this results in getpeername raising OSError EINVAL and thus connected = False, while select still thinks it can write, so the loop will be infinite. Or maybe it lasts until the bad actor properly closes the connection; not sure on that one.

In the long run waitress should likely change its "connected" concept.
HTTP is based on TCP which implements bidirectional communication channels.
The shutdown system call allows applications to shut down individual
directions. This is not compatible with a boolean "connected",
instead we have 4 connection states: (fully) disconnected,
read connected, write connected and (fully) connected.

True, but if I'm right about the cause of this, the socket would never have connected=False with most shutdowns, only when it happens too quickly. That flag is mostly used to indicate not yet connected or in the process of closing.

My favorite workaround (ensure "writable" returns "False" when "not connected")
is so simple that no test is necessary to convince us that
the busy loop is avoided.

Yes, that will also work. I'll switch it to that.
There is a system to remove inactive sockets, so I guess that would get them closed eventually.
I'm not really sure of the pros and cons of leaving sockets open vs. just closing them for this case (I tried closing; it also worked in terms of the tests).
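For reference, the workaround under discussion would look roughly as follows; this is a sketch assuming the waitress 2.1.2 writable() condition, not the final patch:

def writable(self):
    # A channel whose getpeername() failed never finished connecting; don't
    # ask select() to watch it for writes, so the loop cannot spin on it.
    if not self.connected:
        return False
    # original waitress 2.1.2 condition
    return self.total_outbufs_len or self.will_close or self.close_when_flushed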

@djay
Author

djay commented Sep 12, 2023

@d-maurer I pushed new code that uses writable instead.

@d-maurer

d-maurer commented Sep 12, 2023 via email

@djay
Author

djay commented Sep 13, 2023

@d-maurer maybe a core contributor can step in and advise the best solution and test. @digitalresistor @kgaughan ?

@djay djay changed the title from "100% cpu in mainthread due to not closing properly?" to "100% cpu in mainthread due to not closing properly? (channel.connected == False)" Sep 13, 2023
@d-maurer

d-maurer commented Sep 13, 2023 via email

@djay
Author

djay commented Sep 13, 2023

@d-maurer that was my initial thought, but as I pointed out in #418 (comment), recv in wasyncore will do handle_close on getting empty data and take it out of the map, so I couldn't see any way for no bytes being sent to cause this loop.

    def recv(self, buffer_size):
        try:
            data = self.socket.recv(buffer_size)
            if not data:
                # a closed connection is indicated by signaling
                # a read condition, and having recv() return 0.
                self.handle_close()
                return b""
            else:
                return data
        except OSError as why:
            # winsock sometimes raises ENOTCONN
            if why.args[0] in _DISCONNECTED:
                self.handle_close()
                return b""
            else:
                raise

@djay
Author

djay commented Sep 13, 2023

Also, when I did some testing it did seem like select would indicate a write was possible even without the back end producing any data. So no read is needed, just a connect and a very quick shutdown. But I do have to work out a proper test for that.
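That part is easy to confirm in isolation. The following standalone snippet (not waitress code) shows select() reporting a freshly accepted socket as writable while nothing is readable:

import select
import socket

srv = socket.socket()
srv.bind(("127.0.0.1", 0))
srv.listen(1)
cli = socket.create_connection(srv.getsockname())
conn, _ = srv.accept()

r, w, _ = select.select([conn], [conn], [], 0.1)
print(conn in w)  # True: writable even though nothing was ever sent
print(conn in r)  # False: nothing to read

for s in (conn, cli, srv):
    s.close()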

@d-maurer

d-maurer commented Sep 13, 2023 via email

@d-maurer

d-maurer commented Sep 13, 2023 via email

@d-maurer

d-maurer commented Sep 13, 2023 via email

@djay
Author

djay commented Sep 13, 2023

@d-maurer that would be a different bug in waitress.

My problem is that I run out of CPU on my servers if I don't restart them often, due to these weird requests we are receiving that no one else in the world seems to get :(

@d-maurer

d-maurer commented Sep 13, 2023 via email

@d-maurer

d-maurer commented Sep 13, 2023 via email

@d-maurer

d-maurer commented Sep 13, 2023 via email

@d-maurer

"EINVAL The socket has been shut down." <- so looks like shutdown for read very quickly seems possible to create this tight loop.

Note that a socket has 2 ends. "The socket has been shut down" might refer to the local (not the remote) end.

@djay
Author

djay commented Sep 18, 2023

@d-maurer

When the socket buffer is full, select should not signal "ready to write".

But that is what your strace has shown:

  • the connection is not interested in read (i.e. "not readable()")
  • the connection is interested in write (i.e. "writable()")
  • select signals "ready to write"
  • no output is tried

For it to be both not readable and writable, total_outbufs_len must be > 0, i.e. there is more data left to write than has been sent. The only way I can see that happening is one big read of a series of pipelined headers, i.e. separate requests, each of which is "empty". That sets up a situation where self.send_continue() keeps happening over and over, and that seems to be the only way you can send when not self.connected.

    def received(self, data):
        """
        Receives input asynchronously and assigns one or more requests to the
        channel.
        """

        if not data:
            return False

        with self.requests_lock:
            while data:
                if self.request is None:
                    self.request = self.parser_class(self.adj)
                n = self.request.received(data)

                # if there are requests queued, we can not send the continue
                # header yet since the responses need to be kept in order

                if (
                    self.request.expect_continue
                    and self.request.headers_finished
                    and not self.requests
                    and not self.sent_continue
                ):
                    self.send_continue()

                if self.request.completed:
                    # The request (with the body) is ready to use.
                    self.sent_continue = False

                    if not self.request.empty:
                        self.requests.append(self.request)

                        if len(self.requests) == 1:
                            # self.requests was empty before so the main thread
                            # is in charge of starting the task. Otherwise,
                            # service() will add a new task after each request
                            # has been processed
                            self.server.add_task(self)
                    self.request = None

                if n >= len(data):
                    break
                data = data[n:]

        return True

So if those sends don't fail with an OSError but instead only partially send or get EWOULDBLOCK, then some data is left to send, and assuming the requests are finished at that point it could be stuck in the loop.
If, as you say, select won't report a write when the socket send buffer is full, then this all falls apart.

send_continue() is the only way I can see of total_outbufs_len becoming > 0.

Maybe there is some way to achieve this loop by making close_when_flushed == True...

@d-maurer

d-maurer commented Sep 18, 2023 via email

@d-maurer

d-maurer commented Sep 18, 2023 via email

@djay
Author

djay commented Sep 18, 2023

@d-maurer I think I can reproduce it reasonably simply with close_when_flushed.

The request just needs to have an error in it. Then when service() is called:

    def service(self):
        """Execute one request. If there are more, we add another task to the
        server at the end."""

        request = self.requests[0]

        if request.error:
            task = self.error_task_class(self, request)
        else:
            task = self.task_class(self, request)

        try:
            if self.connected:
                task.service()
            else:
                task.close_on_finish = True
        except ClientDisconnected:
...
        if task.close_on_finish:
            with self.requests_lock:
                self.close_when_flushed = True

                for request in self.requests:
                    request.close()
                self.requests = []

We get a new error task that has close_on_finish. That then seems to create a loop by setting close_when_flushed, since it never gets to write that flush due to not being in the connected state.

I've updated my test to show this working

https://github.com/djay/waitress/blob/23e7b05748d1909afd73efb83290274c90b3051d/tests/test_server.py#L439

Not yet sure if I'm calling service at a realistic time or not.
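Condensed, the stuck state comes down to the interplay of two methods already quoted in this thread (simplified from waitress 2.1.2; the test harness in the PR is omitted):

# After the failed accept plus the error task:
#   channel.connected          is False  (getpeername failed in the dispatcher)
#   channel.close_when_flushed is True   (service() set it via close_on_finish)

def writable(self):
    # stays True forever because close_when_flushed is set
    return self.total_outbufs_len or self.will_close or self.close_when_flushed

def handle_write(self):
    if not self.connected:
        # returns immediately: the outbufs are never flushed and handle_close()
        # is never reached, so close_when_flushed never gets cleared
        return

# Every pass through wasyncore.loop()/poll() therefore selects the fd for
# writing, calls handle_write(), does nothing, and repeats: the busy loop.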

@d-maurer

d-maurer commented Sep 18, 2023 via email

@djay
Author

djay commented Sep 18, 2023

@d-maurer

  1. if it is possible in the not connected state
    to service a task sufficiently fast
    that it sets channel.close_when_flushed before
    the next poll, then the state above is entered.

Speed isn't needed. It will sit there waiting, since the request is finished so it's got nothing to write or read. At least if self.adj.channel_request_lookahead == 0. I guess that setting is designed to prevent this case, so I will try setting it.

@digitalresistor
Member

No, channel_request_lookahead is used to allow waitress to try to read more than the first request (normally it would read a request, wait for a response to be created/processed, and only then read the next request, so that it wouldn't read a bunch of data/HTTP-pipelined requests from the wire without being able to respond, for example if the connection was going to be closed early)... this however meant there was no good way to let the app know that the client had disconnected, for long-running requests that wanted to check before they committed data to the database for example.

That is why waitress has a waitress.client_disconnected callable in the environment that allows the app to check if the remote client has disconnected, but that only works if we continue to read() from the socket and thus get notified when the remote hangs up.
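For illustration, this is the shape of that callable from the application side (the app and the long-running work are hypothetical; the callable is only populated when channel_request_lookahead is greater than 0):

import time
from waitress import serve

def app(environ, start_response):
    # Zero-argument callable provided by waitress; returns True once waitress
    # has noticed that the remote client hung up.
    client_disconnected = environ.get("waitress.client_disconnected", lambda: False)

    time.sleep(10)  # stand-in for long-running work

    if client_disconnected():
        # The client went away mid-request: skip committing anything for it.
        start_response("500 Internal Server Error", [("Content-Type", "text/plain")])
        return [b"client disconnected"]

    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"done"]

# channel_request_lookahead > 0 keeps waitress reading from the socket while a
# request is being serviced, which is what lets it notice the hang-up.
serve(app, listen="127.0.0.1:8080", channel_request_lookahead=5)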

@djay
Author

djay commented Sep 18, 2023

@d-maurer maybe, but it will result in the socket being closed, since it will allow it to read the RST. That's also how it works in the test. I'll try it in production.

But this is still a bug. I've put in two fixes.

    def handle_write(self):
        # Precondition: there's data in the out buffer to be sent, or
        # there's a pending will_close request

        if not self.connected and not self.close_when_flushed:
            # we dont want to close the channel twice
            return

    def __init__(self, server, sock, addr, adj, map=None):
        self.server = server
        self.adj = adj
        self.outbufs = [OverflowableBuffer(adj.outbuf_overflow)]
        self.creation_time = self.last_activity = time.time()
        self.sendbuf_len = sock.getsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF)

        # requests_lock used to push/pop requests and modify the request that is
        # currently being created
        self.requests_lock = threading.Lock()
        # outbuf_lock used to access any outbuf (expected to use an RLock)
        self.outbuf_lock = threading.Condition()

        wasyncore.dispatcher.__init__(self, sock, map=map)
        if not self.connected:
            # Sometimes can be closed quickly and getpeername fails.
            self.handle_close()

        # Don't let wasyncore.dispatcher throttle self.addr on us.
        self.addr = addr
        self.requests = []

@djay
Author

djay commented Sep 20, 2023

@d-maurer I've also worked out why maintenance never closes the connection, and put in a fix for that too.
Unfortunately there doesn't seem to be a test for this close-called-twice case, so it's still not clear why the line that caused the loop is there in the first place, and whether it isn't better just to get rid of it.

@djay
Author

djay commented Sep 20, 2023

Well this is at least the commit that put this in
9a7d2e5

@d-maurer

d-maurer commented Sep 20, 2023 via email

@d-maurer

d-maurer commented Sep 20, 2023 via email

@d-maurer

d-maurer commented Sep 20, 2023 via email

@djay
Author

djay commented Oct 20, 2023

The current PR has been running in production for a couple of weeks without the bug recurring, so that at least confirms the source of the issue.
But I will still rewrite the PR with the changes mentioned above.

@digitalresistor
Member

@djay I have not been able to reproduce the original error with the current released version of waitress. It is somewhat frustrating that I can't reproduce it.

@djay
Author

djay commented Feb 5, 2024

@digitalresistor I merged in master, reverted my fixes and uncommented the test-case code that reproduces it, and it showed that it still went into a loop. How did you try to reproduce it?

@digitalresistor
Member

@djay I have not been using the test code you created. I wanted to reproduce it without trying to set up the conditions to match exactly, or forcing a socket into a particular state. I have been testing with waitress running on a RHEL 8.9 system, and I have been unable to get getpeername() to fail under those conditions, even while trying the half-shutdown and stress-testing waitress at the same time. I have not manually manipulated SO_LINGER to set up the conditions as specified, just used the defaults from the OS.

I also would expect to see far more instances of this error happening out in the wild; waitress is used extensively, and I have yet to see it happen on any of my applications that are running in production.

I don't disagree that there is an issue and we can solve it, but I find it interesting that it is not easily reproducible on the test systems I have.

The current PR does the closing much later. I'd much rather see it handled when getpeername() fails, before we even attempt to do anything else with the socket or start setting up a channel; if we can avoid doing extra work, let's avoid doing extra work.

Then we can rework the logic for writable as well to make sure that we can't busy-loop there.
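One way to read that suggestion, sketched against the existing handle_accept() in waitress's server (method names follow the current code, but the exact shape of the check is an assumption; PR #435 instead drops getpeername() entirely):

def handle_accept(self):
    try:
        v = self.accept()
        if v is None:
            return
        conn, addr = v
    except OSError:
        # Rarely accept() hands back a bogus socket; nothing more to do.
        return
    try:
        # Probe the socket once, up front, before any channel work happens.
        conn.getpeername()
    except OSError:
        # The remote end is already gone: drop the socket here instead of
        # building an HTTPChannel that can never be written to or closed.
        conn.close()
        return
    self.set_socket_options(conn)
    self.channel_class(self, conn, addr, self.adj, map=self._map)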

@d-maurer

d-maurer commented Feb 5, 2024 via email

@djay
Author

djay commented Feb 5, 2024

@digitalresistor The circumstances are very particular. It was something we were getting from pentests only, I believe.
An invalid request is sent and immediately closed by the client before the channel can be created. It would not be easy to reproduce, as you would need the invalid request and a server loaded enough that it isn't too fast at creating the channel.

The current PR does the closing much later. I'd much rather see it handled when getpeername() fails, before we even attempt to do anything else with the socket or start setting up a channel; if we can avoid doing extra work, let's avoid doing extra work.

This part ensures it's closed if getpeername fails:

https://github.com/Pylons/waitress/pull/419/files#diff-5cb215c6142fa7daad673c77fcb7f3bc0a0630e18e3487a5ac0894e90f1b2071R71

And I added some other tests to show how the other fix prevents any looping if, for any other reason, channel.connected becomes False and we have an error in the request, even though that should not happen.

@digitalresistor
Member

@djay I will take another look at this over the next week or so. I appreciate the follow-up!

@djay
Author

djay commented Feb 13, 2024

@digitalresistor I wouldn't think about it too long. Basically I've given a recipe to DDoS any waitress site: lock up one thread with a single request. This would happen every Sunday with this repeated pentest. Once I deployed the fix it never happened again. Leaving a bug like this in doesn't seem right to me.

digitalresistor added a commit that referenced this issue Mar 3, 2024
No longer call getpeername() on the remote socket either, as it is not
necessary for any of the places where waitress requires self.addr in a
subclass of the dispatcher.

This removes a race condition when setting up an HTTPChannel where we
accepted the socket and know the remote address, yet called getpeername()
again, which would have the unintended side effect of potentially setting
self.connected to False because the remote has already shut down part of
the socket.

This issue was uncovered in #418, where the server would go into a hard
loop because self.connected was used in various parts of the code base.

@digitalresistor
Member

I've created a new PR that removes getpeername() entirely, since it is not useful in the context of waitress, and cleans up some code related to self.connected, since calling self.handle_close() multiple times is not an issue.

Could you drop this into your system instead and check whether you still see the same issue: #435

@djay
Author

djay commented Mar 4, 2024

@digitalresistor I can.
But are you sure that there is no other scenario where self.connected is False, so it goes into a loop and can't ever get closed down?
i.e. this line is still problematic, I think: https://github.com/Pylons/waitress/pull/419/files#diff-5cb215c6142fa7daad673c77fcb7f3bc0a0630e18e3487a5ac0894e90f1b2071L95

@djay
Author

djay commented Mar 4, 2024

@digitalresistor sorry, I see you removed this also. OK, I will test this.

@digitalresistor
Member

@djay do you have any feedback on the newly proposed changes?

@djay
Author

djay commented Mar 22, 2024

@digitalresistor sorry for the delay on this. It was deployed earlier this week, but the weekly pentest only happens once a week on a Sunday, so by Monday we should know.

@djay
Author

djay commented Apr 3, 2024

@digitalresistor seems to be running fine in prod with what we presume are the same pentests being thrown at it.
