minor issue, as the memory is only created at shutdown; we just prevent the new details from being queued.
chunked responses are now allowed but can be disabled via _hdr=8. It also seems that some players have trouble even when HTTP/1.1 is supplied, but these tend to send the icy-metadata header, so we also use that to disable chunked. keep-alive is only enabled for 1.1 connections; clients are reset and re-added onto a worker for subsequent processing. A couple of the responses can be 1.1. This should only really be a factor for web-style requests, as they tend to blast a series of requests to avoid the accept overhead.
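A minimal sketch of the response-mode decision described above; the struct and function names are illustrative assumptions, not the actual icecast-kh code:

```c
#include <assert.h>

#define HDR_DISABLE_CHUNKED 8   /* assumed meaning of _hdr=8 */

typedef struct {
    int http_minor;        /* 0 for HTTP/1.0, 1 for HTTP/1.1 */
    int hdr_flags;         /* from the _hdr= query parameter */
    int sent_icy_metadata; /* client sent an icy-metadata header */
} client_req;

/* chunked only for 1.1 clients that have not opted out and do not
 * look like a shoutcast-style player (icy-metadata present). */
static int use_chunked (const client_req *r)
{
    if (r->http_minor < 1) return 0;
    if (r->hdr_flags & HDR_DISABLE_CHUNKED) return 0;
    if (r->sent_icy_metadata) return 0;
    return 1;
}

/* keep-alive is only considered for 1.1 connections */
static int use_keepalive (const client_req *r)
{
    return r->http_minor >= 1;
}
```

The icy-metadata check is the heuristic mentioned above: those players identify themselves by the header even when they claim 1.1 support.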
… account for a larger block default
normally the return code is 0 for the mutex lock case (or it blocks); any other code should indicate a significant problem that needs looking into and verifying.
…tuck thinking nothing to send
certain players, like the iphone, issue range requests but react poorly if a 206 response code is returned. To get around this we return a 200 response code (like we did until recently) but also impose a very short termination trigger. There is also a tie-in with handling requests using the HEAD method, as these are used to get information but not return content. A further issue with these short-duration requests is how they react when auth is in use; we allow them to bypass the auth trigger but leave the flag disabled, allowing the backend file and stream routines to detect and drop them. There are a few changes here just because the discon time is made a union, to also allow for an end byte position.
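A sketch of the union mentioned at the end: the same slot holds either a disconnect time (the normal case) or an end byte position for the range-style requests answered with 200. Field and flag names are assumptions, not the actual source layout:

```c
#include <assert.h>
#include <stdint.h>
#include <time.h>

#define CLIENT_RANGE_END 1        /* flag: union holds an end byte position */

typedef struct {
    unsigned int flags;
    union {
        time_t   discon_time;     /* when to drop the client */
        uint64_t range_end;       /* last byte to send before dropping */
    } limit;
} client_limit;
```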
This makes the default queue block size slightly larger (settable with qblock-size) but allows the writer routines to have their own limit, which is about 1400 by default. This reduces the internal admin overhead on large queues but still imposes a packet size limit on sends to the listeners. The only external difference is that instead of packets of N frames going to the listener, you will get something in the order of 1400 bytes (near the common MTU). The new setting is max-send-size and is only required if you want a write size other than 1400, but you will only ever get up to the qblock-size, which is another reason it was increased (currently about 2900). There were a couple of other issues identified which were fixed but are, strictly speaking, separate from the block sizing. One is the periodic update of the full metadata; it did not actually break the spec but was caused by a state setting if an EAGAIN was returned at a specific point. I doubt that anyone noticed, as it is rare and would not generally be seen. The other aspect was that the requeue was being triggered in cases where it should not have been. I suspect this could impose some performance penalty on certain setups and incorrectly trigger the listener to skip data.
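The split between block size and write size can be sketched as below; this is an illustrative helper under assumed defaults (~2900 qblock, ~1400 send cap), not the actual writer code:

```c
#include <assert.h>
#include <stddef.h>

#define DEFAULT_MAX_SEND 1400   /* assumed max-send-size default, near MTU */

/* how much of the remaining queue block to pass to one send call;
 * a ~2900 byte block would go out as 1400 + 1400 + 100 */
static size_t send_amount (size_t block_remaining, size_t max_send)
{
    if (max_send == 0)
        max_send = DEFAULT_MAX_SEND;
    return block_remaining < max_send ? block_remaining : max_send;
}
```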
…ustment. To allow for site-specific concerns, allow a target for block sizing on non-ogg streams. This acts as a minimum, as the blocks contain whole frames; we may want, say, 500 byte blocks but some mp3 frames can be 1k.
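The minimum-not-maximum behaviour can be sketched as follows: frames are accumulated whole until the target is reached, so a block may overshoot it (a ~1k mp3 frame into a 500 byte target). Illustrative code, not the actual source:

```c
#include <assert.h>
#include <stddef.h>

/* accumulate whole frames until at least `target` bytes are queued;
 * returns the block size, sets *used to the number of frames taken */
static size_t fill_block (const size_t *frame_sizes, size_t nframes,
                          size_t target, size_t *used)
{
    size_t total = 0, i = 0;
    while (i < nframes && total < target)
        total += frame_sizes[i++];
    *used = i;
    return total;   /* may exceed target: frames are never split */
}
```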
…matic queue block size adjustment. When dealing with very high bitrate streams, 1mbit+, certain internal limits prevent sufficient bitrate from being maintained, most typically because information about the incoming rate is not available to certain areas without placing searches in hot paths. The most obvious trigger here is the incoming bitrate stat, which is maintained in the source structure but is not available to the backend writers. Use a per-client adjustment value, passed from the source to the listener, to help determine a more appropriate retry time based on the incoming bitrate. Provide an extra reschedule hook in the worker. We still have the normal schedule timers but now also provide an optional flag that the listener can use to refer to a source flag; this means that the source, when it has processed something, can implicitly trigger any waiting (non-lagging) listeners to process without them waiting until some scheduled point. This is currently used for listeners who are at the front of the queue and so are just guessing when to reschedule. Because there are limiters in the send routines to prevent excessive amounts being sent or received by one client, these together with anti-busy-loop triggers can limit the max throughput of a client. While per-client triggers are scalable, the second element is the block size read/sent. We now scale blocks based on the incoming bitrate to at most ~9k instead of the 1.5k that is fixed or set by the qblock-size xml setting.
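The block-size scaling can be sketched as below. The function name and the "roughly an eighth of a second of data per block" factor are assumptions for illustration; only the ~9k ceiling and the qblock floor come from the text above:

```c
#include <assert.h>

#define QBLOCK_DEFAULT 2900   /* assumed current qblock-size default */
#define QBLOCK_MAX     9000   /* the ~9k ceiling mentioned above */

/* bitrate in bits/sec; aim for roughly 125ms of stream data per block,
 * clamped between the configured qblock size and the ceiling */
static unsigned block_size_for_bitrate (unsigned bitrate, unsigned qblock)
{
    unsigned target = (bitrate / 8) / 8;   /* bytes for ~125ms */
    if (target < qblock)
        target = qblock;
    if (target > QBLOCK_MAX)
        target = QBLOCK_MAX;
    return target;
}
```

A 128kbit stream stays at the qblock floor, while a 1mbit+ stream hits the 9k ceiling, which is the high-bitrate case that motivated the change.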
It only shows on later glibc and really means a multiple unlock, which is undefined per spec. the free-tree routine which is clearing out any virtual source stats was not taking a lock on each node before releasing it, so add a wrapper for each case; no clients are present at the point this is run.
seems that some installs of mime.types miss the audio/aac type, so with a small rearrangement we load up the internal default types and then override them with the ones defined in the mime.types file. This is consistent with windows as well.
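The ordering change can be sketched with a simple last-match lookup: defaults are installed first, file entries appended later win, and a type the file is missing (like audio/aac) falls back to the default. Illustrative code only:

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

typedef struct { const char *ext; const char *type; } mime_entry;

/* later entries (appended from mime.types) override earlier defaults */
static const char *mime_lookup (const mime_entry *tab, int n, const char *ext)
{
    const char *found = NULL;
    for (int i = 0; i < n; i++)
        if (strcmp (tab[i].ext, ext) == 0)
            found = tab[i].type;
    return found;
}
```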
double unlock causing thread routines to abort process