
How to configure squid server to operate with bitz-server properly. #9

iradization opened this issue Dec 24, 2018 · 16 comments

iradization commented Dec 24, 2018

Hi,

I'd like to configure bitz-server with the Squid proxy.
Squid needs an ICAP service entry for response modification, like in the following example:

icap_service service_resp respmod_precache bypass=0 icap://192.168.50.1:1344/<method_name>

Does anybody know what to put in place of <method_name>?

thanks

gr3m1in commented Dec 24, 2018 via email

iradization (Author) commented Dec 24, 2018

@gr3m1in, just to make sure I understand: according to the configuration file (bitz-server.conf), the modifier is named modpy, right?

However, it doesn't seem to matter what I put for <method_name> in the Squid configuration file; I keep getting the response handler from modpy.

gr3m1in commented Dec 24, 2018

Not really. The modpy module provides Python support for implementing the actual content modification code, and I'd guess the response you are getting from modpy says something about an invalid method.
Modifications may be applied to requests from client to server (such as changes to the request URI or request headers) and to responses from server to client. Both are handled by the mod_py.so module, using the corresponding class (ReqmodRequestHandler or RespmodRequestHandler). Both of these definitions have handler names, REQMOD and RESPMOD by default. See lines 24 and 35 of the default config: https://github.com/uditha-atukorala/bitz-server/blob/master/conf/bitz-server.conf.in
So Squid's configuration should contain something like this:
icap_service whatever_params_here icap://127.0.0.1:1344/REQMOD
icap_service whatever_params_here icap://127.0.0.1:1344/RESPMOD

iradization (Author) commented Dec 25, 2018

@gr3m1in, so after setting the Squid configuration according to your guidelines it seems to be working, and I could use a pass-through modpy.py module which forwards the request exactly as it is:

resp_payload = {};
resp_payload['req_header'] = req_payload['req_header'];
resp_payload['req_body'] = req_payload['req_body'];
resp_payload['res_header'] = req_payload['res_header'];
resp_payload['res_body'] = req_payload['res_body'];
resp_payload['ieof'] = True;
response = bitz.get_response( 200, resp_payload );

However, it seems the Squid server treats this response as erroneous, and the reason is that we send the request headers in the response, as done in icap::util::send_response:

if ( response->payload().req_header.size() > 0 ) { send_data( response->payload().req_header, socket ); }

After commenting out the send_data line above, it works and I can finally see the file received on the client side.

Now I see that large files (>10 kB) are not returned in full, and I get the following message from curl on the client side: "curl: (18) transfer closed with XXX bytes remaining to read".

While running gdb on the bitz worker process, I can clearly see that it sends the entire file in send_chunked, but the connection somehow stops in the middle...

I should say that it's not likely a network issue, since without ICAP everything works just fine... perhaps there's a timeout that I need to re-configure somewhere?

BTW, I also tried running with the --debug option and saw no sign of the connection being closed.

thanks !

gr3m1in commented Dec 25, 2018

I'm just guessing right now, but it looks to me like incorrect behaviour around the ICAP "preview" feature...
http://www.squid-cache.org/Doc/config/icap_preview_enable/
I have no idea whether this feature is supported by bitz-server (and supported correctly?); there may also be some Squid-specific configuration magic...
Try turning off the preview feature to figure out whether it is this feature that causes the problem.
However, keep in mind that completely turning off preview under real load is not recommended, because you lose the ability to partially inspect incoming data when deciding whether a whole, possibly huge, payload should be passed through the ICAP service.

uatuko (Owner) commented Dec 25, 2018

There is a timeout which you can disable: conf/bitz-server.conf.in#L18.

I see the issue with the headers; the modpy and echo modules don't check the ICAP request method and return all the headers (which doesn't work for RESPMOD). I'll raise an issue to get it fixed.
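For illustration, a rough sketch of that check (hypothetical names, not the actual bitz-server module code): build the response payload based on the ICAP request method, so a RESPMOD reply only carries the res-hdr / res-body sections.

    // Illustrative sketch only; `method` and `request` are placeholder names.
    icap::payload_t resp_payload;
    if ( method == "REQMOD" ) {
        resp_payload.req_header = request.payload().req_header;
        resp_payload.req_body   = request.payload().req_body;
    } else {    // RESPMOD
        resp_payload.res_header = request.payload().res_header;
        resp_payload.res_body   = request.payload().res_body;
    }
    resp_payload.ieof = true;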

uatuko (Owner) commented Dec 25, 2018

I've released v2.0.2 which should fix the header issue (#10). @iradization, could you please give it a try?

iradization (Author) commented Dec 26, 2018

Hi @uditha-atukorala ,

I've tried running with your latest fix; however, I get an erroneous packet in my Squid server. Looking at the Squid logs, it seems Squid receives the packet without res-hdr, only res-body.

Here are the logs from Squid:

good packet (for comparison):

1545752068.321 10 localhost ICAP_MOD/200 431 RESPMOD icap://192.168.50.103:1344/RESPMOD - -/192.168.50.103 Connection:%20close%0D%0ADate:%20Tue%20Dec%2025%2007:34:28%202018%20PST%0D%0AISTag:%20BITZ-1545752068-149833%0D%0AServer:%20bitz-server%202.0.0%0D%0AEncapsulated:%20res-body=354,%20res-hdr=128%0D%0A 247 -

bad packet:

1545818659.461 34883 localhost ICAP_ERR/200 417 RESPMOD icap://192.168.50.103:1344/RESPMOD - -/192.168.50.103 Connection:%20close%0D%0ADate:%20Wed%20Dec%2026%2002:03:44%202018%20PST%0D%0AISTag:%20BITZ-1545818624-54933%0D%0AServer:%20bitz-server%202.0.0%0D%0AEncapsulated:%20res-body=226%0D%0A - -

Trying to debug a little further, I've noticed that in icap::util::send_response the response header lacks the "res-hdr" part (see below):

(gdb) p *response._header
$64 = {icap::Header = {_vptr.Header = 0x7f41f0f6dce8 <vtable for icap::ResponseHeader+16>, _headers = std::map with 4 elements = {["Connection"] = "close", ["Date"] = "Wed Dec 26 03:22:14 2018 PST",
["ISTag"] = "BITZ-1545823334-15823", ["Server"] = "bitz-server 2.0.0"}, _encapsulated = std::map with 6 elements = {["null-body"] = -1, ["opt-body"] = -1, ["req-body"] = -1, ["req-hdr"] = -1,
["res-body"] = 226, ["res-hdr"] = 0}}, _response = {protocol = "ICAP/1.0", status = icap::ResponseHeader::OK}}

still trying to pinpoint the problem.

iradization (Author) commented Dec 26, 2018

@uditha-atukorala, I think the problem might be in the following method:
void Header::update_encapsulated( const payload_t &payload )

It looks like the data_length variable is not initialised properly. For example, when we have only res_body and res_header, you can see that _encapsulated["res-hdr"] is set to 0 instead of the real section length.

Let me know what you think.

thanks !

uatuko (Owner) commented Dec 27, 2018

@iradization, thanks for looking into this.

It looks like the data_length variable is not initialised properly. For example, when we have only res_body and res_header, you can see that _encapsulated["res-hdr"] is set to 0 instead of the real section length.

res-hdr is not the length of the header but its starting offset, so 0 is correct if it is the first section.

However, I did a few quick tests (using test/icap-client.py) and it seems the Encapsulated header isn't getting set correctly (e.g. I got Encapsulated: res-body=159 instead of Encapsulated: res-hdr=0, res-body=159). I'll take a look.
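To illustrate the offset semantics (a hedged sketch only, not the actual update_encapsulated code): each entry in the Encapsulated header is the byte offset where that section starts within the encapsulated body, accumulated over the preceding sections.

    #include <cstddef>
    #include <map>
    #include <string>

    // Illustrative only: Encapsulated values are start offsets, not lengths.
    // For a RESPMOD response with both sections this yields
    // "Encapsulated: res-hdr=0, res-body=<length of res_header>".
    std::map<std::string, std::size_t> encapsulated_offsets( const std::string &res_header,
            const std::string &res_body ) {
        std::map<std::string, std::size_t> enc;
        std::size_t offset = 0;
        if ( ! res_header.empty() ) {
            enc["res-hdr"] = offset;       // first section starts at 0
            offset += res_header.size();
        }
        if ( ! res_body.empty() ) {
            enc["res-body"] = offset;      // starts right after res-hdr
        }
        return enc;                        // e.g. res-hdr=0, res-body=159
    }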

uatuko (Owner) commented Dec 27, 2018

Released v2.0.3, should fix the encapsulated header issue (#12).

iradization (Author) commented Dec 30, 2018

@uditha-atukorala, I'm happy to report that after the header parts were sorted out, the problem is indeed resolved 👍

However, I tried setting the timeout in the configuration file as you suggested, and I still see the stream stopping abruptly when trying to download large files, bigger than ICAP_BUFFER_SIZE.

This problem seems to be connected to the way send_line works when sending the file in chunks from the send_chunked method in lib/icap/util.cpp.

I suspect that the \r\n suffix should be added after both the chunk size and the chunk data.

When I switched:

if (! send_line( dectohex( chunked_data.size() ), socket ) ) {
    return false;
}

if (! send_data( chunked_data, socket ) ) {
      return false;
}

with:

std::string toWrite = dectohex( chunked_data.size());
toWrite += "\r\n";
toWrite += chunked_data;
toWrite += "\r\n";
socket->write( toWrite.c_str(), toWrite.size() );

it seems to do the job.

uatuko (Owner) commented Dec 30, 2018

@iradization, you are right. There needs to be a \r\n at the end of each chunk.

e.g.

1e\r\n
I am posting this information.\r\n
0\r\n
\r\n

However, your change will also need to make sure the end of chunk is sent correctly (lib/icap/util.cpp#L383). I think it would be cleaner to keep the send_line, append \r\n after send_data (or have an extra send_data for the \r\n), and drop the first \r\n from the end-of-chunk sequence.

I'm happy to accept a pull request if you want to contribute your changes.
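For illustration only (the helper signatures follow the snippets above, not the actual lib/icap/util.cpp declarations), each chunk would then go out as the size line, the data, and a trailing \r\n, with a zero-size chunk terminating the body:

    // Hedged sketch of the approach described above. send_line() is assumed to
    // append "\r\n" to whatever it writes; send_data() writes the bytes as-is.
    bool send_chunk( const std::string &chunk, socket_t * socket ) {
        if ( ! send_line( dectohex( chunk.size() ), socket ) ) {   // e.g. "1e\r\n"
            return false;
        }
        if ( ! send_data( chunk, socket ) ) {                      // chunk payload
            return false;
        }
        return send_data( "\r\n", socket );                        // chunk terminator
    }

With this in place, the end-of-chunk sequence sent after the loop would only need the final "0\r\n\r\n" rather than starting with an extra \r\n.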

gr3m1in commented Apr 23, 2019

I'd also propose the following fix for the very last chunk when its size is less than ICAP_BUFFER_SIZE, since the division here acts as floor() and that last chunk is not being sent to the client.

--- a/lib/icap/util.cpp 2019-04-23 18:11:53.756411872 +0200
+++ b/lib/icap/util.cpp 2019-04-23 18:35:11.116083749 +0200
@@ -346,7 +346,7 @@

                        // calculate the number of chunks we need
                        if ( data.size() > ICAP_BUFFER_SIZE ) {
-                               chunks = ( data.size() / ICAP_BUFFER_SIZE );
+                               chunks = ( data.size() / ICAP_BUFFER_SIZE ) + 1;
                        }

                        try {

uatuko (Owner) commented Apr 23, 2019

@gr3m1in, thanks for the proposed fix. I'm happy to accept a pull request if you are up for it.

(I'd also initialise chunks with 1 instead of 0 for completeness)
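For illustration, a sketch of the combined calculation (mirroring the variables in the diff above, not the actual util.cpp code):

    // Start from one chunk, and count the data beyond the first buffer.
    std::size_t chunks = 1;
    if ( data.size() > ICAP_BUFFER_SIZE ) {
        chunks = ( data.size() / ICAP_BUFFER_SIZE ) + 1;
    }

(When data.size() is an exact multiple of ICAP_BUFFER_SIZE the + 1 counts one extra, empty chunk; a ceiling division such as ( data.size() + ICAP_BUFFER_SIZE - 1 ) / ICAP_BUFFER_SIZE avoids that, depending on how the send loop slices the data.)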

gr3m1in commented Apr 24, 2019

@uditha-atukorala, of course. I'll make a couple of pull requests with my changes later, once I get my setup up and running, because some of the fixes may change in the meantime.
