[Http-crcsync] Apache proxy CRCsync & mozilla gsoc project?

WULMS Alexander Alex.WULMS at swift.com
Thu Apr 2 03:36:49 EDT 2009


Chunking (when using chunked transfer encoding) can change at each intermediate HTTP proxy. It is basically just a way to transfer the
HTTP body between a client and a server. Keep in mind that a proxy acts both as a client (to the upstream server) and as a server
(to the downstream client). So a proxy could, for example, cache the entire HTTP body from the upstream server and then send it to
its own downstream client in one shot. Or it could use a smaller internal buffer than its upstream server and hence split the
HTTP body into smaller chunks when sending it to its client.
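
To make that re-chunking concrete, here is a minimal sketch in Python (illustrative only, not taken from the CRCsync or Apache proxy code) of a proxy decoding the upstream chunking and re-emitting the body with its own, smaller chunk size. Only the reassembled byte stream is preserved, not the chunk boundaries, so anything computed over individual chunks is not guaranteed to survive such a hop.

def encode_chunked(body, chunk_size):
    """Frame 'body' using HTTP/1.1 chunked transfer encoding."""
    out = bytearray()
    for i in range(0, len(body), chunk_size):
        piece = body[i:i + chunk_size]
        out += b"%X\r\n" % len(piece)      # chunk-size in hex
        out += piece + b"\r\n"             # chunk-data
    out += b"0\r\n\r\n"                    # last-chunk, empty trailer
    return bytes(out)

def decode_chunked(stream):
    """Reassemble the original body from a chunked stream."""
    body, pos = bytearray(), 0
    while True:
        eol = stream.index(b"\r\n", pos)
        size = int(stream[pos:eol].split(b";")[0], 16)   # ignore any chunk extensions
        if size == 0:
            return bytes(body)
        body += stream[eol + 2:eol + 2 + size]
        pos = eol + 2 + size + 2           # skip chunk-data and its trailing CRLF

if __name__ == "__main__":
    original = b"x" * 10000
    from_upstream = encode_chunked(original, 4096)                   # server's chunking
    to_client = encode_chunked(decode_chunked(from_upstream), 1024)  # proxy re-chunks
    assert decode_chunked(to_client) == original                     # same body, different framing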


--------
Alex WULMS
Lead Developer/Systems Engineer

Tel: +32 2 655 3931

Information Systems - SWIFT.COM Development
S.W.I.F.T. SCRL

From: http-crcsync-bounces at lists.laptop.org [mailto:http-crcsync-bounces at lists.laptop.org] On Behalf Of Toby Collett
Sent: Wednesday, April 01, 2009 8:18 PM
To: WULMS Alexander
Cc: tridge at samba.org; jg at freedesktop.org; http-crcsync at lists.laptop.org; angxia Huang; XS Devel
Subject: Re: [Http-crcsync] Apache proxy CRCsync & mozilla gsoc project?

 

So a quick question: what sort of HTTP transfers is chunking most often used for? I believe we will get poor results with this
method for most types of binary data, which tend to be the larger files. In the web context these will generally either not have
changed at all (in which case traditional caching will help) or will have changed completely, in which case the hashing is just
overhead. Happy to be corrected on this point.

Actually, while we are on this thought: do we want to add the strong hash to the request headers so the upstream server can reply
with "use the cached version"? That would let the server side compensate for sites that don't set correct cache headers (e.g. static
images with no cache information).
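
As a purely illustrative sketch of that idea (the X-CRCSync-Hash header name and the reuse of a 304 status below are assumptions, not anything agreed for the protocol):

import hashlib

def request_headers_for(cached_body):
    # Client/proxy advertises a strong hash of the copy it already holds.
    return {"X-CRCSync-Hash": hashlib.sha1(cached_body).hexdigest()}

def server_response(current_body, req_headers):
    # The server can short-circuit even when the original response carried no
    # usable cache headers: if the hashes match, tell the client to reuse its
    # cached copy instead of sending (or delta-encoding) the body.
    if req_headers.get("X-CRCSync-Hash") == hashlib.sha1(current_body).hexdigest():
        return 304, b""
    return 200, current_body

if __name__ == "__main__":
    body = b"static image bytes that never change"
    status, payload = server_response(body, request_headers_for(body))
    assert status == 304 and payload == b""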

One alternative to failing on error is to hold a copy on the server end for a short period so we can retransmit unencoded, but that
is probably unacceptable overhead on the server side, especially if we can't manage to maintain a TCP session for the retry.

Are there any headers sent with each HTTP chunk? We could always put our strong hash across those, assuming that chunking is defined
at the source and not repartitioned by caches and proxies in between.
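
For reference, HTTP/1.1 (RFC 2616, section 3.6.1) does allow optional "chunk extensions" after each chunk size and trailer headers after the last chunk, so a per-chunk hash could in principle ride there; the sketch below shows the framing (the x-hash extension name is hypothetical). But as noted above, an intermediary that buffers and re-chunks the body is under no obligation to preserve either the boundaries or the extensions.

import hashlib

def chunk_with_hash(piece):
    # One chunk framed as: chunk-size ";" ext-name "=" ext-value CRLF chunk-data CRLF
    digest = hashlib.sha1(piece).hexdigest().encode()
    return b"%X;x-hash=%s\r\n%s\r\n" % (len(piece), digest, piece)

print(chunk_with_hash(b"hello world"))
# b'B;x-hash=2aae6c35c94fcfb415dbe95f408b9ce91ee846ed\r\nhello world\r\n'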

Toby
