So, a quick question: what sorts of HTTP transfers is chunking most often used for? I believe we will get poor results with the method for most types of binary data, which tend to be the larger files. In the web context these will generally either not have changed at all (in which case traditional caching will help) or have changed completely (in which case the hashing is just overhead). Happy to be corrected on this point.

Actually, while we are on this thought: do we want to add the strong hash to the request headers, so the upstream server can reply with "use the cached version"? This would allow the server side to compensate for sites that don't send correct cache headers (e.g. static images with no cache information).
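
Something like the sketch below is what I have in mind: the client volunteers the strong hash of the copy it holds, and a hash-aware server can answer 304 even when the origin response carried no cache information. The header name X-Strong-Hash and the choice of SHA-1 are placeholders, not a proposal; semantically this is much the same as If-None-Match with a content-derived ETag.

    import hashlib
    import http.client

    # Client side: offer the strong hash of the body we already hold.
    # "X-Strong-Hash" is an invented header name for illustration only.
    cached_body = open("cache/image.png", "rb").read()
    digest = hashlib.sha1(cached_body).hexdigest()

    conn = http.client.HTTPConnection("example.org")
    conn.request("GET", "/image.png", headers={"X-Strong-Hash": digest})
    resp = conn.getresponse()
    if resp.status == 304:
        body = cached_body      # server confirmed our copy is current
    else:
        body = resp.read()      # fresh (or delta-encoded) response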

One alternative to failing on error is to hold a copy on the server end for a short period so we can retransmit unencoded, but this is probably unacceptable overhead on the server side, especially if we can't manage to maintain a TCP session for the retry.
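
For concreteness, the server-side store would amount to something like this rough sketch; the five-second TTL is a guess at what might be tolerable, and even this much state per response may be what kills the idea.

    import time

    RETRY_TTL = 5.0        # seconds to keep the unencoded copy; a guess

    _pending = {}          # transaction id -> (expiry deadline, body)

    def stash(txn_id, body):
        # Called after sending an encoded response, in case the client
        # fails to decode it and asks for a plain retransmit.
        _pending[txn_id] = (time.monotonic() + RETRY_TTL, body)

    def retrieve(txn_id):
        # Evict anything past its deadline, then look up the request.
        now = time.monotonic()
        for key in [k for k, (dl, _) in _pending.items() if dl < now]:
            del _pending[key]
        entry = _pending.get(txn_id)
        return entry[1] if entry else None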

Are there any headers sent with each HTTP chunk? We could always put our strong hash across these, assuming that chunking is defined at the source and not repartitioned by caches and proxies in between.

Toby
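
P.S. For what it's worth, RFC 2616 does leave room for this: each chunk-size line may carry ";name=value" chunk-extensions, and the message may end with trailer headers. A minimal sketch of tagging each chunk with its hash, assuming a made-up "hash" extension name (and noting that chunked encoding is hop-by-hop, so a re-chunking proxy would destroy the tags):

    import hashlib

    def chunked_with_hash(chunks):
        # Emit an HTTP/1.1 chunked body, tagging each chunk-size line
        # with a ";hash=..." chunk-extension carrying the chunk's MD5.
        # The extension name is invented; intermediaries are free to
        # re-chunk or drop extensions they don't understand.
        out = b""
        for chunk in chunks:
            digest = hashlib.md5(chunk).hexdigest().encode()
            out += b"%x;hash=%s\r\n" % (len(chunk), digest)
            out += chunk + b"\r\n"
        out += b"0\r\n\r\n"   # terminating chunk; trailers would go here
        return out

    print(chunked_with_hash([b"hello ", b"world"]).decode())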