[Server-devel] Apache proxy CRCsync & mozilla gsoc project?

Martin Langhoff martin.langhoff at gmail.com
Wed Apr 1 05:10:37 EDT 2009


On Wed, Apr 1, 2009 at 8:29 AM, Rusty Russell <rusty at rustcorp.com.au> wrote:
> Yes, we need to chunk, because we can't hand the data on to the client until
> we've verified it, at least in a serious implementation.

Hmmm. If I understand you right, the concern is that the rolling hash
can match at locations that aren't true matches (false positives).

Can we do anything that is still efficient and retains the ability to
stream? Maybe the client could send two hashes in the header, with the
same block size but different seeds?
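
To make that concrete, here's a rough Python sketch of the two-seed
idea (this is not the crcsync code; the polynomial hash, bases, block
size and names are all made up for illustration). The client hashes
each cached block with both seeds and sends the pairs; the server only
treats a window of the new body as a hit when *both* checksums agree:

    M = (1 << 61) - 1          # large prime modulus (illustrative choice)

    def block_hash(block, base):
        """Polynomial hash of a block for a given base ('seed')."""
        h = 0
        for byte in block:
            h = (h * base + byte) % M
        return h

    def find_matches(data, block_size, targets, base_a=257, base_b=263):
        """Slide a window over data; report offsets where both hashes
        hit a client-advertised (hash_a, hash_b) pair.  Recomputed per
        window to keep the sketch short; a real implementation would
        roll both hashes forward one byte at a time."""
        matches = []
        for off in range(len(data) - block_size + 1):
            window = data[off:off + block_size]
            pair = (block_hash(window, base_a), block_hash(window, base_b))
            if pair in targets:
                matches.append((off, targets[pair]))
        return matches

    # Client side: hash each cached block with both seeds, advertise the pairs.
    cached = b"the reply we got last time, mostly unchanged boilerplate..."
    BS = 16
    targets = {}
    for i in range(0, len(cached) - BS + 1, BS):
        blk = cached[i:i + BS]
        targets[(block_hash(blk, 257), block_hash(blk, 263))] = i

    # Server side: the new body mostly matches, so most windows are hits.
    new_body = cached + b" ...plus a small edit at the end"
    print(find_matches(new_body, BS, targets))

With two independent hashes of similar width, the false-positive
probability is roughly the product of the two, which is what I'm
hoping makes streaming-before-final-verification tolerable.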

Or is the problem with the delta blocks we send?... (that doesn't seem
likely to prevent a streaming implementation, but maybe I'm missing
something)

> Since we're going to error out on the fail case, I'll switch the code to do
> 64-bit checksums (not right now, but soon: what we have is good enough for
> testing).

Do two hashes make the error condition so unlikely that we can assume
it won't happen in normal use? Also, delivery of HTTP payloads is not
guaranteed. As Tridge said, non-cacheable GETs may be non-idempotent,
but they sometimes fail to complete for any number of reasons, and the
user has a big fat Refresh button right there in the web browser.
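
Back-of-envelope, with invented numbers (and assuming the checksums
behave like independent uniform hashes, which real CRCs only
approximate):

    # Roughly how many false block matches do we expect per response?
    # Each of ~body_len window positions has about n_blocks / 2**bits
    # chance of colliding with one of the advertised block hashes.
    body_len = 1_000_000     # bytes in the new response (made-up)
    n_blocks = 40            # blocks the client advertised (made-up)

    def expected_false_matches(bits):
        return body_len * n_blocks / 2.0 ** bits

    print(expected_false_matches(30))   # one short checksum:   ~3.7e-02
    print(expected_false_matches(60))   # two such checksums:   ~3.5e-11
    print(expected_false_matches(64))   # one 64-bit checksum:  ~2.2e-12

If those rough numbers are anywhere near right, collision-induced
failures are far rarer than ordinary network failures.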

In other words, a blind, unchecked "delete-last-user" action in a
webapp is a bug in the webapp. It is OK for us to fail, as long as the
retry uses a different seed...
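
Something like this is what I have in mind for the retry path
(hypothetical header names and a stand-in keyed hash, purely to
illustrate the flow; it is not the actual crcsync protocol):

    import os, hashlib

    def build_request_headers(cached_blocks):
        """Pick a fresh seed for this attempt, hash the cached blocks
        with it, and advertise both in (made-up) request headers."""
        seed = os.urandom(4).hex()           # new seed on every attempt
        hashes = [
            hashlib.blake2b(b, digest_size=8,
                            key=bytes.fromhex(seed)).hexdigest()
            for b in cached_blocks
        ]
        return {
            "X-Crcsync-Seed":   seed,                 # hypothetical header
            "X-Crcsync-Hashes": ",".join(hashes),     # hypothetical header
        }

    # A transfer that errors out is simply retried; because the seed
    # differs, a collision that broke the first attempt is vanishingly
    # unlikely to recur on the second.
    first  = build_request_headers([b"block one", b"block two"])
    second = build_request_headers([b"block one", b"block two"])
    assert first["X-Crcsync-Seed"] != second["X-Crcsync-Seed"]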

cheers,



m
ps: cc'd the http-crcsync list, which is more appropriate...
-- 
 martin.langhoff at gmail.com
 martin at laptop.org -- School Server Architect
 - ask interesting questions
 - don't get distracted with shiny stuff
 - working code first
 - http://wiki.laptop.org/go/User:Martinlanghoff

