Created: 9 years, 6 months ago by enal
Modified: 9 years, 6 months ago
Reviewers: willchan no longer on Chromium, jar (doing other things), Alpha Left Google, enal1, commit-bot: I haz the power, rvargas (doing something else), ananta, scherkus (not reviewing)
CC: chromium-reviews, hclam+watch_chromium.org, cbentzel+watch_chromium.org, sjl, ddorwin+watch_chromium.org, fischman+watch_chromium.org, Paweł Hajdan Jr., acolwell GONE FROM CHROMIUM, annacc, darin-cc_chromium.org, ajwong+watch_chromium.org, vrk (LEFT CHROMIUM), scherkus (not reviewing)
Base URL: http://src.chromium.org/svn/trunk/src/
Visibility: Public.
Description: Do not allow compression when requesting multimedia, because ffmpeg does not expect compressed files. (It does not make sense to compress audio/video anyway...)
http://codereview.chromium.org/7044092
BUG=47381
TEST=http://www.deafmac.org/html5/grinchsample.mp4 (Please clear the browser cache first; otherwise it may contain the original gzip-ed file requested by an older browser.)
Committed: http://src.chromium.org/viewvc/chrome?view=rev&revision=89532
Committed: http://src.chromium.org/viewvc/chrome?view=rev&revision=89858
Patch Set 1 #
Total comments: 4
Patch Set 2 : '' #
Patch Set 3 : '' #
Patch Set 4 : '' #
Patch Set 5 : '' #
Patch Set 6 : '' #
Total comments: 1
Patch Set 7 : '' #
Patch Set 8 : '' #
Patch Set 9 : '' #
Messages
Total messages: 37 (0 generated)
+jar for reviewing net/*
http://codereview.chromium.org/7044092/diff/1/webkit/glue/media/buffered_reso... File webkit/glue/media/buffered_resource_loader_unittest.cc (right): http://codereview.chromium.org/7044092/diff/1/webkit/glue/media/buffered_reso... webkit/glue/media/buffered_resource_loader_unittest.cc:75: return value == WebString::fromUTF8("identity;q=1, *;q=0"); Checking for the exact string from the implementation is too strict. What about matching *;q=0 and identity; separately?
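A looser matcher along the lines suggested could, for instance, check the two components independently. This is a hypothetical Python sketch of the idea, not the actual WebKit test code:

```python
def advertises_identity_only(value: str) -> bool:
    """Return True if an Accept-Encoding value asks for identity and
    rejects everything else, without requiring one exact string."""
    parts = [p.strip() for p in value.split(",")]
    has_identity = any(p.startswith("identity") for p in parts)
    rejects_rest = "*;q=0" in parts
    return has_identity and rejects_rest

# Both orderings and extra whitespace are accepted:
assert advertises_identity_only("identity;q=1, *;q=0")
assert advertises_identity_only("*;q=0, identity;q=1")
assert not advertises_identity_only("gzip,deflate")
```

This way the test keeps passing if the implementation reorders the tokens or tweaks whitespace, while still verifying the semantics that matter.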
Can you adjust the comment and TODO per the details below? As a side comment, I like the approach taken. http://codereview.chromium.org/7044092/diff/1/net/url_request/url_request_htt... File net/url_request/url_request_http_job.cc (right): http://codereview.chromium.org/7044092/diff/1/net/url_request/url_request_htt... net/url_request/url_request_http_job.cc:405: // is probably an img or such (and SDCH encoding is not likely). Do you think this TODO() is satisfied? Perhaps it is not completely satisfied, and so you should move it? Perhaps you can improve your comment (line 401) to indicate that the Accept-Encoding is provided IF the content is known to have restrictions on potential encoding, such as multimedia. You could then add a TODO() around line 401 to suggest that jpeg files etc. should set up a request header if possible.
Addressed everything. http://codereview.chromium.org/7044092/diff/1/net/url_request/url_request_htt... File net/url_request/url_request_http_job.cc (right): http://codereview.chromium.org/7044092/diff/1/net/url_request/url_request_htt... net/url_request/url_request_http_job.cc:405: // is probably an img or such (and SDCH encoding is not likely). On 2011/06/09 21:24:37, jar wrote: > Do you think this Todo() is satisfied? Perhaps it is not completely satisfied, > and so you should move it?? > > Perhaps you can improve your comment (line 401) to indicate that > > the Accept-encoding is provided IF the content is known to have restrictions on > potential encoding, such as multi-media. > > > You could then add a TODO() around line 401 to suggest that jpeg files etc. > should set up a request header if possible. Done. http://codereview.chromium.org/7044092/diff/1/webkit/glue/media/buffered_reso... File webkit/glue/media/buffered_resource_loader_unittest.cc (right): http://codereview.chromium.org/7044092/diff/1/webkit/glue/media/buffered_reso... webkit/glue/media/buffered_resource_loader_unittest.cc:75: return value == WebString::fromUTF8("identity;q=1, *;q=0"); On 2011/06/09 20:30:25, Alpha wrote: > Check the exact string in the implementation is too strict. what about match > *;q=0 and identity; separately? Done.
LGTM
LGTM.
cool! you'll also want to update webkit/glue/media/simple_data_source
I'm also open to suggestions as to how to de-duplicate code, etc..
Added to simple data source. Not sure how to reuse the code -- we could probably create a helper function, but there is no obvious place to put it, and it's probably not worth creating a new file to save 3 lines. For now I just added those 3 lines...
LGTM
Presubmit check for 7044092-13001 failed and returned exit status 1. Running presubmit commit checks ... ** Presubmit ERRORS ** Missing LGTM from an OWNER for: net/url_request/url_request_http_job.cc Was the presubmit check useful? Please send feedback & hate mail to maruel@chromium.org!
Added willchan as reviewer, LGTM from jar is not enough... :-(
I'm a bit confused. Why is this an ffmpeg issue? Isn't the network stack decompressing the content before it makes it over to ffmpeg? Is it a problem if an XHR sets the Accept-Encoding header? Is there anything malicious that could happen? I'm just wondering if this is a concern, since before we'd overwrite anything that an XHR tried to set for it. Is it a bug if an XHR sets Accept-Encoding to include sdch? It looks like we won't correctly advertise dictionaries. What happens server side if we do this? This change is probably safe, but I just wanted to ask these questions first to make sure. On Tue, Jun 14, 2011 at 9:03 PM, <enal@chromium.org> wrote: > Added willchan as reviewer, LGTM from jar is not enough... > > :-( > > http://codereview.chromium.org/7044092/ >
Obviously, the network stack is not decompressing the content. If you can point me to where it is supposed to be done, I can try debugging to find out why it is not done. Compressing audio/video usually does not make sense, anyway...
On Tue, Jun 14, 2011 at 9:49 PM, Eugene Nalimov <enal@google.com> wrote:
> Obviously, network stack is not decompressing the content. If you can point me where it is supposed to be done I can try debugging to find out why it is not done.

Sorry, I'm not sure why it's obvious that this is the case. I probably don't know something that you guys do. net/url_request/url_request_job.cc has the relevant code. See URLRequestJob::NotifyHeadersComplete(), which calls SetupFilter() on success; that is overridden in URLRequestHttpJob::SetupFilter(). Then URLRequestJob::Read() will call ReadFilteredData() if a filter is present. The relevant filter in question is the GZipFilter.

> Compressing audio/video usually does not make sense, anyways...

I agree with this. But it seems like something that the server should control anyway, right? Accept-Encoding just says that the client supports whatever encoding. If we're working around server bugs, that's fine, but we should document it.
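The flow described above can be modeled roughly like this. It is a toy Python sketch, not Chromium's actual C++ classes; the names merely mirror the ones mentioned, for illustration:

```python
import gzip
import zlib

class GZipFilter:
    """Decompresses a gzip-encoded body chunk by chunk, loosely
    mirroring the role of net's GZipFilter."""
    def __init__(self):
        # 16 + MAX_WBITS tells zlib to expect a gzip header/trailer.
        self._inflater = zlib.decompressobj(16 + zlib.MAX_WBITS)

    def filter(self, raw: bytes) -> bytes:
        return self._inflater.decompress(raw)

class Job:
    """Toy stand-in for URLRequestJob: SetupFilter() picks a filter
    from the response headers; Read() routes through it if present."""
    def __init__(self, headers: dict, body: bytes):
        self.headers, self.body, self.filter = headers, body, None

    def notify_headers_complete(self):
        self.setup_filter()

    def setup_filter(self):
        if self.headers.get("Content-Encoding") == "gzip":
            self.filter = GZipFilter()

    def read(self) -> bytes:
        raw = self.body
        return self.filter.filter(raw) if self.filter else raw

job = Job({"Content-Encoding": "gzip"}, gzip.compress(b"hello"))
job.notify_headers_complete()
assert job.read() == b"hello"
```

The point of the real flow is the same: decompression happens inside the job's read path, transparently to the consumer above it.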
I found why compressed content does not work. When serving from the cache, we get "Accept-Ranges: bytes" even if the original response did not contain it. After that, we send range requests and get back data that zlib cannot handle. (There was a bug on our side as well -- fixed.) In any case, (1) compressed media content doesn't make sense, and (2) when auto-recognizing the format we can go to the end of the file -- and for a non-chunked compressed file we have to read all the data first...
On 2011/06/15 02:51:25, enal wrote: > I found why compressed content does not work. > > When serving from cache, we are getting "Accept-Ranges: bytes" even if original > response did not contain it. After that, we are sending ranges requests and > getting back data that zlib cannot handle. (There was bug on our side as well -- > fixed). I don't know why the cache would add a header that didn't previously exist in the response. But I don't know much about caching. I've added Ricardo in case he wants to enlighten us (I'd love to know!). And what do you mean by zlib not being able to handle this? What's happening? > > In any case, (1) compressed media content doesn't make sense, and (2) when > auto-recognizing format we can go to the end of the file -- and for non-chunked > compressed file we have to read all the data first... I agree in general with (1). It's not clear to me that clients should remove Accept-Encoding because of it, since I think the server should know whether or not compression makes sense. If this is a workaround because servers are sending gzip'd media that was already compressed, then I'm fine with that and we should add a comment to the effect (have you added one?). I agree (2) is unfortunate too. I think it's unwise for servers to compress this content. Note that Firefox discussed this previously and decided it was the server's responsibility too (https://bugzilla.mozilla.org/show_bug.cgi?id=489071#c8). In summary, I think that we should fix whatever we're doing that's causing us to get responses that we can't handle. And as for changing Accept-Encoding to remove gzip,deflate, I am not thrilled about it, especially given Firefox's stance (which I think is correct), but if you guys disagree, I am open to approving this with a comment in the code that clarifies that we're doing this to workaround servers which are misconfigured.
On Wed, Jun 15, 2011 at 10:02 AM, <willchan@chromium.org> wrote:
> I don't know why the cache would add a header that didn't previously exist in the response. But I don't know much about caching. I've added Ricardo in case he wants to enlighten us (I'd love to know!). And what do you mean by zlib not being able to handle this? What's happening?

We went through the whole stream when recognizing the format; after that we have to return to the beginning. We send a new request, and at that time the data is in the cache. We get a header saying that the server supports ranges (it was not in the original response, when the data was not cached), and send a range request. We get the data, and the zlib library chokes on it because it cannot handle the gzip header. My guess is that we are getting data starting from the specified byte in the stream as if it were not compressed, then feeding that data into the filter to decompress...

BTW, that is not the only case where cached data is wrong. If your Accept-Encoding says you don't want compressed data, but you have compressed data in the cache, you will get it back compressed... (of course this time the headers would be correct)

> If this is a workaround because servers are sending gzip'd media that was already compressed, then I'm fine with that and we should add a comment to the effect (have you added one?).

Will add the comment shortly. The original bug explains why it may be a good idea to handle such data -- somebody set up a web server but did not tell it not to compress multimedia, and after that Chrome cannot display the content. It is very hard to figure out why, especially because Safari and Firefox display it correctly. I agree that the server may be misconfigured, but working around it is not very hard.
Added comment to URLRequestHttpJob::AddExtraHeaders() explaining that we are working around misconfigured servers.
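The resulting header logic amounts to something like the following sketch (Python for brevity; the non-media default list here is an assumption for illustration, not Chromium's exact value):

```python
def accept_encoding_for(is_media_request: bool) -> str:
    """Sketch of the Accept-Encoding choice described in this CL."""
    if is_media_request:
        # Work around misconfigured servers that gzip media: offer
        # only identity, and explicitly reject everything else.
        return "identity;q=1, *;q=0"
    # Assumed default for ordinary requests (illustrative only).
    return "gzip,deflate"

assert accept_encoding_for(True) == "identity;q=1, *;q=0"
```

The `*;q=0` part matters: it does not just omit gzip, it tells a conforming server that any encoding other than identity is unacceptable.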
As a note on my philosophy for Accept-Encoding: we are already modulating "accept-encoding" (varying it) based upon what we have learned about the channel for SDCH encoding. We remove the "sdch," from the list when we have evidence that it is not working. The common cause is that a poorly written proxy is damaging a stream using such an encoding, but we also degrade (remove the offer to accept) when a server is misconfigured enough to make such a protocol problematic (i.e., it was plausible that an intermediary had damaged a stream).

In this CL, it sounds like we have some advance evidence that an encoding would be a "bad thing to do," and we have other evidence that sometimes (somehow) the encoding is used. As a result, IMO, pro-active removal of the offer to "accept-encoding" is a wise thing to do, with no apparent downside and plenty of apparent upside.

It is also the case that some proxies remove (or change) accept-encoding settings, so servers have to be able to handle that. The most(?) common proxy change is to change all the value elements of accept-encoding into "X" characters! Silly proxies use this substitution algorithm, as it has no impact on header size etc., and "encourages" the server to not compress <sigh>. This is seen commonly in local AV proxies <double sigh for impact of AV stuff>. As a result, servers tend to be very able to deal with restricted accept-encoding values.

That said, it would be nice to identify any bug we can in *our* code that needs to be corrected, but it would not surprise me to hear that for some channels (or servers), this change is the cleanest solution. I don't really care if other browsers (re: FF or IE) try to put the burden on the server (or the channel), as all I want is to deliver clean content to the user. This change seems completely standards-compliant, so it seems to be little more than good defensive coding (instigated by real-world problems).
LGTM.

I'm sorry it took you so long to debug this; I'm pretty sure I mentioned it in the past.

The whole issue is that we want to use sparse caching for media resources, and we use byte ranges to do that. That's not a problem by itself, even if the content is compressed (as in: the cache knows what range to ask for, and the server plays along without issues). The problem arises from the fact that we decompress the traffic transparently above the cache.

If the client (the media player) issues a regular request, we'll reconstruct the response from the cache/net and it will be decompressed without issues. However, if the client issues a byte range request (as we do), we have a mismatch between what the client thinks it is requesting vs. what it actually means (the client has no visibility into what the actual byte range is, given that the decompression is transparent). And, of course, the filter (gzip, for instance) usually has no idea that it is dealing with a range (it could), so it doesn't know how to decompress it ('cause it was not designed to handle byte ranges).

So the whole point is that if the client uses byte ranges, it has to make sure that the content is not encoded.

http://codereview.chromium.org/7044092/diff/21002/net/url_request/url_request... File net/url_request/url_request_http_job.cc (right): http://codereview.chromium.org/7044092/diff/21002/net/url_request/url_request... net/url_request/url_request_http_job.cc:404: // For details see bug 47381 -- customer misconfigured server so it would Nit: I would just mention the bug number and leave the rest of the comment out (after this).
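The mismatch is easy to demonstrate: a byte range sliced out of a gzip stream is not itself a gzip stream, so a filter that expects a whole stream can neither decompress it nor recover the bytes the client actually asked for. A minimal, self-contained illustration:

```python
import gzip
import hashlib
import zlib

# Build an incompressible but deterministic "resource".
data = b"".join(hashlib.sha256(str(i).encode()).digest()
                for i in range(4096))          # 131072 bytes
compressed = gzip.compress(data)

# A full response decompresses transparently without trouble.
assert gzip.decompress(compressed) == data

# But a byte range sliced out of the *compressed* stream is useless
# to a filter expecting a whole gzip stream (16 + MAX_WBITS = gzip
# framing): it fails on the missing header, and in no case yields
# the bytes the client asked for.
chunk = compressed[1000:2000]
try:
    recovered = zlib.decompressobj(16 + zlib.MAX_WBITS).decompress(chunk)
except zlib.error:
    recovered = None
assert recovered != data[1000:2000]
```

This is exactly the sparse-cache scenario: the media player asks for decompressed offsets, while the wire and cache hold compressed ones.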
Thanks for the explanation Ricardo! It sounds like it is indeed a bug to send Accept-Encoding if we're using byte ranges for sparse caching. LGTM2.
Thank you for the detailed explanation.

Stupid question: is it possible for the cache not to send "Accept-Ranges: bytes" to the client if the content is compressed? (I.e., "Content-Encoding" contains "gzip" or "sdch".) Byte range requests would not work for such content, anyway.

Thanks,
Eugene
On 2011/06/16 16:13:16, enal1 wrote:
> Stupid question: is it possible for cache not to send "Accept-Ranges: bytes" to the client if content is compressed? (I.e. "Content-encoding" contains "gzip" or "sdch"). Byte range requests would not work for such content, anyways.

The cache doesn't modify the headers provided by the server... well, it actually modifies the values of some headers, but in general we don't remove or add headers to a response.

The problem is that we don't know if the client will issue a byte range request or not: we may have a resource stored as a regular, non-sparse entry and still return a byte range if one is requested... or we may have a sparse entry (and that means that we rely on asking the server for byte ranges) and return the whole resource if the client doesn't ask for a byte range. Also, the cache is not really aware of what filters are implemented above it, or what their capabilities are.

Furthermore, even with the cache out of the picture, if the client issues a multipart range request, I believe our current gzip filter will mess up everything if the content is compressed. Ummmm... we don't even need a multipart request; a single range will confuse it.

BTW, as far as I know the media player doesn't wait to see if the server advertises byte range support before it starts issuing byte range requests.
Does this imply that any time we do a byte-range request, we should assume the requester made this request relative to some decompressed data (since those are the only byte ranges known by our internal modules)... and hence the network stack should make the request "clearer" by insisting on no compression? (I.e., by removing all the accept-encodings?)

Thanks,
Jim
quick sanity check question before submitting... should the media cache not send accept-gzip so each client doesn't have to do it themselves?
I believe that depends on the exact content and how the client handles the data. If the client is not asking for byte ranges, everything should be OK. E.g., I don't believe compressed jpeg files would ever be a problem...

On Thu, Jun 16, 2011 at 5:07 PM, <scherkus@chromium.org> wrote:
> quick sanity check question before submitting... should the media cache not send accept-gzip so each client doesn't have to do it themselves?
ok cool do you want to update that comment nit? then I'll click the box
On 2011/06/16 20:45:33, jar wrote: > Does this imply than any time we do a byte-range-request, we should assume > the requester had made this request relative to some decompressed data > (since those are the only byte ranges known by our internal modules).... and > hence the network stack should make the request "clearer" by insisting on no > compression?? (i.e., by removing all the accept-encodings?). That's probably a safe assumption. I think it is theoretically possible for the requester to specify some content encoding that we don't understand, hoping to receive the stream without modifications, but I don't think that will work (or that we want that). So yes, it could be possible for the network stack to enforce that rule. As for doing that in the case of the media cache, currently there is nothing specific to that instance, so range requests can be issued no matter what cache we are dealing with (or if it is disabled). BTW, I just realized that this CL doesn't add a net unit test for the new behavior :(
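If the network stack were to enforce that rule itself, it could look roughly like this. This is a hypothetical sketch, not Chromium code; the function name and header-dict shape are invented for illustration:

```python
def prepare_headers(extra_headers: dict) -> dict:
    """Hypothetical enforcement of the rule discussed above: when the
    caller already set a Range header, advertise identity only, since
    byte offsets are meaningful only for the unencoded resource."""
    headers = dict(extra_headers)
    if "Range" in headers:
        headers["Accept-Encoding"] = "identity;q=1, *;q=0"
    return headers

out = prepare_headers({"Range": "bytes=0-1023"})
assert out["Accept-Encoding"] == "identity;q=1, *;q=0"
assert "Accept-Encoding" not in prepare_headers({})
```

That would generalize the media-specific fix in this CL to every range request, which is essentially what jar's question suggests.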
Updated comment, added unit test that checks Accept-Encoding behavior.
LGTM -- net/ folks what say you?
LGTM, thanks for the test.
Change committed as 89532
Adding ananta@chromium.org as reviewer, disabling 2 net layer tests for chrome frame...
On 2011/06/20 23:59:16, enal1 wrote: > Adding ananta@chromium.org as reviewer, disabling 2 net layer tests for chrome frame... LGTM for the chrome frame changes.
Change committed as 89858