Chromium Code Reviews
Created: 7 years, 10 months ago by rmsousa
Modified: 7 years, 9 months ago
CC: chromium-reviews, cbentzel+watch_chromium.org, darin-cc_chromium.org, Joao da Silva
Base URL: http://git.chromium.org/chromium/src.git@master
Visibility: Public.
Description
Fix URLFetcher upload retry handling.
For static content, we need to store the content and create UploadDataStreams on demand (so we basically have to have separate handling for static content and SetUploadDataStream).
For SetUploadDataStream, we either leave it as is but forbid its use with retry enabled (the approach in this CL), or the signature must be changed to take an UploadDataStreamFactory rather than a simple UploadDataStream.
BUG=175038
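The "store the content, create UploadDataStreams on demand" option can be sketched as below. This is a hedged, self-contained model, not Chromium's actual API: `ByteStream`, `RetryingFetcher`, and their methods are hypothetical stand-ins for the single-use UploadDataStream and for URLFetcherCore's static-content path.

```cpp
// Hypothetical sketch of the "store bytes, mint a fresh stream per attempt"
// idea from the CL description. Names here are stand-ins, not Chromium APIs.
#include <algorithm>
#include <cassert>
#include <cstddef>
#include <memory>
#include <string>

// Stand-in for an UploadDataStream: single-use, consumed by each request.
class ByteStream {
 public:
  explicit ByteStream(std::string bytes) : bytes_(std::move(bytes)) {}
  // Reads up to |max| bytes into |out|; returns how many bytes were copied.
  size_t Read(char* out, size_t max) {
    size_t n = std::min(max, bytes_.size() - pos_);
    bytes_.copy(out, n, pos_);
    pos_ += n;
    return n;
  }

 private:
  std::string bytes_;
  size_t pos_ = 0;
};

// Stores the static upload content and creates a new stream for every
// attempt, so a retry after a 5xx never depends on an already-consumed
// stream owned by the previous request.
class RetryingFetcher {
 public:
  void SetUploadData(std::string content) {
    upload_content_ = std::move(content);
  }
  std::unique_ptr<ByteStream> CreateStreamForAttempt() {
    return std::make_unique<ByteStream>(upload_content_);
  }

 private:
  std::string upload_content_;
};
```

Each attempt (initial request or retry) gets an independent stream, which is exactly what `set_upload` taking ownership requires.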
Patch Set 1 #
Total comments: 4
Patch Set 2 : Add DCHECK #
Patch Set 3 : Check is_chunked_upload_ explicitly #
Patch Set 4 : Explicitly forbid retries for chunked uploads #
Total comments: 2
Patch Set 5 : Revert http://crrev.com/178535 #
Total comments: 1
Messages
Total messages: 20 (0 generated)
Hi Matt, I don't know what the exact use case for SetUploadDataStream is, so I tried to leave it as is (and not allow retries), and only reverted SetUploadData to the previous retry behavior. Let me know if you have another preferred way of dealing with retries for that case. Thanks, Renato
On 2013/02/08 05:03:59, rmsousa wrote: > Hi Matt, > > I don't know what's the exact use case for SetUploadDataStream, so I tried to > leave it as is (and not allowing retries), and only reverted SetUploadData to > the previous retry behavior. Let me know if you have another preferred way of > dealing with retries for that case. > > Thanks, > Renato Oh, hm, I hadn't thought about retries when I added that. One thought, since URLFetcher is intended to be a simpler interface, maybe it shouldn't expose UploadDataStream and should just have a SetUploadData and SetUploadFile. That way it could handle generating a new UploadDataStream for either one. What do you think, mmenke and hashimoto?
Wonder how important automatically retrying on 5xx errors is for POSTs. Looks like there are a lot of places that use this class for POSTs; otherwise, I'd suggest just getting rid of the behavior for them. https://codereview.chromium.org/12224066/diff/1/net/url_request/url_fetcher_c... File net/url_request/url_fetcher_core.cc (right): https://codereview.chromium.org/12224066/diff/1/net/url_request/url_fetcher_c... net/url_request/url_fetcher_core.cc:766: } This isn't right in the chunked upload case, I believe. In that case, we don't want to call set_upload. Also, retrying in the chunked upload case doesn't work at all.
On 2013/02/11 17:31:35, Matt Menke wrote: > Wonder how important automatically retrying on 5xx errors is for POSTS. Looks > like there are a lot of places where use this class for POSTS, otherwise, I'd > suggest just getting rid of the behavior for them. > > https://codereview.chromium.org/12224066/diff/1/net/url_request/url_fetcher_c... > File net/url_request/url_fetcher_core.cc (right): > > https://codereview.chromium.org/12224066/diff/1/net/url_request/url_fetcher_c... > net/url_request/url_fetcher_core.cc:766: } > This isn't right in the chunked upload case, I believe. In that case, we don't > want to call set_upload. > > Also, retrying in the chunked upload case doesn't work at all. One alternative approach would be to re-use the UploadDataStream for both requests - this would work with chunked uploads. UploadDataStreams already have to be restartable, due to redirects, so as long as we delete the old request first, this should be fine.
On 2013/02/11 17:34:32, mmenke wrote: > On 2013/02/11 17:31:35, Matt Menke wrote: > > Wonder how important automatically retrying on 5xx errors is for POSTS. Looks > > like there are a lot of places where use this class for POSTS, otherwise, I'd > > suggest just getting rid of the behavior for them. There are a few cases where it's being used basically as a GET, except it's posting a small piece of sensitive information that it doesn't want in the URL (e.g. the Gaia OAuth flows). Or cases that use POSTs instead of GETs because of some sort of XSRF protection. I think those are worth retrying. > One alternative approach would be to re-used the UploadDataStream for both > requests - this would work with chunked uploads. > > UploadDataStreams already have to be restartable, due to redirects, so as long > as we delete the old request first, should be fine. URLRequest::set_upload takes a scoped_ptr, so we can't keep ownership of the object (it's used in quite a few places, so I don't think it's a good idea to change it to a scoped_refptr now). UploadDataStream also doesn't have a copy constructor, so we're kind of stuck with creating/"spending" one every time we call into URLRequest.
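The ownership constraint being discussed here can be illustrated with a tiny model. This is a hedged sketch using `std::unique_ptr` as a stand-in for Chromium's `scoped_ptr`; the `Request` and `UploadStream` types are hypothetical, mirroring only the shape of `URLRequest::set_upload(scoped_ptr<UploadDataStream>)`.

```cpp
// Hypothetical model of why the fetcher cannot keep the stream for a retry:
// once ownership is moved into the request, the caller's pointer is empty.
#include <cassert>
#include <memory>
#include <string>
#include <utility>

struct UploadStream {
  std::string bytes;
};

// Stand-in for URLRequest: takes exclusive ownership of the upload stream.
class Request {
 public:
  void set_upload(std::unique_ptr<UploadStream> upload) {
    upload_ = std::move(upload);
  }
  UploadStream* get_upload() { return upload_.get(); }

 private:
  std::unique_ptr<UploadStream> upload_;
};
```

Because the caller's `unique_ptr` is null after the move, a retry has to construct a brand-new stream rather than hand the same one to a second request.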
https://codereview.chromium.org/12224066/diff/1/net/url_request/url_fetcher_c... File net/url_request/url_fetcher_core.cc (right): https://codereview.chromium.org/12224066/diff/1/net/url_request/url_fetcher_c... net/url_request/url_fetcher_core.cc:766: } On 2013/02/11 17:31:35, mmenke wrote: > This isn't right in the chunked upload case, I believe. In that case, we don't > want to call set_upload. Chunked uploads aren't allowed to have empty content types, so they shouldn't get into this condition. Of course, DCHECKing it doesn't hurt.
> > One alternative approach would be to re-used the UploadDataStream for both > > requests - this would work with chunked uploads. > > > > UploadDataStreams already have to be restartable, due to redirects, so as long > > as we delete the old request first, should be fine. > > URLRequest::set_upload takes a scoped_ptr, so we can't keep ownership of the > object (it's used in quite a few places, so I don't think it's a good idea to > change it to a scoped_refptr now). UploadDataStream also doesn't have a copy > constructor, so we're kind of stuck with creating/"spending" one every time we > call into URLRequest. Right, so we'd either need to reclaim it, or have a way to maintain ownership. https://codereview.chromium.org/12224066/diff/1/net/url_request/url_fetcher_c... File net/url_request/url_fetcher_core.cc (right): https://codereview.chromium.org/12224066/diff/1/net/url_request/url_fetcher_c... net/url_request/url_fetcher_core.cc:766: } On 2013/02/11 22:57:54, rmsousa wrote: > On 2013/02/11 17:31:35, mmenke wrote: > > This isn't right in the chunked upload case, I believe. In that case, we > don't > > want to call set_upload. > > Chunked uploads aren't allowed to have empty content types, so they shouldn't > get into this condition. Of course, DCHECKing it doesn't hurt. > set_upload should never be called for chunked upload, but this code does, which is incorrect.
> Right, so we'd either need to reclaim it, or have a way to maintain ownership. Either option has its complications. There are other places outside of net/url_request that use URLRequest::set_upload directly, and places that use URLRequest::get_upload to get a (non-scoped) UploadDataStream reference, then use that reference directly. To be able to reclaim the pointer, we need to guarantee that URLRequest will never delete it or pass it somewhere else before we try to reclaim it from URLFetcher, and guarantee that URLFetcher itself always reclaims it before it deletes a URLRequest for a retry. Since there are other places in the code which use URLRequest::get_upload to get a direct (non-scoped) pointer to the upload data, this will all make the overall ownership/lifetime harder to manage. Likewise, if we change the signature for set_upload to a scoped_refptr (or some other kind of shared pointer), it'll be harder to enforce across the codebase the restriction that the UploadDataStream can only be used by one object at a time. https://codereview.chromium.org/12224066/diff/1/net/url_request/url_fetcher_c... File net/url_request/url_fetcher_core.cc (right): https://codereview.chromium.org/12224066/diff/1/net/url_request/url_fetcher_c... net/url_request/url_fetcher_core.cc:766: } On 2013/02/11 23:05:01, mmenke wrote: > On 2013/02/11 22:57:54, rmsousa wrote: > > On 2013/02/11 17:31:35, mmenke wrote: > > > This isn't right in the chunked upload case, I believe. In that case, we > don't > > want to call set_upload. > > Chunked uploads aren't allowed to have empty content types, so they shouldn't > get into this condition. Of course, DCHECKing it doesn't hurt. > set_upload should never be called for chunked upload, but this code does, which is incorrect.
What I meant is that the DCHECKs in the SetUpload* functions imply that chunked uploads always have a non-empty upload_content_type and empty upload_content_ and upload_data_stream_, so they will never fit the conditions in the if/else if. But yeah, we can always make the check explicit.
On 2013/02/12 00:18:28, rmsousa wrote: > > Right, so we'd either need to reclaim it, or have a way to maintain ownership. > > Either option has its complications. There are other places that outside of > net/url_request that use URLRequest::set_upload directly, and places that use > URLRequest::get_upload to get a (non-scoped) UploadDataStream reference, then > use that reference directly. > > To be able to reclaim the pointer we need to guarantee that URLRequest will > never delete it or pass it somewhere else before we try to reclaim it from > URLFetcher, and guarantee that URLFetcher itself always reclaims it before it > deletes a URLRequest for a retry. Since there are other places in the code which > use URLRequest::get_upload to get a direct (non-scoped) pointer to the upload > data, this will all make the overall ownership/lifetime harder to manage. > > Likewise, if we change the signature for set_upload to a scoped_refptr (or some > other kind of shared pointer), it'll be harder to enforce across the codebase > the restriction that the UploadDataStream can only be used by one object at a > time. We have exclusive ownership of the URLRequest. The only external class that can touch it, I believe, is the NetworkDelegate, which clearly cannot call get_upload after the URLRequest is destroyed (as the URLFetcherCore owns the URLRequest, which owns get_upload, no one else can ensure its existence). Anyway, if we're not going to allow automatic resuming of chunked uploads, I want to explicitly prohibit them. > https://codereview.chromium.org/12224066/diff/1/net/url_request/url_fetcher_c... > File net/url_request/url_fetcher_core.cc (right): > > https://codereview.chromium.org/12224066/diff/1/net/url_request/url_fetcher_c... > net/url_request/url_fetcher_core.cc:766: } > On 2013/02/11 23:05:01, mmenke wrote: > > On 2013/02/11 22:57:54, rmsousa wrote: > > > On 2013/02/11 17:31:35, mmenke wrote: > > > > This isn't right in the chunked upload case, I believe. 
In that case, we > > > don't > > > > want to call set_upload. > > > > > > Chunked uploads aren't allowed to have empty content types, so they > shouldn't > > > get into this condition. Of course, DCHECKing it doesn't hurt. > > > > > > > set_upload should never be called for chunked upload, but this code does, > which > > is incorrect. > > What I meant is that the DCHECKS in the SetUpload* functions imply that chunked > uploads always have non-empty upload_content_type and empty > upload_content_upload and _data_stream_, so they will never fit the conditions > in the if/else if. But yeah, we can always make the check explicit. Sorry, I missed that this was an "else if", and not an "else".
> We have exclusive ownership of the URLRequest. Only external class that can > touch it, I believe, is the NetworkDelegate, which clearly cannot call > get_upload after the URLRequest is destroyed (As the URLFetcherCore owns the > URLRequest, which owns get_upload, no one else can ensure its existence). We own the URLRequests that we create, but there is also other code elsewhere that creates its own URLRequests independently. So whatever solution we come up with must not break their URLRequest::set_upload() ownership semantics. If we try to create something for URLFetcher use only, like a scoped_ptr<UploadDataStream> URLRequest::take_back_upload() API, it would depend on URLRequest itself not destroying the upload data before the URLFetcher has a chance to take it back (which it currently does on redirects). > Anyways, if we're not going to allow automatic resuming of chunked uploads, I > want to explicitly prohibit them. Looking at the code, it seems like retrying chunked uploads would still "sort of" work - if a request were retried, the chunks that were already appended to the previous request might be lost, but new AppendChunkToUpload calls would still be forwarded to the new request and work (i.e. old chunks might have been lost, but new chunks would still be sent). Anyway, nobody seems to be using that behavior right now, so it seems it's a good opportunity to explicitly prohibit retrying -- to ensure nobody tries to do it and gets some unexpected behavior.
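The explicit prohibition agreed on here (Patch Set 4) could, in spirit, look like the following guard. This is a hedged sketch: the `Fetcher` class and its methods are hypothetical, and Chromium would enforce this with DCHECKs rather than boolean returns (booleans are used here only so the guard is testable).

```cpp
// Hypothetical model of "retries and chunked uploads are mutually
// exclusive", checked at the point where either feature is configured.
#include <cassert>

class Fetcher {
 public:
  // Returns false (instead of DCHECKing, for testability) when the
  // requested configuration contradicts the existing one.
  bool SetChunkedUpload() {
    if (max_retries_on_5xx_ > 0) return false;  // retries already enabled
    is_chunked_upload_ = true;
    return true;
  }
  bool SetMaxRetriesOn5xx(int max_retries) {
    if (is_chunked_upload_ && max_retries > 0) return false;  // forbidden
    max_retries_on_5xx_ = max_retries;
    return true;
  }

 private:
  bool is_chunked_upload_ = false;
  int max_retries_on_5xx_ = 0;
};
```

Checking in both setters means the invariant holds regardless of the order in which callers configure the fetcher.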
On 2013/02/12 02:06:21, rmsousa wrote: > > We have exclusive ownership of the URLRequest. Only external class that can > > touch it, I believe, is the NetworkDelegate, which clearly cannot call > > get_upload after the URLRequest is destroyed (As the URLFetcherCore owns the > > URLRequest, which owns get_upload, no one else can ensure its existence). > > We own the URLRequests that we create, but there is also other code elsewhere > that creates their own URLRequests independently. So whatever solution we come > up with must not break their URLRequest::set_upload() ownership semantics. > > If we try to create something for URLFetcher use only, like a > scoped_ptr<UploadDataStream> URLRequest::take_back_upload() API, it would depend > on URLRequest itself not destroying the upload data before the URLFetcher has a > chance to take it back (which it currently does on redirects). > > > Anyways, if we're not going to allow automatic resuming of chunked uploads, I > > want to explicitly prohibit them. > > Looking at the code, it seems like it retrying chunked uploads would still "sort > of" work - If a request were retried, the chunks that were already appended to > the previous request might be lost, but new AppendChunkToUpload calls would > still be forwarded to the new request and work (i.e. old chunks might have been > lost, but new chunks would still be sent). This isn't quite accurate. We don't read a response when sending a body - so we'd send the entire body, and then send a new chunked upload. The new chunked upload would wait for a body, but never get one, since the consumer would think it's already done sending the body. > > Anyway, nobody seems to be using that behavior right now, so it seems it's a > good opportunity to explicitly prohibit retrying -- to ensure nobody tries to do > it and gets some unexpected behavior.
On 2013/02/12 02:06:21, rmsousa wrote: > > We have exclusive ownership of the URLRequest. Only external class that can > > touch it, I believe, is the NetworkDelegate, which clearly cannot call > > get_upload after the URLRequest is destroyed (As the URLFetcherCore owns the > > URLRequest, which owns get_upload, no one else can ensure its existence). > > We own the URLRequests that we create, but there is also other code elsewhere > that creates their own URLRequests independently. So whatever solution we come > up with must not break their URLRequest::set_upload() ownership semantics. > > If we try to create something for URLFetcher use only, like a > scoped_ptr<UploadDataStream> URLRequest::take_back_upload() API, it would depend > on URLRequest itself not destroying the upload data before the URLFetcher has a > chance to take it back (which it currently does on redirects). Oh, and calling set_upload after an upload has already been set will DCHECK, so nothing to worry about there, either. We'd want to either have a private function to do this and the class friend us, or have a function to provide an upload without passing ownership, again, which should probably be private. Neither is particularly pretty.
On 2013/02/12 00:18:28, rmsousa wrote: > > Right, so we'd either need to reclaim it, or have a way to maintain ownership. > > Either option has its complications. There are other places that outside of > net/url_request that use URLRequest::set_upload directly, and places that use > URLRequest::get_upload to get a (non-scoped) UploadDataStream reference, then > use that reference directly. > > To be able to reclaim the pointer we need to guarantee that URLRequest will > never delete it or pass it somewhere else before we try to reclaim it from > URLFetcher, and guarantee that URLFetcher itself always reclaims it before it > deletes a URLRequest for a retry. Since there are other places in the code which > use URLRequest::get_upload to get a direct (non-scoped) pointer to the upload > data, this will all make the overall ownership/lifetime harder to manage. Since there is no guarantee, no one should expect the UploadDataStream returned by URLRequest::get_upload to live long, and it seems all users of get_upload are already following this rule. I don't think we need to worry about get_upload users. > > Likewise, if we change the signature for set_upload to a scoped_refptr (or some > other kind of shared pointer), it'll be harder to enforce across the codebase > the restriction that the UploadDataStream can only be used by one object at a > time. > > https://codereview.chromium.org/12224066/diff/1/net/url_request/url_fetcher_c... > File net/url_request/url_fetcher_core.cc (right): > > https://codereview.chromium.org/12224066/diff/1/net/url_request/url_fetcher_c... > net/url_request/url_fetcher_core.cc:766: } > On 2013/02/11 23:05:01, mmenke wrote: > > On 2013/02/11 22:57:54, rmsousa wrote: > > > On 2013/02/11 17:31:35, mmenke wrote: > > > > This isn't right in the chunked upload case, I believe. In that case, we > > > don't > > > > want to call set_upload. 
> > > > > > Chunked uploads aren't allowed to have empty content types, so they > shouldn't > > > get into this condition. Of course, DCHECKing it doesn't hurt. > > > > > > > set_upload should never be called for chunked upload, but this code does, > which > > is incorrect. > > What I meant is that the DCHECKS in the SetUpload* functions imply that chunked > uploads always have non-empty upload_content_type and empty > upload_content_upload and _data_stream_, so they will never fit the conditions > in the if/else if. But yeah, we can always make the check explicit.
On 2013/02/12 03:40:42, mmenke wrote: > On 2013/02/12 02:06:21, rmsousa wrote: > > > We have exclusive ownership of the URLRequest. Only external class that can > > > touch it, I believe, is the NetworkDelegate, which clearly cannot call > > > get_upload after the URLRequest is destroyed (As the URLFetcherCore owns the > > > URLRequest, which owns get_upload, no one else can ensure its existence). > > > > We own the URLRequests that we create, but there is also other code elsewhere > > that creates their own URLRequests independently. So whatever solution we come > > up with must not break their URLRequest::set_upload() ownership semantics. > > > > If we try to create something for URLFetcher use only, like a > > scoped_ptr<UploadDataStream> URLRequest::take_back_upload() API, it would > depend > > on URLRequest itself not destroying the upload data before the URLFetcher has > a > > chance to take it back (which it currently does on redirects). > > Oh, and calling set_upload after an upload already have been set will DCHECK, so > nothing to worry about there, either. > > We'd want to either have a private function to do this and the class friend us, > or have a function to provide an upload without passing ownership, again, which > should probably be private. Neither is particularly pretty. What I mean is something like this: https://code.google.com/searchframe#OAMlx_jo-ck/src/net/url_request/url_reque... URLRequest has a particular condition in which it destroys the upload data on redirects. If URLFetcher doesn't take ownership before URLRequest destroys the data, it won't have it if it decides to retry after the URLRequest finishes. If URLFetcher tries to take ownership on *every* redirect, we risk pulling the data from under the URLRequest in cases where it *wouldn't* have destroyed it (i.e. would want to reuse it after the redirect). 
We'd end up needing to either duplicate the logic inside the URLRequest's redirect handling, or needing to pass URLRequest a callback so that it can call it to give back the upload data whenever it would want to destroy it. I think we need a simple/safe fix for crbug.com/175038 (that can be merged into M26) - we can then follow up with a more complicated/risky change for M27 later.
On 2013/02/12 23:03:12, rmsousa wrote: > On 2013/02/12 03:40:42, mmenke wrote: > > On 2013/02/12 02:06:21, rmsousa wrote: > > > > We have exclusive ownership of the URLRequest. Only external class that > can > > > > touch it, I believe, is the NetworkDelegate, which clearly cannot call > > > > get_upload after the URLRequest is destroyed (As the URLFetcherCore owns > the > > > > URLRequest, which owns get_upload, no one else can ensure its existence). > > > > > > We own the URLRequests that we create, but there is also other code > elsewhere > > > that creates their own URLRequests independently. So whatever solution we > come > > > up with must not break their URLRequest::set_upload() ownership semantics. > > > > > > If we try to create something for URLFetcher use only, like a > > > scoped_ptr<UploadDataStream> URLRequest::take_back_upload() API, it would > > depend > > > on URLRequest itself not destroying the upload data before the URLFetcher > has > > a > > > chance to take it back (which it currently does on redirects). > > > > Oh, and calling set_upload after an upload already have been set will DCHECK, > so > > nothing to worry about there, either. > > > > We'd want to either have a private function to do this and the class friend > us, > > or have a function to provide an upload without passing ownership, again, > which > > should probably be private. Neither is particularly pretty. > > What I mean is something like this: > > https://code.google.com/searchframe#OAMlx_jo-ck/src/net/url_request/url_reque... > > URLRequest has a particular condition in which it destroys the upload data on > redirects. If URLFetcher doesn't take ownership before URLRequest destroys the > data, it won't have it if it decides to retry after the URLRequest finishes. If > URLFetcher tries to take ownership on *every* redirect, we risk pulling the data > from under the URLRequest in cases where it *wouldn't* have destroyed it (i.e. > would want to reuse it after the redirect). 
> > We'd end up needing to either duplicate the logic inside the URLRequest's > redirect handling, or needing to pass URLRequest a callback so that it can call > it to give back the upload data whenever it would want to destroy it. > > I think we need a simple/safe fix for crbug.com/175038 (that can be merged into > M26) - we can then follow up with a more complicated/risky change for M27 later. Does anything depend on the CL that broke this yet? If not, I'd suggest just reverting it for M26.
On 2013/02/13 02:54:47, mmenke wrote: > On 2013/02/12 23:03:12, rmsousa wrote: > > On 2013/02/12 03:40:42, mmenke wrote: > > > On 2013/02/12 02:06:21, rmsousa wrote: > > > > > We have exclusive ownership of the URLRequest. Only external class that > > can > > > > > touch it, I believe, is the NetworkDelegate, which clearly cannot call > > > > > get_upload after the URLRequest is destroyed (As the URLFetcherCore owns > > the > > > > > URLRequest, which owns get_upload, no one else can ensure its > existence). > > > > > > > > We own the URLRequests that we create, but there is also other code > > elsewhere > > > > that creates their own URLRequests independently. So whatever solution we > > come > > > > up with must not break their URLRequest::set_upload() ownership semantics. > > > > > > > > If we try to create something for URLFetcher use only, like a > > > > scoped_ptr<UploadDataStream> URLRequest::take_back_upload() API, it would > > > depend > > > > on URLRequest itself not destroying the upload data before the URLFetcher > > has > > > a > > > > chance to take it back (which it currently does on redirects). > > > > > > Oh, and calling set_upload after an upload already have been set will > DCHECK, > > so > > > nothing to worry about there, either. > > > > > > We'd want to either have a private function to do this and the class friend > > us, > > > or have a function to provide an upload without passing ownership, again, > > which > > > should probably be private. Neither is particularly pretty. > > > > What I mean is something like this: > > > > > https://code.google.com/searchframe#OAMlx_jo-ck/src/net/url_request/url_reque... > > > > URLRequest has a particular condition in which it destroys the upload data on > > redirects. If URLFetcher doesn't take ownership before URLRequest destroys the > > data, it won't have it if it decides to retry after the URLRequest finishes. 
> If > > URLFetcher tries to take ownership on *every* redirect, we risk pulling the > data > > from under the URLRequest in cases where it *wouldn't* have destroyed it (i.e. > > would want to reuse it after the redirect). > > > > We'd end up needing to either duplicate the logic inside the URLRequest's > > redirect handling, or needing to pass URLRequest a callback so that it can > call > > it to give back the upload data whenever it would want to destroy it. > > > > I think we need a simple/safe fix for crbug.com/175038 (that can be merged > into > > M26) - we can then follow up with a more complicated/risky change for M27 > later. > > Does anything depend on the CL that broke this yet? If not, I'd suggest just > reverting it for M26. Nothing yet, we can certainly revert it for now.
lgtm https://codereview.chromium.org/12224066/diff/7002/net/url_request/url_fetche... File net/url_request/url_fetcher_core.cc (right): https://codereview.chromium.org/12224066/diff/7002/net/url_request/url_fetche... net/url_request/url_fetcher_core.cc:349: DCHECK_EQ(max_retries_on_network_changes_, 0); Same for max_retries_on_network_changes_ https://codereview.chromium.org/12224066/diff/7002/net/url_request/url_fetche... net/url_request/url_fetcher_core.cc:368: DCHECK_EQ(max_retries_on_network_changes_, 0); Same here
Please ignore that lgtm, it was a misbehaved mouse :-) I just wanted to chime in to mention that this also fixes issue 175825. If this fix goes in then see the inline comments wrt max_retries_on_network_changes. Reverting http://crrev.com/178535 also fixes this for M26.
An automatic revert won't work - there's some stuff built on top of the change (even if it doesn't exactly use the feature that the change introduced) I've manually reverted the change, trying to preserve the unrelated fixes built on top of it (mostly making some conditions/CHECKs more lax). I also kept the unit test for the retry behavior. Please take a look. (note: I don't have committer status, so once this is in trunk someone with the appropriate superpowers will need to merge this back to M26) https://codereview.chromium.org/12224066/diff/16001/net/url_request/url_fetch... File net/url_request/url_fetcher_core.cc (left): https://codereview.chromium.org/12224066/diff/16001/net/url_request/url_fetch... net/url_request/url_fetcher_core.cc:754: DCHECK(is_chunked_upload_ || upload_content_); One of the changes that came after http://crrev.com/178535 was to explicitly allow empty content with empty content type (http://crrev.com/181039). To preserve that, I removed this DCHECK entirely rather than reverting to the old one.
Withdrawn, Matt already sent a revert for review: https://codereview.chromium.org/12261025 On 2013/02/13 23:05:08, rmsousa wrote: > An automatic revert won't work - there's some stuff built on top of the change > (even if it doesn't exactly use the feature that the change introduced) > I've manually reverted the change, trying to preserve the unrelated fixes built > on top of it (mostly making some conditions/CHECKs more lax). I also kept the > unit test for the retry behavior. > Please take a look. > (note: I don't have committer status, so once this is in trunk someone with the > appropriate superpowers will need to merge this back to M26) > > https://codereview.chromium.org/12224066/diff/16001/net/url_request/url_fetch... > File net/url_request/url_fetcher_core.cc (left): > > https://codereview.chromium.org/12224066/diff/16001/net/url_request/url_fetch... > net/url_request/url_fetcher_core.cc:754: DCHECK(is_chunked_upload_ || > upload_content_); > One of the changes that came after http://crrev.com/178535 was to explicitly > allow empty content with empty content type (http://crrev.com/181039). To > preserve that, I removed this DCHECK entirely rather than reverting to the old > one. |