Chromium Code Reviews

Created: 4 years, 8 months ago by davidben
Modified: 4 years, 3 months ago
Reviewers: nasko
CC: chromium-reviews, loading-reviews_chromium.org, darin-cc_chromium.org, jam
Base URL: https://chromium.googlesource.com/chromium/src.git@master
Target Ref: refs/pending/heads/master
Project: chromium
Visibility: Public.
Description

Report invalid URLs as ERR_INVALID_URL rather than ERR_ABORTED.
Before feeding a URL into the net stack, we check
ChildProcessSecurityPolicy. This means that URLRequest's internal
is_valid check hits the illegal URL codepath which currently reports
ERR_ABORTED. ERR_ABORTED is used for downloads and canceling requests
and gives no UI feedback.
Instead, map those failures to ERR_INVALID_URL (or ERR_ACCESS_DENIED
if the cause was illegal file upload data).
BUG=604779
TEST=Navigating to https://davidben.net/invalid_redirect gives an
error page.
Patch Set 1 #
Patch Set 2 : Add a test #
Total comments: 10
Patch Set 3 : #
Total comments: 1

Messages
Total messages: 22 (3 generated)
Description was changed from:

> better error reporting
> TODO: tests, description
> BUG=604779

to the description shown above.
davidben@chromium.org changed reviewers: + mmenke@chromium.org, nasko@chromium.org
I think this is probably the best way to resolve this issue? Adding nasko to the review as well as mmenke, since you two might know more about ChildProcessSecurityPolicy and whether there's a reason we need it to show ERR_ABORTED and silently skip those navigations.
File: content/browser/loader/resource_dispatcher_host_impl.cc (right):
https://codereview.chromium.org/1903133004/diff/20001/content/browser/loader/...

Line 228: void AbortRequestBeforeItStarts(ResourceMessageFilter* filter,

Abort -> Fail?

Line 310: return net::ERR_INVALID_URL;

This seems a fine error code for !url.is_valid(), but not for other errors.

Line 310: return net::ERR_INVALID_URL;

Also, I think F5 / reload is the reason not to commit an error page with the new URL... And what about auto-reload? If we're concerned about invalid URLs in particular, it seems like we could just let them through, though I suppose then "smart" users could fix deliberately malformed URLs (http;//bad.com, I guess?). Not sure what our security model is here.
File: content/browser/loader/resource_dispatcher_host_impl.cc (right):
https://codereview.chromium.org/1903133004/diff/20001/content/browser/loader/...

Line 228: void AbortRequestBeforeItStarts(ResourceMessageFilter* filter,
On 2016/04/21 15:28:15, mmenke wrote:
> Abort -> Fail?

Done.

Line 310: return net::ERR_INVALID_URL;
On 2016/04/21 15:28:15, mmenke wrote:
> This seems a fine error code for !url.is_valid(), but not for other errors.

The other cases are things like magic schemes or special about:foo URLs. It seems reasonable to treat those as invalid URLs, I think.

Line 310: return net::ERR_INVALID_URL;
On 2016/04/21 15:28:15, mmenke wrote:
> Also, I think F5 / reload is the reason not to commit an error page with the
> new URL... And what about auto-reload?

It doesn't commit with the new URL, actually. Just the old one. Auto-reload's fine, I think? I don't know offhand whether it blacklists ERR_INVALID_URL, but auto-reload hits tons of errors where it'd be pointless to do so. I don't see a huge difference between this and any other URL.

> If we're concerned about invalid URLs in particular, it seems like we could
> just let them through, though I suppose then "smart" users could fix
> deliberately malformed URLs (http;//bad.com, I guess?). Not sure what our
> security model is here.

Blink seems to reject invalid URLs pretty early, actually. But responding with ERR_INVALID_URL instead of ERR_ABORTED, should there be some codepath that doesn't, seems pretty reasonable.

I'm not a huge fan of letting them through. We'd expose more codepaths to invalid URLs and I'm not sure what that'd do.
File: content/browser/loader/resource_dispatcher_host_impl.cc (right):
https://codereview.chromium.org/1903133004/diff/20001/content/browser/loader/...

Line 310: return net::ERR_INVALID_URL;
On 2016/04/21 19:00:25, davidben wrote:
> It doesn't commit with the new URL, actually. Just the old one. Auto-reload's
> fine, I think? I don't know offhand whether it blacklists ERR_INVALID_URL,
> but auto-reload hits tons of errors where it'd be pointless to do so. I don't
> see a huge difference between this and any other URL.

How does it know not to commit with the new URL? If I do something like document.location="http://0.0.0.0:7/spam" the failure URL does indeed commit... And if it commits the old URL, the user sees "http://evil.com" with an ERR_INVALID_URL, even if it tried to navigate to something it shouldn't.
File: content/browser/loader/resource_dispatcher_host_impl.cc (right):
https://codereview.chromium.org/1903133004/diff/20001/content/browser/loader/...

Line 310: return net::ERR_INVALID_URL;
On 2016/04/21 19:10:05, mmenke wrote:
> How does it know not to commit with the new URL? If I do something like
> document.location="http://0.0.0.0:7/spam" the failure URL does indeed
> commit... And if it commits the old URL, the user sees "http://evil.com" with
> an ERR_INVALID_URL, even if it tried to navigate to something it shouldn't.

Oh hrm, that's true. I was thinking of redirects, where it commits the URL right before this one. Perhaps we should leave it as ERR_ABORTED then? Though it does say ERR_INVALID_URL, and it's not like a network attacker can't cause random reasonable URLs to commit with network error pages. (Just trigger a network error.)

nasko: Thoughts?
On 2016/04/21 19:20:05, davidben wrote:
> Oh hrm, that's true. I was thinking redirects, where it commits the URL right
> before this one. Perhaps we should leave it as ERR_ABORTED then?
> nasko: Thoughts?

And just for the record: my thinking was we could just return an error on invalid URLs - heck, we could even let them through, if we wanted. We have invalid URL checks further down, which should catch them.
https://code.google.com/p/chromium/codesearch#chromium/src/net/url_request/ur...
On 2016/04/21 19:29:27, mmenke wrote:
> And just for the record: my thinking was we could just return an error on
> invalid URLs - heck, we could even let them through, if we wanted. We have
> invalid URL checks further down, which should catch them.
> https://code.google.com/p/chromium/codesearch#chromium/src/net/url_request/ur...

Wouldn't that have the same problem? (Yeah, we have invalid URL checks all over the stack.)
On 2016/04/21 19:37:06, davidben wrote:
> Wouldn't that have the same problem? (Yeah, we have invalid URL checks all
> over the stack.)

We'd be displaying the invalid URL in the omnibox and displaying a page with "ERR_INVALID_URL". Extensions would presumably see it, and other random stuff as well. I'm not sure if there's really a problem with that. Sure, we'd try to auto-reload them, but that should fail again as well. My main concern here is really with URLs blocked for some other reason.
On 2016/04/21 19:41:07, mmenke wrote:
> We'd be displaying the invalid URL in the omnibox and displaying a page with
> "ERR_INVALID_URL". Extensions would presumably see it, and other random stuff
> as well.

Oh, if we did that, we may have to be careful about where we pass the invalid URL - running platform URL handlers on an invalid URL, for instance, just doesn't seem like a great idea.
File: content/browser/loader/resource_dispatcher_host_impl.cc (right):
https://codereview.chromium.org/1903133004/diff/20001/content/browser/loader/...

Line 310: return net::ERR_INVALID_URL;
On 2016/04/21 19:20:04, davidben wrote:
> Perhaps we should leave it as ERR_ABORTED then?
> nasko: Thoughts?

I had this comment typed up in a previous version of the patch:
---
Can we make it a bit more generic? There are various ways this check can fail. ERR_NOT_AUTHORIZED? Or is this going to be a more invasive change? I have a similar problem I'm facing in a different context, so having a generic error for "this renderer asked for something that it isn't allowed to access" is going to be nice!
---

File: content/browser/loader/resource_dispatcher_host_impl.cc (right):
https://codereview.chromium.org/1903133004/diff/40001/content/browser/loader/...

Line 292: // Consults the RendererSecurity policy to determine whether the

nit: RendererSecurity doesn't exist anywhere but this comment.
File: content/browser/loader/resource_dispatcher_host_impl.cc (right):
https://codereview.chromium.org/1903133004/diff/20001/content/browser/loader/...

Line 310: return net::ERR_INVALID_URL;
On 2016/04/21 20:30:00, nasko (slow) wrote:
> Can we make it a bit more generic? There are various ways this check can
> fail. ERR_NOT_AUTHORIZED? Or is this going to be a more invasive change?

There's ERR_ACCESS_DENIED. The nuisance is that this would fire even for invalid URLs, which would look a little weird. That, or we either special-case that *again* before calling CanRequestURL, or make CanRequestURL return a net::Error (which looks like it'll be rather involved, though I can do it if you like).

I figured that ERR_INVALID_URL worked for what I expected was the common case (GURL::is_valid failing), and it was half-plausible for the others too. If a webpage fetches a chrome-extension:// URL it's not allowed to access, some magical about:inducebrowsercrashforrealz URL, or something of the sort, acting as if we couldn't parse the URL seemed pretty sound. One could argue that those magic URLs are browser implementation details, and rather than responding with "Okay, you got me. I *do* understand that URL. But I'm not gonna let you load it. Bye", we should stick our fingers in our ears with "What? I don't know what about:inducebrowsercrashforrealz means. Uh. You must have meant to dial somewhere else..."

Whereas the ResourceRequestBody::Element check is clearly not a check on the URL, so ERR_ACCESS_DENIED seemed better.

Caveat: I'm not really sure exactly what cases hit this. I was hoping you'd know it more globally. :-)
On 2016/04/21 19:41:07, mmenke wrote:
> We'd be displaying the invalid URL in the omnibox and displaying a page with
> "ERR_INVALID_URL". Extensions would presumably see it, and other random stuff
> as well. I'm not sure if there's really a problem with that. Sure, we'd try
> to auto-reload them, but that should fail again as well. My main concern here
> is really with URLs blocked for some other reason.

Oh hrm. Yeah, committing a blocked URL as a net error page may well cause badness when we go back and forward or reload or something. I'm now imagining if we got one of the funny about: URLs in there.

nasko@, WDYT? Should we punt this and go back to ERR_ABORTED? Firefox has the same "bug" that this is trying to fix, so I'm not terribly attached to fixing it. (Though if it is a problem, the idea that returning specifically the ERR_ABORTED net error code is important is... distressing.)
On 2016/04/29 22:09:46, davidben (slow) wrote:
> nasko@, WDYT? Should we punt this and go back to ERR_ABORTED? Firefox has the
> same "bug" that this is trying to fix, so I'm not terribly attached to fixing
> it.

Could we just return ERR_INVALID_URL for URLs that are actually invalid? I suppose there's still the issue of an invalid URL becoming valid across browser restarts...
On 2016/05/02 17:06:01, mmenke wrote:
> Could we just return ERR_INVALID_URL for URLs that are actually invalid? I
> suppose there's still the issue of an invalid URL becoming valid across
> browser restarts...

We could maybe only do it on redirect, which will commit the old URL. It's kind of defensible: there's a difference between the renderer flat-out requesting a URL it's not allowed to (in which case giving it ERR_ABORTED or any other bizarre thing seems perfectly fair game) and the renderer fetching a URL it's allowed to fetch that happens to redirect to one it isn't.
On 2016/05/02 17:57:06, davidben (slow) wrote:
> We could maybe only do it on redirect, which will commit the old URL. It's
> kind of defensible: there's a difference between the renderer flat-out
> requesting a URL it's not allowed to (in which case giving it ERR_ABORTED or
> any other bizarre thing seems perfectly fair game) and the renderer fetching
> a URL it's allowed to fetch that happens to redirect to one it isn't.

Are these really all that fundamentally different? I suppose one may require a compromised renderer trying to do bad things (or a broken renderer), while the other represents a site trying to do bad things (or a broken site), but in terms of whether to actually commit the URL we're being redirected to, I don't see a difference.
On 2016/05/02 19:01:26, mmenke wrote:
> Are these really all that fundamentally different? I suppose one may require
> a compromised renderer trying to do bad things (or a broken renderer), while
> the other represents a site trying to do bad things (or a broken site), but
> in terms of whether to actually commit the URL we're being redirected to, I
> don't see a difference.

We reject before following the redirect, so it seems we actually commit the old URL in that case.
On 2016/05/02 19:27:20, davidben (slow) wrote:
> We reject before following the redirect, so it seems we actually commit the
> old URL in that case.

Oh, right, I'm following now. That seems much safer, though a bit strange in how different it is from the non-redirect case.
Can we close this CL? Going to go ahead and remove myself from it - I just have too many dead CLs on my list, making keeping track of the non-dead ones more painful. Feel free to add me back if needed.
mmenke@chromium.org changed reviewers: - mmenke@chromium.org
On 2016/09/08 18:07:05, mmenke wrote:
> Can we close this CL? Going to go ahead and remove myself from it - I just
> have too many dead CLs on my list, making keeping track of the non-dead ones
> more painful. Feel free to add me back if needed.

Yeah, let's close it. It seems the proper fix will be different anyway.
