Created: 5 years, 3 months ago by Charlie Harrison
Modified: 5 years, 2 months ago
CC: blundell+watchlist_chromium.org, chromium-reviews, droger+watchlist_chromium.org, sdefresne+watchlist_chromium.org
Base URL: https://chromium.googlesource.com/chromium/src.git@hijack_bryans_cl
Target Ref: refs/pending/heads/master
Project: chromium
Visibility: Public.
Description: Added some browsertests to PageLoadMetrics change
BUG=382542
Committed: https://crrev.com/0b7f2aec3f912c47810b3d044eabbc320cbca170
Cr-Commit-Position: refs/heads/master@{#351603}
Patch Set 1
Patch Set 2: moved to chrome browsertest
Total comments: 2
Patch Set 3: add OWNERS file to chrome/browser/page_load_metrics
Patch Set 4: Added ctor/dtor and DISALLOW COPY/ASSIGN to test
Patch Set 5: updated histogram names

Messages
Total messages: 29 (5 generated)
csharrison@chromium.org changed reviewers: + bmcquade@chromium.org, jochen@chromium.org
@jochen, what is the correct way to proceed if a browser test depends on renderer and browser? I instrument the content shell with a RenderFrameObserver, and the shell's web contents in turn with a WebContentsObserver.
you can't depend on both renderer and browser. what are you trying to achieve with the renderer parts?
On 2015/09/29 07:35:18, jochen wrote:
> you can't depend on both renderer and browser.
> what are you trying to achieve with the renderer parts?

We want to write an end-to-end test to verify that our render and browser components interact with each other correctly. Could we create a new directory 'browsertests' and have that directory depend on both renderer and browser?
On 2015/09/29 at 11:39:10, bmcquade wrote:
> We want to write an end-to-end test to verify that our render and browser components interact with each other correctly. Could we create a new directory 'browsertests' and have that directory depend on both renderer and browser?

You can't depend on both browser and renderer. Like, cannot, not just must not. You'll have to write your test such that the browser can verify that the interaction was correct.
On 2015/09/29 11:52:00, jochen wrote:
> You can't depend on both browser and renderer. Like, cannot, not just must not. You'll have to write your test such that the browser can verify that the interaction was correct.

Makes sense. Can you give more concrete guidance on how one would write an end to end test to verify that a render component sends an expected IPC, and that expected IPC is processed by a browser component? Seems like having tests to verify end to end behavior like this is rather important.
On 2015/09/29 12:40:29, Bryan McQuade wrote:
> Makes sense. Can you give more concrete guidance on how one would write an end to end test to verify that a render component sends an expected IPC, and that expected IPC is processed by a browser component? Seems like having tests to verify end to end behavior like this is rather important.

(drive-by)

Obviously I don't have full context here, so take my comments for what you think they're worth:

- Those individual expectations sound like they could be verified by unit tests of the renderer- and browser-side code respectively.
- The browser test could then be simply something that sets up the interaction (e.g., loading a webpage) and verifies that the expected high-level result occurred (e.g., that page load metrics were incremented successfully).
On 2015/09/29 12:44:35, blundell wrote:
> - Those individual expectations sound like they could be verified by unit tests of the renderer- and browser-side code respectively.
> - The browser test could then be simply something that sets up the interaction (e.g., loading a webpage) and verifies that the expected high-level result occurred (e.g., that page load metrics were incremented successfully).

Unit tests are an important piece, but do not verify end to end correctness.

We have unit tests that verify expectations in our render component: that the render component sends expected IPCs to the browser.

We have unit tests that verify expectations in our browser component: that the browser component processes incoming IPCs from the renderer.

However we don't have a test to e2e verify that IPCs generated by the render component are correctly delivered and processed by the browser component. This is important in addition to unit tests. Imagine, for example, that some other component starts affecting IPC delivery in a way that our IPCs are not routed correctly from render to browser component in production Chrome. Our existing unit tests continue to work, but things silently break in production. I'd like some level of testing to verify end to end correctness so we could catch such regressions. I'd rather not wait to find out about the breakage after Chrome has been pushed to users with a bug that breaks our component.

If there's a way to write a full end to end Chrome test that brings up a Chrome instance, drives it, and verifies that certain histograms get logged, that would be ok for us as well. Is there a different mechanism to do this kind of end to end testing that other teams have used to verify end to end browser behavior?
On 2015/09/29 13:17:27, Bryan McQuade wrote:
> If there's a way to write a full end to end Chrome test that brings up a Chrome instance, drives it, and verifies that certain histograms get logged, that would be ok for us as well.

The problem is that the content shell does not set up a MetricsWebContentsObserver to drive IPCs from the renderer, so we attach both render and web contents observers in one place. Another possible solution would be to attach an Observer earlier in the stack, so all shells have the Observer attached.
On 2015/09/29 13:17:27, Bryan McQuade wrote:
> However we don't have a test to e2e verify that IPCs generated by the render component are correctly delivered and processed by the browser component. This is important in addition to unit tests.

Apologies for the confusion. I didn't mean to imply that an integration test (i.e., browsertest) wasn't important here; just that the individual unittests could verify the component parts, and the browsertest could then just verify the high-level result (i.e., the metrics being logged after a page load).

I'm not an expert on creating browsertests, but my guess is that you should be able to do something similar to e.g. //components/dom_distiller/content/browser/distillable_page_utils_browsertest.cc:
- Load a URL (see the LoadURL() function in that file).
- Verify whatever post-condition you want to, e.g. that some specific metrics were logged.
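To make that pattern concrete, here is a minimal sketch of a //chrome-level browsertest along those lines. It is not the CL's actual test: the histogram name is a placeholder, and the exact embedded-test-server setup call may differ between Chromium revisions.

  #include "base/test/histogram_tester.h"
  #include "chrome/test/base/in_process_browser_test.h"
  #include "chrome/test/base/ui_test_utils.h"
  #include "net/test/embedded_test_server/embedded_test_server.h"
  #include "url/gurl.h"

  IN_PROC_BROWSER_TEST_F(InProcessBrowserTest, RecordsPageLoadHistogram) {
    ASSERT_TRUE(embedded_test_server()->Start());

    base::HistogramTester histogram_tester;
    // Drive a full page load through the real browser and renderer processes.
    ui_test_utils::NavigateToURL(
        browser(), embedded_test_server()->GetURL("/title1.html"));
    // Some page load metrics are only flushed when the page is torn down, so
    // navigate away before checking.
    ui_test_utils::NavigateToURL(browser(), GURL("about:blank"));

    // Verify the browser-side effect of the renderer's IPC: a UMA sample was
    // recorded. "PageLoad.ExampleMetric" is a placeholder histogram name.
    histogram_tester.ExpectTotalCount("PageLoad.ExampleMetric", 1);
  }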
On 2015/09/29 13:33:11, csharrison wrote:
> The problem is that the content shell does not set up a MetricsWebContentsObserver to drive IPCs from the renderer, so we attach both render and web contents observers in one place. Another possible solution would be to attach an Observer earlier in the stack, so all shells have the Observer attached.

Our replies crossed here. Is the issue that the content shell has neither the WCO nor the render-side observer set up? If so, it seems like maybe this should be a //chrome-level browsertest, since it's testing full-system functionality that exists at the //chrome level?
On 2015/09/29 13:40:15, blundell wrote:
> Our replies crossed here. Is the issue that the content shell has neither the WCO nor the render-side observer set up? If so, it seems like maybe this should be a //chrome-level browsertest, since it's testing full-system functionality that exists at the //chrome level?

Yeah, that's the real problem. I think this could be set up as a chrome-level browsertest if that's the right move here.
On 2015/09/29 13:57:10, csharrison wrote:
> Yeah, that's the real problem. I think this could be set up as a chrome-level browsertest if that's the right move here.

OK, I understand better now :). //components-level browsertests are kind of a funny beast, since they're presumably testing whole-system functionality that's not in //content and yet there's no //components-level whole-system "thing" to test. In this case it seems appropriate for the browsertest to be in //chrome since the only instantiation of the MetricsRenderFrameObserver in a product is in //chrome.
On 2015/09/29 14:01:12, blundell wrote:
> OK, I understand better now :). //components-level browsertests are kind of a funny beast, since they're presumably testing whole-system functionality that's not in //content and yet there's no //components-level whole-system "thing" to test. In this case it seems appropriate for the browsertest to be in //chrome since the only instantiation of the MetricsRenderFrameObserver in a product is in //chrome.

Thanks for the advice, that really clears things up. I'll update the tests.
Moving to chrome/ worked out. @jochen PTAL?
Wow, this is great. Glad to see this working when moved to chrome/. Can we add a per-file owners entry for this new file, to match our owners in the component? Thanks jochen and blundell for your suggestions.
LGTM with owners file addition
lgtm with comment

https://codereview.chromium.org/1317263004/diff/20001/chrome/browser/page_loa...
File chrome/browser/page_load_metrics/page_load_metrics_browsertest.cc (right):

https://codereview.chromium.org/1317263004/diff/20001/chrome/browser/page_loa...
chrome/browser/page_load_metrics/page_load_metrics_browsertest.cc:13: public:
plz add a ctor and dtor

https://codereview.chromium.org/1317263004/diff/20001/chrome/browser/page_loa...
chrome/browser/page_load_metrics/page_load_metrics_browsertest.cc:22: };
disallow copy/assign
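For readers unfamiliar with the Chromium conventions being requested, a minimal sketch of what those two comments ask the test fixture to look like (the class name is illustrative, not necessarily the patchset's actual code):

  #include "base/macros.h"
  #include "chrome/test/base/in_process_browser_test.h"

  class PageLoadMetricsBrowserTest : public InProcessBrowserTest {
   public:
    // Explicit constructor and destructor, as requested in the review.
    PageLoadMetricsBrowserTest() {}
    ~PageLoadMetricsBrowserTest() override {}

   private:
    // Disallows copy and assignment (macro from base/macros.h), as requested.
    DISALLOW_COPY_AND_ASSIGN(PageLoadMetricsBrowserTest);
  };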
On 2015/09/30 00:56:24, Bryan McQuade wrote:
> Can we add a per-file owners entry for this new file, to match our owners in the component?

nit: it can just be an OWNERS file in //chrome/browser/page_load_metrics that's a copy of the one in //components/page_load_metrics.
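For reference, such a //chrome/browser/page_load_metrics/OWNERS file would simply list the same reviewers as the component-level file; the addresses below are illustrative placeholders, not necessarily the actual contents of //components/page_load_metrics/OWNERS at the time.

  bmcquade@chromium.org
  csharrison@chromium.org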
The CQ bit was checked by csharrison@chromium.org
The patchset sent to the CQ was uploaded after l-g-t-m from bmcquade@chromium.org, jochen@chromium.org
Link to the patchset: https://codereview.chromium.org/1317263004/#ps60001 (title: "Added ctor/dtor and DISALLOW COPY/ASSIGN to test")
CQ is trying da patch. Follow status at https://chromium-cq-status.appspot.com/patch-status/1317263004/60001 View timeline at https://chromium-cq-status.appspot.com/patch-timeline/1317263004/60001
The CQ bit was unchecked by commit-bot@chromium.org
Try jobs failed on following builders: win_chromium_x64_rel_ng on tryserver.chromium.win (JOB_FAILED, http://build.chromium.org/p/tryserver.chromium.win/builders/win_chromium_x64_...)
The CQ bit was checked by csharrison@chromium.org
CQ is trying da patch. Follow status at https://chromium-cq-status.appspot.com/patch-status/1317263004/60001 View timeline at https://chromium-cq-status.appspot.com/patch-timeline/1317263004/60001
Message was sent while issue was closed.
Committed patchset #4 (id:60001)
Message was sent while issue was closed.
Patchset 4 (id:??) landed as https://crrev.com/0b7f2aec3f912c47810b3d044eabbc320cbca170 Cr-Commit-Position: refs/heads/master@{#351603}
Message was sent while issue was closed.
A revert of this CL (patchset #4 id:60001) has been created in https://codereview.chromium.org/1376283002/ by jwd@chromium.org. The reason for reverting is: AnchorLink and NewPage tests failing on bots https://build.chromium.org/p/chromium.chromiumos/builders/Linux%20ChromiumOS%....