Created: 4 years ago by Zhongyi Shi
Modified: 4 years ago
CC: chromium-reviews, cbentzel+watch_chromium.org
Target Ref: refs/pending/heads/master
Project: chromium
Visibility: Public.
Description
Change http_server_properties_manager to always persist data to disk 60s after receiving an update request. All the update requests received during that gap will be bundled into the first request.
BUG=670519
Committed: https://crrev.com/3ba27dc9d493c491e6172b6537d44617a22ea81a
Cr-Commit-Position: refs/heads/master@{#439395}
Patch Set 1 #
Total comments: 2
Patch Set 2 : address rch's comments and rebase after unittests have been fixed. #
Patch Set 3 : add regression test #
Total comments: 4
Patch Set 4 : Updated patchset dependency #
Patch Set 5 : rebase with upstream committed #
Messages
Total messages: 25 (11 generated)
Description was changed from

==========
Change http_server_properties_manager to always persist data to disk after 60s from the received update request. All the update requests received during the gap will bundled to the frist request.

BUG=670519
==========

to

==========
Change http_server_properties_manager to always persist data to disk after 60s from the received update request. All the update requests received during the gap will bundled to the first request.

BUG=670519
==========
zhongyi@chromium.org changed reviewers: + rch@chromium.org, xunjieli@chromium.org
Hi xunjieli@ and rch@, I have verified that this new patch works for Cronet. Every 60s, all the updates are bundled and written to disk, and at a new startup, all the persisted data is successfully loaded from disk into the properties. PTAL!
Description was changed from

==========
Change http_server_properties_manager to always persist data to disk after 60s from the received update request. All the update requests received during the gap will bundled to the first request.

BUG=670519
==========

to

==========
Change http_server_properties_manager to always persist data to disk after 60s from the receiving update request. All the update requests received during the gap will bundled to the first request.

BUG=670519
==========
On 2016/12/06 21:52:25, Zhongyi Shi wrote: > Hi xunjieli@ and rch@, > > I have verified this new patch works for cronet. Every 60s, all the updates are > bundled and written to disk. And at a new startup, all the persisted data get > successfully loaded from the disk to properties. PTAL! Do we need the same change on |pref_cache_update_timer_|? I haven't looked at the change yet. I think this deserves a regression test. Currently the scheduling logic is not well tested, but it should be.
I agree with xunjieli about tests :)

https://codereview.chromium.org/2554723003/diff/1/net/http/http_server_proper...
File net/http/http_server_properties_manager.cc (right):

https://codereview.chromium.org/2554723003/diff/1/net/http/http_server_proper...
net/http/http_server_properties_manager.cc:782: // Cancel the new update if there's one scheduled already.
This comment might be clearer if it were:
// Do not schedule a new update if there is already one scheduled.

https://codereview.chromium.org/2554723003/diff/1/net/http/http_server_proper...
net/http/http_server_properties_manager.cc:1037: network_prefs_update_timer_->Stop();
I don't think this is thread safe, since this timer, I thought, was on the network thread, but here we're touching it on the prefs thread.
I took a look at the file a bit more today. I'd like to propose splitting HttpServerPropertiesManager into two objects -- one that lives on the prefs thread and one that lives on the network thread. There are way too many Update* methods sprinkled in this class. It's easy to mix up UpdateCacheFromPrefsOnPrefsThread, UpdateCacheFromPrefsOnNetworkThread, UpdatePrefsFromCacheOnNetworkThread, and the other variants. The interactions between the cache (in-memory copy) and the prefs (on-disk copy) are buried in the code complexity. The current setup is hard to understand and error prone (there was also an infinite-loop problem which was fixed a while back). If we split the manager into two objects, we get better modularity & encapsulation and can do smarter things with scheduling if desired. Cherie, are you willing to take this on? I'd imagine we could have a simple design doc and start lifting out the thread-specific members into their own classes.
I agree that this manager is currently too complex and hides potential bugs. A refactoring to split it in two might be a good idea. However, I also want the disk-update logic to be fixed ASAP, so that the next Cronet release can ship with the fixed update and Cronet consumers can build new versions and re-evaluate the performance of racing cert verification. I am currently writing regression tests; can we get this CL landed first? We could add your proposal as a follow-up. Does that work for you?
On 2016/12/07 19:19:07, Zhongyi Shi wrote:
> I agreed that this manager is currently too complex and hides potential bugs. A
> refactoring to split it in two might be a good idea.
>
> However, I also want the Disk Update logic be fixed ASAP so that next cronet
> release could ship with the fixed Update and cronet consumers could build new
> versions and re-evaluate the performance for racing cert verification. I am
> currently writing regression tests, can we get this cl landed first? And we
> could add your proposal as a follow up. Does that work for you?

SGTM. ScheduleUpdate[Cache]OnPrefThread() has a similar issue: it calls timer->Stop() when it shouldn't. This CL fixes ScheduleUpdate[Prefs]OnNetworkThread(), but not ScheduleUpdate[Cache]OnPrefThread(). There is a parallelism between these two (cache vs. prefs), and I am concerned that we fix one while the other is still broken. But as long as we have tests, I am very happy :)
Patchset #2 (id:20001) has been deleted
Update: Regression test added. I will create a follow-up CL to fix persisting data from prefs to cache, since the bug we filed addresses persisting data to disk. PTAL!
The CQ bit was checked by rch@chromium.org
lgtm WOO HOO
The CQ bit was unchecked by commit-bot@chromium.org
This CL has an open dependency (Issue 2577863002 Patch 20001). Please resolve the dependency and try again. If you are sure that there is no real dependency, please use one of the options listed in https://goo.gl/9Es4OR to land the CL.
lgtm. One test suggestion below.

https://codereview.chromium.org/2554723003/diff/60001/net/http/http_server_pr...
File net/http/http_server_properties_manager_unittest.cc (right):

https://codereview.chromium.org/2554723003/diff/60001/net/http/http_server_pr...
net/http/http_server_properties_manager_unittest.cc:676: ExpectPrefsUpdateRepeatedly();
I know that in the test you asserted pref_task_runner_ does not have a task posted. To make the test expectation tighter, can we assert that this is called exactly 2 times (rather than using WillRepeatedly())?

https://codereview.chromium.org/2554723003/diff/60001/net/http/http_server_pr...
net/http/http_server_properties_manager_unittest.cc:677: ExpectScheduleUpdatePrefsOnNetworkThreadRepeatedly();
Same here. Can we assert that this is called exactly 3 times?
https://codereview.chromium.org/2554723003/diff/60001/net/http/http_server_pr...
File net/http/http_server_properties_manager_unittest.cc (right):

https://codereview.chromium.org/2554723003/diff/60001/net/http/http_server_pr...
net/http/http_server_properties_manager_unittest.cc:676: ExpectPrefsUpdateRepeatedly();
On 2016/12/16 15:48:08, xunjieli wrote:
> I know in the test you asserted pref_task_runner_ does not have task posted. To
> make the test expectation tighter, can we assert that this called exactly 2
> times (rather than using WillRepeatedly())?

Thanks a lot. Since the change you proposed will apply to all the unittests, I will do that in a separate CL.

https://codereview.chromium.org/2554723003/diff/60001/net/http/http_server_pr...
net/http/http_server_properties_manager_unittest.cc:677: ExpectScheduleUpdatePrefsOnNetworkThreadRepeatedly();
On 2016/12/16 15:48:08, xunjieli wrote:
> Same here. Can we assert that this called exactly 3 times?

This might be hard. Some of the tests call the same method with different args a different number of times. If we want to enforce a check on how many times this method is called, we will also need to specify the arguments, which are defined in the .h file.
The CQ bit was checked by zhongyi@chromium.org
The patchset sent to the CQ was uploaded after l-g-t-m from rch@chromium.org, xunjieli@chromium.org Link to the patchset: https://codereview.chromium.org/2554723003/#ps100001 (title: "rebase with upstream committed")
CQ is trying da patch. Follow status at https://chromium-cq-status.appspot.com/v2/patch-status/codereview.chromium.or...
CQ is committing da patch. Bot data: {"patchset_id": 100001, "attempt_start_ts": 1482112679398970, "parent_rev": "1983199ae8bbe581ac92cff0453c9a0c2f4bd123", "commit_rev": "6bc087f1c80b4cda3ed7759761d969ad7b51b3bb"}
Message was sent while issue was closed.
Description was changed from

==========
Change http_server_properties_manager to always persist data to disk after 60s from the receiving update request. All the update requests received during the gap will bundled to the first request.

BUG=670519
==========

to

==========
Change http_server_properties_manager to always persist data to disk after 60s from the receiving update request. All the update requests received during the gap will bundled to the first request.

BUG=670519
Review-Url: https://codereview.chromium.org/2554723003
==========
Committed patchset #5 (id:100001)
Description was changed from

==========
Change http_server_properties_manager to always persist data to disk after 60s from the receiving update request. All the update requests received during the gap will bundled to the first request.

BUG=670519
Review-Url: https://codereview.chromium.org/2554723003
==========

to

==========
Change http_server_properties_manager to always persist data to disk after 60s from the receiving update request. All the update requests received during the gap will bundled to the first request.

BUG=670519
Committed: https://crrev.com/3ba27dc9d493c491e6172b6537d44617a22ea81a
Cr-Commit-Position: refs/heads/master@{#439395}
==========
Patchset 5 (id:??) landed as https://crrev.com/3ba27dc9d493c491e6172b6537d44617a22ea81a Cr-Commit-Position: refs/heads/master@{#439395}