Chromium Code Reviews
Created: 4 years, 5 months ago by gabadie
Modified: 4 years, 2 months ago
CC: chromium-reviews, mikecase+watch_chromium.org, gabadie+watch_chromium.org, jbudorick+watch_chromium.org, lizeb+watch-android-loading_chromium.org
Base URL: https://chromium.googlesource.com/chromium/src.git@af00
Target Ref: refs/pending/heads/master
Project: chromium
Visibility: Public.
Description:
sandwich: Use cachetool's batch mode to speed up cache processing.

Before this CL, Sandwich forked a cachetool process for every modification
of the cache directory, which was slow. This CL fixes the issue by using
cachetool's brand new batch mode, which processes cnn.com's home page's
cache archive in under 3s versus 491s originally.
BUG=582080
Patch Set 1 #
Total comments: 22
Patch Set 2 : Avoid dead lock #
Patch Set 3 : Fixes a bug in OnlineCacheBackend._PushEnqueuedInsts() #
Patch Set 4 : Makes os.{read,write} non blocking #
Total comments: 3
Patch Set 5 : Adds comment around EAGAIN #
Patch Set 6 : Use Popen.communicate() #
Patch Set 7 : s/Online/Batch #
Depends on Patchset:

Messages
Total messages: 21 (3 generated)
gabadie@chromium.org changed reviewers: + mattcary@chromium.org
Hey Matt, here is a prototype of the Python side that uses cachetool's online mode to speed up cache processing. Please give your feedback (offline?).
https://codereview.chromium.org/2112013003/diff/1/tools/android/loading/chrom...
File tools/android/loading/chrome_cache.py (right):

https://codereview.chromium.org/2112013003/diff/1/tools/android/loading/chrom...
tools/android/loading/chrome_cache.py:33: Simple = 'simple'
Constants should be all caps: SIMPLE, BLOCKFILE

https://codereview.chromium.org/2112013003/diff/1/tools/android/loading/chrom...
tools/android/loading/chrome_cache.py:247: """Takes care of reading and deleting cached keys.
Add that this can be used as a context manager now, and what it does (i.e., suppress exceptions).

https://codereview.chromium.org/2112013003/diff/1/tools/android/loading/chrom...
tools/android/loading/chrome_cache.py:276: return [k.strip() for k in self._CachetoolCmd('list_keys').split('\n')[:-2]]
This -2 is a magic constant that's a little worrisome. Should make it a descriptive constant instead, and possibly assert that the output of cachetool is as expected.

https://codereview.chromium.org/2112013003/diff/1/tools/android/loading/chrom...
tools/android/loading/chrome_cache.py:383: del exc_type, exc_val, exc_tb
Shouldn't you return True if you're going to suppress the exception?

https://codereview.chromium.org/2112013003/diff/1/tools/android/loading/chrom...
tools/android/loading/chrome_cache.py:386: class OnlineCacheBackend(object):
Class comment, including context manager semantics.

https://codereview.chromium.org/2112013003/diff/1/tools/android/loading/chrom...
tools/android/loading/chrome_cache.py:414: self.Stop(force_stop=exc_type == CacheBackendError)
return True?

https://codereview.chromium.org/2112013003/diff/1/tools/android/loading/chrom...
tools/android/loading/chrome_cache.py:495: self._cachetool_process.stdin.write(inst_code)
This is dangerous. If the subprocess's stdout buffer fills up, the child will block writing to stdout and stop reading stdin, so stdin.write will block and you'll deadlock.

The easy solution is to use Popen.communicate, but that won't work in your case as you want to keep the process alive for multiple stdin/stdout roundtrips.

The way I've gotten around this is to have a read thread that buffers data. Then _UnpackResult will read that buffer instead. You need to have locks etc.

A pain, certainly, but deadlocks are very difficult to identify and so we should avoid known deadlock patterns even if it seems unlikely that we'll fill up the pipe buffers.

https://codereview.chromium.org/2112013003/diff/1/tools/android/loading/commo...
File tools/android/loading/common_util.py (right):

https://codereview.chromium.org/2112013003/diff/1/tools/android/loading/commo...
tools/android/loading/common_util.py:154: def TimeMeasurement(measurement_name):
If this isn't used in this patch, make a separate CL for it. Then, e.g., if the online cache CL is rolled back you won't accidentally break anything that has started to use TimeMeasurement.
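For reference, a minimal sketch of the reader-thread pattern described above, assuming a Python 2 environment like the loading tools; the cachetool command line and the 'list_keys' instruction are placeholders, not the CL's actual interface:

    import Queue  # Python 2 module; queue in Python 3.
    import subprocess
    import threading

    def _DrainStdout(stream, output_queue):
      """Continuously moves the child's stdout lines into a thread-safe queue."""
      for line in iter(stream.readline, ''):
        output_queue.put(line)
      stream.close()

    # Hypothetical invocation of a long-lived cachetool child process.
    process = subprocess.Popen(
        ['cachetool', '/path/to/cache', 'simple', 'batch'],
        stdin=subprocess.PIPE, stdout=subprocess.PIPE)
    stdout_queue = Queue.Queue()
    reader_thread = threading.Thread(
        target=_DrainStdout, args=(process.stdout, stdout_queue))
    reader_thread.daemon = True
    reader_thread.start()

    # The reader thread keeps emptying the child's stdout pipe, so writes to
    # stdin can no longer deadlock on a full stdout buffer.
    process.stdin.write('list_keys\n')
    process.stdin.flush()
    first_result_line = stdout_queue.get()  # Blocks only until a line arrives.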
Thanks! Uploaded a new revision. PTAL.

https://codereview.chromium.org/2112013003/diff/1/tools/android/loading/chrom...
File tools/android/loading/chrome_cache.py (right):

https://codereview.chromium.org/2112013003/diff/1/tools/android/loading/chrom...
tools/android/loading/chrome_cache.py:33: Simple = 'simple'
On 2016/07/01 12:09:29, mattcary wrote:
> Constants should be all caps: SIMPLE, BLOCKFILE

Done.

https://codereview.chromium.org/2112013003/diff/1/tools/android/loading/chrom...
tools/android/loading/chrome_cache.py:247: """Takes care of reading and deleting cached keys.
On 2016/07/01 12:09:29, mattcary wrote:
> Add that this can be used as a context manager now, and what it does (i.e.,
> suppress exceptions).

Done.

https://codereview.chromium.org/2112013003/diff/1/tools/android/loading/chrom...
tools/android/loading/chrome_cache.py:276: return [k.strip() for k in self._CachetoolCmd('list_keys').split('\n')[:-2]]
On 2016/07/01 12:09:29, mattcary wrote:
> This -2 is a magic constant that's a little worrisome. Should make it a
> descriptive constant instead, and possibly assert that the output of cachetool
> is as expected.

Oh, this change is because of an impl change in cachetool. Added a comment.

https://codereview.chromium.org/2112013003/diff/1/tools/android/loading/chrom...
tools/android/loading/chrome_cache.py:383: del exc_type, exc_val, exc_tb
On 2016/07/01 12:09:29, mattcary wrote:
> Shouldn't you return True if you're going to suppress the exception?

return False (the implicit return None was already enough).

https://codereview.chromium.org/2112013003/diff/1/tools/android/loading/chrom...
tools/android/loading/chrome_cache.py:386: class OnlineCacheBackend(object):
On 2016/07/01 12:09:29, mattcary wrote:
> Class comment, including context manager semantics.

Done.

https://codereview.chromium.org/2112013003/diff/1/tools/android/loading/chrom...
tools/android/loading/chrome_cache.py:414: self.Stop(force_stop=exc_type == CacheBackendError)
On 2016/07/01 12:09:29, mattcary wrote:
> return True?

Not really. The point here is just to stop cachetool in any case, not to catch and process errors. The point of del is just to mark the arguments as unused so that pylint is happy.

https://codereview.chromium.org/2112013003/diff/1/tools/android/loading/chrom...
tools/android/loading/chrome_cache.py:495: self._cachetool_process.stdin.write(inst_code)
On 2016/07/01 12:09:29, mattcary wrote:
> This is dangerous. If the subprocess's stdout buffer fills up, the child will
> block writing to stdout and stop reading stdin, so stdin.write will block and
> you'll deadlock.
>
> The easy solution is to use Popen.communicate, but that won't work in your case
> as you want to keep the process alive for multiple stdin/stdout roundtrips.
>
> The way I've gotten around this is to have a read thread that buffers data.
> Then _UnpackResult will read that buffer instead. You need to have locks etc.
>
> A pain, certainly, but deadlocks are very difficult to identify and so we
> should avoid known deadlock patterns even if it seems unlikely that we'll fill
> up the pipe buffers.

Good catch, I always forget that Python buffers stuff in subprocess pipes. Managed it without a thread by using old-fashioned pipes.

https://codereview.chromium.org/2112013003/diff/1/tools/android/loading/commo...
File tools/android/loading/common_util.py (right):

https://codereview.chromium.org/2112013003/diff/1/tools/android/loading/commo...
tools/android/loading/common_util.py:154: def TimeMeasurement(measurement_name):
On 2016/07/01 12:09:29, mattcary wrote:
> If this isn't used in this patch, make a separate CL for it. Then, e.g., if the
> online cache CL is rolled back you won't accidentally break anything that has
> started to use TimeMeasurement.

Done.
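For reference, a minimal illustration of the __exit__ semantics discussed above, using a hypothetical ExampleCacheBackend rather than the CL's class: a falsy return value (None or False) lets exceptions from the with-body propagate, returning True would suppress them, and del marks the arguments as intentionally unused for pylint.

    class ExampleCacheBackend(object):  # hypothetical name, not the CL's class
      def __enter__(self):
        return self

      def __exit__(self, exc_type, exc_val, exc_tb):
        del exc_val, exc_tb  # unused.
        # Always stop cachetool; force-stop it if the with-body raised.
        self.Stop(force_stop=exc_type is not None)
        return False  # Falsy: any exception raised in the body still propagates.

      def Stop(self, force_stop):
        del force_stop  # A real backend would terminate the cachetool child here.

    with ExampleCacheBackend() as backend:
      pass  # Cache operations; Stop() runs even if they raise.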
https://codereview.chromium.org/2112013003/diff/1/tools/android/loading/chrom...
File tools/android/loading/chrome_cache.py (right):

https://codereview.chromium.org/2112013003/diff/1/tools/android/loading/chrom...
tools/android/loading/chrome_cache.py:414: self.Stop(force_stop=exc_type == CacheBackendError)
On 2016/07/01 14:10:25, gabadie wrote:
> Not really. The point here is just to stop cachetool in any case, not to catch
> and process errors. The point of del is just to mark the arguments as unused so
> that pylint is happy.

I think prefixing the variables with an underscore is enough to make pylint happy. That's a little more clear than the del's, I think.

https://codereview.chromium.org/2112013003/diff/1/tools/android/loading/chrom...
tools/android/loading/chrome_cache.py:495: self._cachetool_process.stdin.write(inst_code)
On 2016/07/01 14:10:25, gabadie wrote:
> Good catch, I always forget that Python buffers stuff in subprocess pipes.
> Managed it without a thread by using old-fashioned pipes.

Don't system pipes have a finite buffer as well?
https://codereview.chromium.org/2112013003/diff/1/tools/android/loading/chrom...
File tools/android/loading/chrome_cache.py (right):

https://codereview.chromium.org/2112013003/diff/1/tools/android/loading/chrom...
tools/android/loading/chrome_cache.py:414: self.Stop(force_stop=exc_type == CacheBackendError)
On 2016/07/01 14:30:21, mattcary wrote:
> I think prefixing the variables with an underscore is enough to make pylint
> happy. That's a little more clear than the del's, I think.

The del ... # unused. is a common pattern in chromium:
https://cs.chromium.org/search/?q=file:.py$+del.*unused&sq=package:chromium&t...

https://codereview.chromium.org/2112013003/diff/1/tools/android/loading/chrom...
tools/android/loading/chrome_cache.py:495: self._cachetool_process.stdin.write(inst_code)
On 2016/07/01 14:30:21, mattcary wrote:
> Don't system pipes have a finite buffer as well?

Of course they do, but using os.write() lets me know how many bytes were actually written to the pipe, and _UnpackResult() is the only place where I do an os.read(), which is non-blocking; so while waiting for all the data I want to read, I keep pushing the data I was not able to push before.
On 2016/07/01 14:35:39, gabadie wrote:
> Of course they do, but using os.write() lets me know how many bytes were
> actually written to the pipe ...

Discussed offline: the file descriptors need to be set as non-blocking first for os.read and os.write to behave that way. Good catch! Uploaded the new revision addressing this issue.
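For reference, a sketch of the non-blocking pipe mechanics this revision relies on, assuming a POSIX host and Python 2; `cat` stands in for the long-lived cachetool process and the instruction strings are placeholders, so this is the general pattern rather than the CL's code:

    import errno
    import fcntl
    import os
    import subprocess

    def SetNonBlocking(fd):
      """Puts |fd| in non-blocking mode so os.read/os.write never stall."""
      flags = fcntl.fcntl(fd, fcntl.F_GETFL)
      fcntl.fcntl(fd, fcntl.F_SETFL, flags | os.O_NONBLOCK)

    # `cat` stands in for a long-lived child process such as cachetool.
    process = subprocess.Popen(
        ['cat'], stdin=subprocess.PIPE, stdout=subprocess.PIPE)
    SetNonBlocking(process.stdin.fileno())
    SetNonBlocking(process.stdout.fileno())

    pending = 'list_keys\n' * 1000  # Placeholder instructions queued for the child.
    response = ''
    while pending:
      try:
        # os.write() returns how many bytes actually fit into the pipe.
        written = os.write(process.stdin.fileno(), pending)
        pending = pending[written:]
      except OSError as error:
        if error.errno != errno.EAGAIN:  # EAGAIN: the pipe is full right now.
          raise
      try:
        # Drain the child's stdout so it never blocks on a full pipe either.
        response += os.read(process.stdout.fileno(), 4096)
      except OSError as error:
        if error.errno != errno.EAGAIN:  # EAGAIN: nothing to read yet.
          raise
      # A real implementation would select() on the fds instead of spinning.

    process.stdin.close()
    # (Draining the remaining output and waiting for the child is omitted here.)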
lgtm

Thanks for bearing with the unix file minutiae.

https://codereview.chromium.org/2112013003/diff/1/tools/android/loading/chrom...
File tools/android/loading/chrome_cache.py (right):

https://codereview.chromium.org/2112013003/diff/1/tools/android/loading/chrom...
tools/android/loading/chrome_cache.py:414: self.Stop(force_stop=exc_type == CacheBackendError)
On 2016/07/01 14:35:39, gabadie wrote:
> The del ... # unused. is a common pattern in chromium:
> https://cs.chromium.org/search/?q=file:.py$+del.*unused&sq=package:chromium&t...

Fair enough, thanks.

https://codereview.chromium.org/2112013003/diff/60001/tools/android/loading/c...
File tools/android/loading/chrome_cache.py (right):

https://codereview.chromium.org/2112013003/diff/60001/tools/android/loading/c...
tools/android/loading/chrome_cache.py:603: if error.errno != errno.EAGAIN:
Add a comment that EAGAIN is what we get when a read would have blocked, as that may not be obvious.
gabadie@chromium.org changed reviewers: + pasko@chromium.org
Thanks Matt for the review! Adding Egor.

https://codereview.chromium.org/2112013003/diff/60001/tools/android/loading/c...
File tools/android/loading/chrome_cache.py (right):

https://codereview.chromium.org/2112013003/diff/60001/tools/android/loading/c...
tools/android/loading/chrome_cache.py:603: if error.errno != errno.EAGAIN:
On 2016/07/01 20:24:24, mattcary wrote:
> Add a comment that EAGAIN is what we get when a read would have blocked, as
> that may not be obvious.

Done.
Without looking into the overall patch, see below for a proposal to avoid fcntl
and os.{read,write}. If that is not possible, please explain.
https://codereview.chromium.org/2112013003/diff/60001/tools/android/loading/c...
File tools/android/loading/chrome_cache.py (right):
https://codereview.chromium.org/2112013003/diff/60001/tools/android/loading/c...
tools/android/loading/chrome_cache.py:603: if error.errno != errno.EAGAIN:
checking for EAGAIN and not for EWOULDBLOCK? That's not portable as per read(2).
How does it work on MacOS?
os.read and os.write are low-level APIs that are traditionally error-prone, as
programmers tend to forget the extra EINTR, EAGAIN and EWOULDBLOCK cases.
Also, the portability of fcntl heavily depends on its parameters; using it makes
it too easy to mess up and make something unportable.
Can you use a higher-level API like Popen.communicate()? If this is less optimal,
I would be eager to know how much less optimal it is.
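For reference, a small sketch of the portable would-block check raised above, assuming a file descriptor that has already been set non-blocking; read(2) may report the condition as either EAGAIN or EWOULDBLOCK (equal on Linux, but not necessarily elsewhere):

    import errno
    import os

    def _ReadAvailable(fd, size=4096):
      """Returns bytes currently available on non-blocking |fd|, or None."""
      try:
        return os.read(fd, size)
      except OSError as error:
        # Accept both spellings of "would block" for portability.
        if error.errno in (errno.EAGAIN, errno.EWOULDBLOCK):
          return None  # Nothing to read right now; the caller should retry later.
        raise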
https://codereview.chromium.org/2112013003/diff/1/tools/android/loading/chrom...
File tools/android/loading/chrome_cache.py (right):

https://codereview.chromium.org/2112013003/diff/1/tools/android/loading/chrom...
tools/android/loading/chrome_cache.py:495: self._cachetool_process.stdin.write(inst_code)
On 2016/07/01 14:35:39, gabadie wrote:
> Of course they do, but using os.write() lets me know how many bytes were
> actually written to the pipe ...

Just noticed this discussion. Can you explain more why this would not work:
1. fill up all requests to cachetool
2. Popen.communicate() it and get a response
3. repeat if necessary

Will that launch cachetool too many times?
> will that launch cachetool too many times?

Yes, that's exactly the issue. All of the speedup is due to only launching cachetool once, so we need to keep the process up.

In the past I've dealt with the deadlock issue by spawning a separate read thread to constantly empty out the pipe so the child process won't block & deadlock. Guillaume suggested the non-blocking read instead; it seems to work okay. The logic is easier to follow, but one does have to use the low-level read.

I thought that Windows and Mac Python reads had a layer that was POSIX-compatible, so we'd get the same exception in the would-have-blocked case. But I agree the cross-platform stuff is hard.
On Jul 5, 2016 09:43, "Matthew Cary" <mattcary@google.com> wrote:
> Yes, that's exactly the issue. All of the speedup is due to only launching
> cachetool once, so we need to keep the process up.

how many starts per url are there?

my estimation: 2, once to read all headers, once to make all changes in the cache. How much slower is it than the current proposal?
As it says in the CL description, 3 seconds vs 491 seconds. I think it's starting more than twice :O

On Tue, Jul 5, 2016 at 10:27 AM, Egor Pasko <pasko@google.com> wrote:
> how many starts per url are there?
>
> my estimation: 2, once to read all headers, once to make all changes in
> the cache. How much slower is it than the current proposal?
making it start twice is more maintainable

On Jul 5, 2016 10:30, "Matthew Cary" <mattcary@google.com> wrote:
> As it says in the CL description, 3 seconds vs 491 seconds. I think it's
> starting more than twice :O
OK. I'll defer to Guillaume on that.

On Tue, Jul 5, 2016 at 10:36 AM, Egor Pasko <pasko@google.com> wrote:
> making it start twice is more maintainable
On 2016/07/05 08:36:42, chromium-reviews wrote:
> making it start twice is more maintainable

I am confused, why twice?
On 2016/07/06 09:26:48, gabadie wrote:
> I am confused, why twice?

That was my wild guess: once to read all headers, and a second time to perform all mutations on the cache.
On 2016/07/06 15:03:43, pasko wrote:
> That was my wild guess: once to read all headers, and a second time to perform
> all mutations on the cache.

Same speed using Popen.communicate(), with one cachetool launch. However, functions such as ListKeys() and GetSize() launch a new cachetool each time, which is worse than before. PTAL.
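For reference, a minimal sketch of the batch-mode flow the patch settled on, using a hypothetical simplified backend class, an assumed cachetool command line, and placeholder instruction names; the point is that every queued cache operation goes through a single cachetool launch via Popen.communicate(), which handles the pipe buffering safely:

    import subprocess

    class BatchCacheBackendSketch(object):  # hypothetical, simplified
      def __init__(self, cache_path):
        self._cache_path = cache_path
        self._enqueued_instructions = []

      def Enqueue(self, *instruction):
        """Queues one cachetool instruction, e.g. ('delete_stream', url)."""
        self._enqueued_instructions.append(' '.join(instruction))

      def Flush(self):
        """Runs one cachetool process over every queued instruction."""
        process = subprocess.Popen(
            ['cachetool', self._cache_path, 'simple', 'batch'],  # assumed CLI
            stdin=subprocess.PIPE, stdout=subprocess.PIPE)
        stdin_data = '\n'.join(self._enqueued_instructions) + '\n'
        stdout_data, _ = process.communicate(stdin_data)  # No deadlock risk.
        assert process.returncode == 0
        self._enqueued_instructions = []
        return stdout_data.splitlines()

    # Usage: many mutations, one cachetool launch.
    backend = BatchCacheBackendSketch('/tmp/cache')
    backend.Enqueue('delete_stream', 'http://cnn.com/logo.png')
    backend.Enqueue('update_raw_headers', 'http://cnn.com/')
    results = backend.Flush()

This avoids the low-level non-blocking pipe handling entirely, at the cost noted above: one-off queries such as ListKeys() or GetSize() still need their own cachetool launch.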
Description was changed from

==========
sandwich: Use cachetool's online mode to speed-up cache processing.

Before Sandwich was forking a cachetool process for all modification
of the cache directory but was slow.

This CL fixes issue by using cachetool brand new online mode that
process cnn.com's home page's cache archive under 3s versus 491s
originally.

BUG=582080
==========

to

==========
sandwich: Use cachetool's batch mode to speed-up cache processing.

Before Sandwich was forking a cachetool process for all modification
of the cache directory but was slow.

This CL fixes issue by using cachetool brand new batch mode that
process cnn.com's home page's cache archive under 3s versus 491s
originally.

BUG=582080
==========
