Created: 7 years, 1 month ago by eustas
Modified: 6 years, 11 months ago
CC: chromium-reviews
Base URL: svn://svn.chromium.org/chrome/trunk/src
Visibility: Public.
Description:
Make output concise by default.
BUG=
Committed: https://src.chromium.org/viewvc/chrome?view=rev&revision=244137
Patch Set 1 #
Patch Set 2 : Further cleanup console output #
Total comments: 2
Patch Set 3 : Addressed comments #
Patch Set 4 : verbose option #
Patch Set 5 : #
Patch Set 6 : Cleanup #
Messages
Total messages: 17 (0 generated)
This isn't enough to parallelize this process because there are png files with the same names in different dirs. I instead added the -r option to reduce the time, so execution time is no longer an issue (at least for me).
On 2013/11/15 22:02:18, oshima wrote:
> This isn't enough to parallelize this process because there are png
> files with the same names in different dirs. I instead added the -r option
> to reduce the time, so execution time is no longer an issue (at least for me).

All processing is done in $TMP_DIR, which is created by mktemp, so there will be no name clashes.
I agree that -r is very useful, but it is not very convenient to use with WebKit.
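For reference, a minimal sketch of the mktemp isolation being described; the process_file helper and the cleanup trap are illustrative, not taken verbatim from optimize-png-files.sh:

  # Every invocation of the script gets its own private work dir from mktemp,
  # so two parallel invocations cannot clash even on identically named pngs.
  TMP_DIR=$(mktemp -d)
  trap 'rm -rf "$TMP_DIR"' EXIT

  process_file() {
    local file=$1
    local work="$TMP_DIR/$(basename "$file")"
    cp "$file" "$work"
    # ... run the optimizers on "$work", copy it back only if it got smaller ...
  }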
The -s option helps a lot when all files should be processed, e.g. if the compression pipeline has been improved.
LGTM with a nit, but please wait for oshima's re-review.

https://codereview.chromium.org/60733028/diff/40001/tools/resources/optimize-...
File tools/resources/optimize-png-files.sh (right):

https://codereview.chromium.org/60733028/diff/40001/tools/resources/optimize-...
tools/resources/optimize-png-files.sh:93: echo "-d0"
nit: s/echo/info/? Should all remaining 'echo' usages be changed to 'info'?
https://codereview.chromium.org/60733028/diff/40001/tools/resources/optimize-...
File tools/resources/optimize-png-files.sh (right):

https://codereview.chromium.org/60733028/diff/40001/tools/resources/optimize-...
tools/resources/optimize-png-files.sh:93: echo "-d0"
On 2013/11/27 17:14:16, msw wrote:
> nit: s/echo/info/? Should all remaining 'echo' usages be changed to 'info'?

Here echo is used to pass the return value. This function is always invoked as $(get_color_depth_list), so its output will never be printed out.
I've replaced echo with info where the output is non-essential, so when multiple instances work in parallel the output stays clean (one line per processed file).
There are some "gray" areas - error reporting. Currently I left them untouched, so the user will see if something goes wrong.
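For context, a hedged sketch of the distinction being made here: echo is the function's "return value" (always captured by the caller), while info only talks to the console. The QUIET guard and the exact depth values are assumptions, not copied from the script:

  # Console-only diagnostics; never captured by callers.
  info() {
    if [ "$QUIET" -eq 0 ]; then
      echo "$@"
    fi
  }

  # "Returns" its result via stdout, so it has to use echo; the caller always
  # consumes it with command substitution, so nothing reaches the console.
  get_color_depth_list() {
    if [ "$OPTIMIZE_LEVEL" -eq 1 ]; then
      echo "-d0"
    else
      echo "-d1 -d2 -d4 -d8"
    fi
  }

  depth_list=$(get_color_depth_list)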
If this is just to spawn a command, I'd rather redirect output to a file instead of ignoring it. We also include the result in the CL to show how much it helped.

Can you instead do this?

optimize-png-files -p a/b/c x/y/z

spawns the process for a/b/c and x/y/z like this (-q suppresses the throbber):

optimize-png-files -q a/b/c > /tmp/a_b_c_summary.log
optimize-png-files -q x/y/z > /tmp/x_y_z_summary.log

(let errors go to stdout so that people can notice them if any)

then wait on the jobs and print the summary for each directory when they finish.
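A rough sketch of the per-directory fan-out proposed above, assuming the -q flag works as described (log file naming is illustrative):

  #!/bin/bash
  # Spawn one background job per directory; each job writes its summary to its
  # own log file, while errors still reach the terminal.
  for dir in "$@"; do
    log="/tmp/$(echo "$dir" | tr '/' '_')_summary.log"
    ./tools/resources/optimize-png-files.sh -q "$dir" > "$log" &
  done

  # Wait for every job, then print the per-directory summaries in order.
  wait
  for dir in "$@"; do
    echo "=== $dir ==="
    cat "/tmp/$(echo "$dir" | tr '/' '_')_summary.log"
  done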
On 2013/12/03 01:42:10, oshima wrote:
> Can you instead do this?
>
> optimize-png-files -p a/b/c x/y/z
>
> spawns the process for a/b/c and x/y/z like this (-q suppresses the throbber)

It was the first approach I tried.
Unfortunately, I faced 2 problems:
1) under-parallelism (if we spawn a process per directory) or over-parallelism (if we spawn a process per file)
2) bad scheduling. We should process large files first, otherwise we will wait for them a long time...

There is no "executors" service in shell script, so we have to take care of scheduling ourselves. (Hmmm... maybe rewrite the script in Python ;-)

My current workflow is:
1) collect the list of png files
2) sort them by size (largest first)
3) xargs the list to optimize-png-files (see the sketch below)
4) use git to collect the new file sizes (maybe using the list from (2) would be better)

I've reduced the output to a single line, so for each processed file I can see the results in the console.
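A hedged sketch of that four-step workflow, assuming optimize-png-files.sh accepts individual png paths as arguments; the directory list, the parallelism level, and the GNU find/du/xargs options are assumptions:

  #!/bin/bash
  # 1) collect png files, 2) sort them largest-first so the big ones start
  # early, 3) hand them to optimize-png-files.sh with several parallel workers.
  find ash chrome ui -name '*.png' -print0 \
    | xargs -0 du -b \
    | sort -rn \
    | cut -f2- \
    | xargs -d '\n' -n 1 -P 8 ./tools/resources/optimize-png-files.sh

  # 4) afterwards, git shows how much each file shrank.
  git diff --stat -- '*.png'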
On 2013/12/03 14:58:40, eustas wrote:
> On 2013/12/03 01:42:10, oshima wrote:
> > Can you instead do this?
> >
> > optimize-png-files -p a/b/c x/y/z
> >
> > spawns the process for a/b/c and x/y/z like this (-q suppresses the throbber)
>
> It was the first approach I tried.
> Unfortunately, I faced 2 problems:
> 1) under-parallelism (if we spawn a process per directory) or
> over-parallelism (if we spawn a process per file)
> 2) bad scheduling. We should process large files first, otherwise
> we will wait for them a long time...
>
> There is no "executors" service in shell script, so we have to take
> care of scheduling ourselves. (Hmmm... maybe rewrite the script in Python ;-)
>
> My current workflow is:
> 1) collect the list of png files
> 2) sort them by size (largest first)
> 3) xargs the list to optimize-png-files
> 4) use git to collect the new file sizes (maybe using the list from (2) would be
> better)

Which files in the repository are you trying to optimize?
I really think you should optimize only the files that are necessary.
It's much faster than processing everything using multiple processes.
(I do run the script in parallel, but by directory, and that is good enough for me.)

> I've reduced the output to a single line, so for each processed file I can see
> the results in the console.
Hello.

I'm trying to optimize "all" files (in ALL_DIRS, defined in optimize-png-files.sh), because I'm experimenting with different optimizations.

I agree that optimizing only the necessary files is far better than parallelizing.
But from the user's point of view, optimization should complete as fast as possible.
("Why does optimization take so long while CPU load is at 1.5%?")

Eugene Klyuchnikov | SW Engineer | eustas@google.com | Fight fire with fire!

On Fri, Dec 6, 2013 at 1:05 AM, <oshima@chromium.org> wrote:
> Which files in the repository are you trying to optimize?
> I really think you should optimize only the files that are necessary.
> It's much faster than processing everything using multiple processes.
> (I do run the script in parallel, but by directory, and that is good enough for me.)
On 2013/12/12 13:09:54, eustas.com wrote:
> Hello.
>
> I'm trying to optimize "all" files (in ALL_DIRS, defined in
> optimize-png-files.sh), because I'm experimenting with different
> optimizations.
>
> I agree that optimizing only the necessary files is far better than
> parallelizing.
> But from the user's point of view, optimization should complete as fast as
> possible.

I'm running this for all files for each branch, and I'm not expecting anyone to run this for ALL_DIRS. Contributors can run this when they land png files, but that's optional. (Doesn't mean you shouldn't run this, though.)

How about just adding a flag to suppress the throbber messages? We could then just redirect the final results, and that'd be useful for me too.

> ("Why does optimization take so long while CPU load is at 1.5%?")
OK.

Only the throbber is suppressed by the -s option.

PTAL
On 2013/12/19 12:38:39, eustas wrote:
> OK.
>
> Only the throbber is suppressed by the -s option.
>
> PTAL

Sorry, I'm late (have been OOO). I'll look at it this week.
I was probably not clear. I uploaded what I had in mind. It's basically the same as your original CL, except that I renamed the option to -v (verbose) and disabled showing progress by default, while still printing the results. If you're ok with this, then lgtm. Please update the CL description to match the new behavior.
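For readers of the final patch, a minimal sketch of the -v behavior described above, assuming an info helper gated on a VERBOSE flag; the variable and helper names are illustrative, not necessarily those in the committed script:

  VERBOSE=0

  # Progress output (including the throbber) only appears with -v.
  info() {
    if [ "$VERBOSE" -eq 1 ]; then
      echo "$@"
    fi
  }

  while getopts "v" opt; do
    case "$opt" in
      v) VERBOSE=1 ;;
    esac
  done

  info "processing foo.png ..."       # silent by default
  echo "Total saved: 1234 bytes"      # results are always printed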
CQ is trying da patch. Follow status at https://chromium-status.appspot.com/cq/eustas@chromium.org/60733028/280001
Message was sent while issue was closed.
Change committed as 244137