Chromium Code Reviews

Issue 2610463002: Use TaskScheduler instead of WorkerPool in nss_util.cc. (Closed)

Created:
3 years, 11 months ago by fdoray
Modified:
3 years, 10 months ago
Reviewers:
gab, Ryan Sleevi
CC:
chromium-reviews, davidben
Target Ref:
refs/pending/heads/master
Project:
chromium
Visibility:
Public.

Description

Use TaskScheduler instead of WorkerPool in nss_util.cc.

The following traits are used:

Priority: Inherited (default)
The priority is inherited from the calling context (i.e. TaskTraits are
initialized with the priority of the current task).

Shutdown behavior: CONTINUE_ON_SHUTDOWN
Tasks posted with this mode which have not started executing before shutdown is
initiated will never run. Tasks with this mode running at shutdown will be
ignored (the worker will not be joined).
Note: Tasks that were previously posted to base::WorkerPool should use this
shutdown behavior because this is how base::WorkerPool handles all its tasks.

MayBlock():
Tasks posted with MayBlock() may block. This includes but is not limited to
tasks that wait on synchronous file I/O operations: read or write a file from
disk, interact with a pipe or a socket, rename or delete a file, enumerate
files in a directory, etc. This trait isn't required for the mere use of locks.

BUG=659191

Patch Set 1 #

Total comments: 1
Stats: +19 lines, -25 lines
M crypto/nss_util.cc (4 chunks, +19 lines, -25 lines, 1 comment)

Messages

Total messages: 24 (9 generated)
fdoray
PTAL. WorkerPool is being deprecated in favor of TaskScheduler.
3 years, 11 months ago (2017-01-01 19:24:05 UTC) #6
davidben
+rsleevi is probably a better person to review this.
3 years, 11 months ago (2017-01-03 13:12:52 UTC) #8
Ryan Sleevi
This is one of those changes that Gab and I had discussed as benefiting from/needing ...
3 years, 11 months ago (2017-01-03 19:03:47 UTC) #9
fdoray
On 2017/01/03 19:03:47, Ryan Sleevi wrote: > This is one of those changes that Gab ...
3 years, 11 months ago (2017-01-04 16:30:22 UTC) #10
gab
On 2017/01/04 16:30:22, fdoray wrote: > On 2017/01/03 19:03:47, Ryan Sleevi wrote: > > This ...
3 years, 11 months ago (2017-01-11 16:13:57 UTC) #13
Ryan Sleevi
On 2017/01/11 16:13:57, gab wrote: > Right, @rsleevi: this is different from all the work ...
3 years, 11 months ago (2017-01-11 18:17:44 UTC) #14
gab
On 2017/01/11 18:17:44, Ryan Sleevi wrote: > On 2017/01/11 16:13:57, gab wrote: > > Right, ...
3 years, 11 months ago (2017-01-11 18:57:38 UTC) #15
Ryan Sleevi
On 2017/01/11 18:57:38, gab wrote: > When using base::PostTask, the work is posted in parallel, ...
3 years, 11 months ago (2017-01-11 20:29:42 UTC) #16
gab
On 2017/01/11 20:29:42, Ryan Sleevi wrote: > On 2017/01/11 18:57:38, gab wrote: > > When ...
3 years, 11 months ago (2017-01-11 21:29:10 UTC) #17
Ryan Sleevi
On 2017/01/11 21:29:10, gab wrote: > AFAIK, waiting on another thread is an anti-pattern in ...
3 years, 11 months ago (2017-01-11 21:36:35 UTC) #18
gab
On 2017/01/11 21:36:35, Ryan Sleevi wrote: > On 2017/01/11 21:29:10, gab wrote: > > AFAIK, ...
3 years, 11 months ago (2017-01-11 21:55:05 UTC) #19
Ryan Sleevi
On 2017/01/11 21:55:05, gab wrote: > I'm not sure what you mean by "this" and ...
3 years, 11 months ago (2017-01-11 22:13:33 UTC) #20
gab
Thanks for the extra context. The issues with base::Thread you mention are no more: non-joinable ...
3 years, 11 months ago (2017-01-12 15:44:50 UTC) #21
Ryan Sleevi
On 2017/01/12 15:44:50, gab wrote: > But I'm still unsure why your task must run ...
3 years, 11 months ago (2017-01-12 19:11:46 UTC) #23
gab
3 years, 11 months ago (2017-01-13 01:18:27 UTC) #24
On 2017/01/12 19:11:46, Ryan Sleevi wrote:
> On 2017/01/12 15:44:50, gab wrote:
> > But I'm still unsure why your task must run right away. I understand the
> > reentrancy problem but (1) you only ever post one of these tasks at a time
> > (as highlighted in comment below) and (2) the reentrancy problem is only
> > present when the task is running, not while it's in the queue, so why is it
> > more important to execute it right away (with respect to all other Chromium
> > tasks)?
> 
> Ah, perhaps the disconnect is I'm discussing any NSS-using functions being
> dispatched to the worker pool if they cause the issue. That is, intentionally,
> rather than making a giant omnibus object that tried to manage all NSS
> interactions, individual places that run into this issue use the
> base::WorkerPool. That is, we treat it like an OS library that should be
> 'understood' (for some value of understood), as it is part of the LSB, and
> instead treat it at the callsites.
> 
> Put differently, the task here is not just sequenced with initializing the
> TPM - it's sequenced with all other NSS tasks that need to be sequenced like
> this.
> 
> > As for "participating in Lucky Luke prioritization and reordering" I'm not
> > sure what you mean, but running on any thread is participating in some kind
> > of prioritization (i.e. Lucky Luke or the kernel's) and reordering-wise
> > you're posting in parallel to WorkerPool so you have no ordering guarantees
> > already..?
> 
> It seems my attempts to simplify the explanation (since I think we're largely
> just not on the same page yet) may be adding to the conflict. Here again, the
> concept I was trying to communicate is "We need this task to run, and never be
> sequenced against/waiting on any of the other NSS tasks that have been
> posted".
> 
> As I understand Lucky Luke, independent sequences may be reordered or
> deprioritized, while tasks within a sequence may block on an earlier task
> completing. Both are behaviours we don't want here.

Ok, then I guess that, as much as I think we both dislike the current design,
the only solution is to move WorkerPool (POSIX only) to //net once we're done
migrating all the other callsites, to keep the status quo for NSS tasks.

We'll be happy to revisit this design with you later if you wish. For example,
we could provide a TaskTraits option that guarantees a new thread for a task if
all worker threads are currently busy (which is seldom the case, so in the vast
majority of cases you'd still avoid creating new threads). We're not tempted to
provide that just yet, though, for fear of abuse while the API is brand new.

Thanks for your time going through this complicated issue.

> 
> > If your task is crucial to respond to user queries then you mark it with
> > TaskPriority::USER_BLOCKING and it will run shortly (most likely sooner
> > than if you created a new thread in the vast majority of cases).
> 
> I'm afraid I'm not communicating well, because that's both not the issue here
> and still has the major downsides I've been talking about.
