Created: 6 years, 8 months ago by hidehiko
Modified: 6 years, 8 months ago
Reviewers: Mark Seaborn, dmichael (off chromium), dmichael (do not use this one), jln (very slow on Chromium)
CC: chromium-reviews, hamaji, mazda, Junichi Uekawa, teravest, jam
Base URL: svn://svn.chromium.org/chrome/trunk/src
Visibility: Public.
Description

Add IPC Channel for new ManifestService.
This CL adds a new IPC channel between the NaCl plugin and the renderer process,
introducing ManifestService (in the plugin) and ManifestServiceChannel
(in the renderer) as its endpoints.
Currently, ManifestService is just an empty service; its functions will be
added in follow-up CLs. As a first step, the service will be used only in
Non-SFI mode. On other platforms, the IPC channel will not be created.
TEST=Ran trybots.
BUG=358431
Committed: https://src.chromium.org/viewvc/chrome?view=rev&revision=264477
Patch Set 1 : #
Total comments: 13
Patch Set 2 : #
Total comments: 4
Patch Set 3 : #
Total comments: 2
Patch Set 4 : Rebase #
Patch Set 5 : #
Patch Set 6 : #
Patch Set 7 : #
Messages
Total messages: 33 (0 generated)
Hi Mark, Dave, thank you in advance for the review. - hidehiko
Julien, could you review nacl_messages.h and nacl_host_messages.h? Thanks, - hidehiko
https://codereview.chromium.org/231793003/diff/20001/components/nacl/renderer... File components/nacl/renderer/ppb_nacl_private_impl.cc (right): https://codereview.chromium.org/231793003/diff/20001/components/nacl/renderer... components/nacl/renderer/ppb_nacl_private_impl.cc:108: struct LaunchSelLdrCompletionCallbackData : A brief description of what the class does might be useful. https://codereview.chromium.org/231793003/diff/20001/components/nacl/renderer... components/nacl/renderer/ppb_nacl_private_impl.cc:111: LaunchSelLdrCompletionCallbackData(int num_expect_call) nit: explicit https://codereview.chromium.org/231793003/diff/20001/components/nacl/renderer... components/nacl/renderer/ppb_nacl_private_impl.cc:115: int num_remaining_call; nit: call->calls https://codereview.chromium.org/231793003/diff/20001/components/nacl/renderer... components/nacl/renderer/ppb_nacl_private_impl.cc:124: void LaunchSelLdrCompletionCallback( suggestion: It might simplify things a tiny bit to make this a member on the class. It also might be good to name this function more specifically... it's called in response to a channel being connected, so maybe ChannelConnectedCallback https://codereview.chromium.org/231793003/diff/20001/ppapi/nacl_irt/embedder_... File ppapi/nacl_irt/embedder_service.cc (right): https://codereview.chromium.org/231793003/diff/20001/ppapi/nacl_irt/embedder_... ppapi/nacl_irt/embedder_service.cc:19: io_message_loop, It looks like you're creating the SyncChannel on the IO thread. That means its base (ChannelProxy) will have the IO thread for its "listener" thread *and* its "IPC" thread. That sounds wrong... I would expect that sync messages you send on this channel will deadlock.
Thank you for review. PTAL? https://codereview.chromium.org/231793003/diff/20001/components/nacl/renderer... File components/nacl/renderer/ppb_nacl_private_impl.cc (right): https://codereview.chromium.org/231793003/diff/20001/components/nacl/renderer... components/nacl/renderer/ppb_nacl_private_impl.cc:108: struct LaunchSelLdrCompletionCallbackData : On 2014/04/10 18:06:23, dmichael wrote: > A brief description of what the class does might be useful. Done. https://codereview.chromium.org/231793003/diff/20001/components/nacl/renderer... components/nacl/renderer/ppb_nacl_private_impl.cc:111: LaunchSelLdrCompletionCallbackData(int num_expect_call) On 2014/04/10 18:06:23, dmichael wrote: > nit: explicit Acknowledged. https://codereview.chromium.org/231793003/diff/20001/components/nacl/renderer... components/nacl/renderer/ppb_nacl_private_impl.cc:115: int num_remaining_call; On 2014/04/10 18:06:23, dmichael wrote: > nit: call->calls Done. https://codereview.chromium.org/231793003/diff/20001/components/nacl/renderer... components/nacl/renderer/ppb_nacl_private_impl.cc:124: void LaunchSelLdrCompletionCallback( On 2014/04/10 18:06:23, dmichael wrote: > suggestion: It might simplify things a tiny bit to make this a member on the > class. > > It also might be good to name this function more specifically... it's called in > response to a channel being connected, so maybe ChannelConnectedCallback Good idea. How about this? https://codereview.chromium.org/231793003/diff/20001/ppapi/nacl_irt/embedder_... File ppapi/nacl_irt/embedder_service.cc (right): https://codereview.chromium.org/231793003/diff/20001/ppapi/nacl_irt/embedder_... ppapi/nacl_irt/embedder_service.cc:19: io_message_loop, On 2014/04/10 18:06:23, dmichael wrote: > It looks like you're creating the SyncChannel on the IO thread. That means its > base (ChannelProxy) will have the IO thread for its "listener" thread *and* its > "IPC" thread. That sounds wrong... I would expect that sync messages you send > on this channel will deadlock. My understanding is that the IPC thread and the listener thread can be shared. In particular, in this case we have a NULL listener, so almost nothing happens on the "listener" thread. As for sync messages, a SyncMessage should be sent from a thread other than the "NaCl IO thread" via filter_. The filter_ listens for the reply on the "IPC" thread, and once it arrives, the filter_ unblocks the caller thread without going through the "listener" thread (again, the "IPC" and "listener" threads are shared in this case). So, this should be safe?
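For reference, the wiring being discussed looks roughly like the sketch below. This is not the CL's actual code: shutdown_event and SomeSyncMsg are placeholders, and the exact channel construction varies across Chromium versions.

  // The channel uses the NaCl IO thread as its IPC thread and has a NULL
  // listener. All synchronous traffic goes through a SyncMessageFilter.
  scoped_refptr<IPC::SyncMessageFilter> filter_ =
      new IPC::SyncMessageFilter(shutdown_event);  // shutdown_event: assumed
  channel_->AddFilter(filter_.get());  // channel_ is the ChannelProxy/SyncChannel

  // From any thread *other than* the IO thread: Send() blocks the caller,
  // the filter matches the reply on the IPC (IO) thread, and the caller is
  // unblocked without the listener thread ever being involved.
  bool ok = filter_->Send(new SomeSyncMsg(request, &reply));  // SomeSyncMsg: hypothetical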
Unfortunately no-one in the security team is familiar with the NaCl IPC. I'll let someone who knows this area better review this and I can do a sanity check and approve afterwards. If I can help by looking at something in particular, please let me know.
https://codereview.chromium.org/231793003/diff/20001/ppapi/nacl_irt/embedder_... File ppapi/nacl_irt/embedder_service.cc (right): https://codereview.chromium.org/231793003/diff/20001/ppapi/nacl_irt/embedder_... ppapi/nacl_irt/embedder_service.cc:19: io_message_loop, On 2014/04/10 19:02:08, hidehiko wrote: > On 2014/04/10 18:06:23, dmichael wrote: > > It looks like you're creating the SyncChannel on the IO thread. That means its > > base (ChannelProxy) will have the IO thread for its "listener" thread *and* > its > > "IPC" thread. That sounds wrong... I would expect that sync messages you send > > on this channel will deadlock. > > My understanding is the IPC thread and listenter thread can be shared. I don't think we ever do that today, so it depends on what you mean. I made an experimental CL to DCHECK if you pass the same thread for both to see if we ever do it today: https://codereview.chromium.org/234253002/ ...my intuition is that it was never considered when making that class that those 2 threads might be the same, and it's my guess that nobody currently does this. > Especially in this case, we have NULL listener so, it performs almost nothing as > "listener" thread. If: 1) You *never* send from the listener thread and 2) |listener| is forever NULL ...then I *think* it will probably work just fine. But I think it's outside of the design consideration of ChannelProxy and SyncChannel, so it's a little risky. We'll see how that DCHECK change fares on the trybots. I think at a minimum we would want to add a DCHECK that either the threads are different, or listener is NULL (or both). > > As for sync message, SyncMessage should be sent from the thread other than "NaCl > IO thread" via filter_. The filter_ listens the reply on "IPC" thread, and once > it is found, the filter_ unblocks the caller thread without passing through > "listener" thread (Again though "IPC" and "listener" thread are shared in this > case). > > So, this should be safe? > https://codereview.chromium.org/231793003/diff/40001/components/nacl/renderer... File components/nacl/renderer/ppb_nacl_private_impl.cc (right): https://codereview.chromium.org/231793003/diff/40001/components/nacl/renderer... components/nacl/renderer/ppb_nacl_private_impl.cc:112: // of expected invocations and callback. When all channels are established, nit: "a" callback here, "the given callback" below. https://codereview.chromium.org/231793003/diff/40001/components/nacl/renderer... components/nacl/renderer/ppb_nacl_private_impl.cc:114: // PP_OK, if all the connections are successfully establishe. Otherwise, nit: no comma after PP_OK, also establishe->established
Thank you for review. Julian, thank you for reply. It sounds good to me. Please let me ask you OWNER review for *_messages.h files, when Dave and Mark give LGTM to this CL. Dave, could you take another look? https://codereview.chromium.org/231793003/diff/20001/ppapi/nacl_irt/embedder_... File ppapi/nacl_irt/embedder_service.cc (right): https://codereview.chromium.org/231793003/diff/20001/ppapi/nacl_irt/embedder_... ppapi/nacl_irt/embedder_service.cc:19: io_message_loop, Thank you for pointing this out. So, I've looked into ChannelProxy and SyncChannel again, and now I found that SyncChannel is designed as "ChannelProxy + sync-message support." So, if the problem is around sync message, the DCHECK should be in the SyncChannel? Indeed, for example, if listener wait the message somehow, the IOThread is locked and something would go wrong, but it sounds the responsibility of the implementer of such a listener, rather than the responsibility of the ChannelProxy's design. In other words, ChannelProxy works fine if listener does not lock the thread, even if the "listener" thread is shared with "IPC" thread (this includes the case if listener == NULL). On the other hand, on SyncChannel, "listener" thread should not be shared with "IPC" thread, because SyncChannel is designed to support sync-message, which means the "listener" thread would be locked intentionally and it would cause deadlock. WDYT? Along with the idea, I've changed this implementation to use ChannelProxy directly, instead of SyncChannel. Could you take a look at it, too? On 2014/04/10 23:14:38, dmichael wrote: > On 2014/04/10 19:02:08, hidehiko wrote: > > On 2014/04/10 18:06:23, dmichael wrote: > > > It looks like you're creating the SyncChannel on the IO thread. That means > its > > > base (ChannelProxy) will have the IO thread for its "listener" thread *and* > > its > > > "IPC" thread. That sounds wrong... I would expect that sync messages you > send > > > on this channel will deadlock. > > > > My understanding is the IPC thread and listenter thread can be shared. > I don't think we ever do that today, so it depends on what you mean. I made an > experimental CL to DCHECK if you pass the same thread for both to see if we ever > do it today: > https://codereview.chromium.org/234253002/ > ...my intuition is that it was never considered when making that class that > those 2 threads might be the same, and it's my guess that nobody currently does > this. > > > Especially in this case, we have NULL listener so, it performs almost nothing > as > > "listener" thread. > If: > 1) You *never* send from the listener thread > and > 2) |listener| is forever NULL > ...then I *think* it will probably work just fine. But I think it's outside of > the design consideration of ChannelProxy and SyncChannel, so it's a little > risky. > > We'll see how that DCHECK change fares on the trybots. I think at a minimum we > would want to add a DCHECK that either the threads are different, or listener is > NULL (or both). > > > > > As for sync message, SyncMessage should be sent from the thread other than > "NaCl > > IO thread" via filter_. The filter_ listens the reply on "IPC" thread, and > once > > it is found, the filter_ unblocks the caller thread without passing through > > "listener" thread (Again though "IPC" and "listener" thread are shared in this > > case). > > > > So, this should be safe? > > > https://codereview.chromium.org/231793003/diff/40001/components/nacl/renderer... 
File components/nacl/renderer/ppb_nacl_private_impl.cc (right): https://codereview.chromium.org/231793003/diff/40001/components/nacl/renderer... components/nacl/renderer/ppb_nacl_private_impl.cc:112: // of expected invocations and callback. When all channels are established, On 2014/04/10 23:14:38, dmichael wrote: > nit: "a" callback here, "the given callback" below. Thanks. Done. https://codereview.chromium.org/231793003/diff/40001/components/nacl/renderer... components/nacl/renderer/ppb_nacl_private_impl.cc:114: // PP_OK, if all the connections are successfully establishe. Otherwise, On 2014/04/10 23:14:38, dmichael wrote: > nit: no comma after PP_OK, also establishe->established Done.
lgtm https://codereview.chromium.org/231793003/diff/20001/ppapi/nacl_irt/embedder_... File ppapi/nacl_irt/embedder_service.cc (right): https://codereview.chromium.org/231793003/diff/20001/ppapi/nacl_irt/embedder_... ppapi/nacl_irt/embedder_service.cc:19: io_message_loop, On 2014/04/11 16:28:30, hidehiko wrote: > Thank you for pointing this out. > > So, I've looked into ChannelProxy and SyncChannel again, and now I found that > SyncChannel is designed as "ChannelProxy + sync-message support." So, if the > problem is around sync message, the DCHECK should be in the SyncChannel? > Indeed, for example, if listener wait the message somehow, the IOThread is > locked and something would go wrong, but it sounds the responsibility of the > implementer of such a listener, rather than the responsibility of the > ChannelProxy's design. > In other words, ChannelProxy works fine if listener does not lock the thread, > even if the "listener" thread is shared with "IPC" thread (this includes the Yes, that's right. I still think the two shouldn't usually be shared, even if it will work. In general, the default shouldn't be for messages to be handled on the IO thread, in case they take too long. We don't want the IO thread to be too busy. We have the "AddFilter" method for cases where you want/need messages to arrive on the IO thread, and that makes it more obvious what's going on. I think if I adjust my DCHECK to allow them to be the same if the Listener is NULL, I think that's strong enough for what I'm concerned about and still is consistent with your usage. (It turns out that NaClProcessHost does the same thing as you; it creates the ChannelProxy on the IO thread, but has a NULL Listener). I'll pursue checking in that CL. > case if listener == NULL). On the other hand, on SyncChannel, "listener" thread > should not be shared with "IPC" thread, because SyncChannel is designed to > support sync-message, which means the "listener" thread would be locked > intentionally and it would cause deadlock. WDYT? Yes, that's the problem I was most concerned about. > > Along with the idea, I've changed this implementation to use ChannelProxy > directly, instead of SyncChannel. Could you take a look at it, too? Interestingly, SyncMessageFilter's documentation implies it is used with a SyncChannel, but I can't think of any technical reason for that. I think your approach sounds reasonable. Doing it that way should guarantee that we never send a Sync message from the IO thread, which is the real danger. It helps that SyncMessageFilter DCHECKs for that: https://code.google.com/p/chromium/codesearch#chromium/src/ipc/ipc_sync_messa...
Sorry, not lgtm until we sort out the thread issue. I brought it up with jam (CCed), and he's of the opinion that we shouldn't ever make a ChannelProxy whose Listener thread is the same as its IO thread. It's also unusual to use ChannelProxy instead of SyncChannel with SyncMessageFilter (although, like I said, it probably does work). jam's suggestion for the existing place where we have a ChannelProxy with Listener thread == IO thread is that we use Channel instead. I think that makes sense there, but not here. Since you need SyncMessageFilter, you definitely need the Filter support provided by ChannelProxy (SyncChannel preferably). Can you create EmbedderService (or at least its SyncChannel) on a different thread? Is the problem that you don't have a thread where we'll pump the message loop?
On 2014/04/11 22:21:52, dmichael wrote: > Sorry, not lgtm until we sort out the thread issue. > > I brought it up with jam (CCed), and he's of the opinion that we shouldn't ever > make a ChannelProxy whose Listener thread is the same as its IO thread. It's > also unusual to use ChannelProxy instead of SyncChannel with SyncMessageFilter > (although like I said it probably does work). > > jam's suggestion for the existing place where we have a ChannelProxy with > Listener thread == IO thread is that we use Channel instead. > > I think that makes sense there, but not here. Since you need SyncMessageFilter, > you definitely need the Filter support provided by ChannelProxy (SyncChannel > preferably). Can you create EmbedderService (or at least its SyncChannel) on a > different thread? Is the problem that you don't have a thread where we'll pump > the message loop? I think I can do this easily, but before moving this forward, I'd like to clarify one thing: is it OK to create a brand new thread (probably with a message pump) that exists only for the (NULL) listener of the EmbedderService's IPC channel? In other words, the thread works only for the creation of the IPC channel, then is suspended and will never wake up afterwards. Also, IIRC, you said you didn't want to introduce a new thread as we already have so many threads, so I just wonder whether this is acceptable to you.
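As a concrete picture of what that would mean, a dedicated listener thread might look roughly like the following. This is a sketch only; the thread name is made up, and this is exactly the extra thread being asked about.

  // A thread whose only purpose is to give the channel a listener thread
  // (with its own MessageLoop) that is different from the NaCl IO thread.
  // After the channel is created, it would sit idle.
  base::Thread manifest_listener_thread("NaClManifestListener");  // name is illustrative
  manifest_listener_thread.Start();  // starts a MessageLoop on the new thread
  // The channel would then be created from a task posted to
  // manifest_listener_thread.message_loop_proxy().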
I'm not keen on the name "EmbedderService". All of the NaCl-related code in the Chromium repo collectively implements an embedding of NaCl. If we call this "EmbedderService", then when we refer to an "embedding of NaCl", I suspect people might say "oh, do you mean this EmbedderService here?". Also, the Non-SFI Mode implementation isn't actually an embedder of NaCl, in the sense that it doesn't use code from native_client (except in some trivial cases). Can we call this something more concrete, like "ManifestService" (and "manifest_renderer_channel_handle"), since this is going to be used for open_resource(), for handling manifest files? https://codereview.chromium.org/231793003/diff/60001/components/nacl/renderer... File components/nacl/renderer/embedder_service_channel.cc (right): https://codereview.chromium.org/231793003/diff/60001/components/nacl/renderer... components/nacl/renderer/embedder_service_channel.cc:33: // TODO(hidehiko): Impelement StartCompleted and OpenResource. "Implement"
Thank you for review, Mark. On 2014/04/14 17:10:47, Mark Seaborn wrote: > I'm not keen on the name "EmbedderService". All of the NaCl-related code in the > Chromium repo collectively implements an embedding of NaCl. If we call this > "EmbedderService", then when we refer to an "embedding of NaCl", I suspect > people might say "oh, do you mean this EmbedderService here?". Also, the > Non-SFI Mode implementation isn't actually an embedder of NaCl, in the sense > that it doesn't use code from native_client (except in some trivial cases). > > Can we call this something more concrete, like "ManifestService" (and > "manifest_renderer_channel_handle"), since this is going to be used for > open_resource(), for handling manifest files? > I see. Just for clarification, the service will handle StartCompleted, too, as I commented in TODO. Does it make sense for you to make StartCompleted a part of ManifestService? Dave, do you agree with the Mark's suggestion? (As I received different suggestions from Mark and Dave, I'll change the name when we made agreement). Dave, also, could you reply to my question in the previous reply? Thanks, - hidehiko > https://codereview.chromium.org/231793003/diff/60001/components/nacl/renderer... > File components/nacl/renderer/embedder_service_channel.cc (right): > > https://codereview.chromium.org/231793003/diff/60001/components/nacl/renderer... > components/nacl/renderer/embedder_service_channel.cc:33: // TODO(hidehiko): > Impelement StartCompleted and OpenResource. > "Implement"
On 14 April 2014 10:16, <hidehiko@chromium.org> wrote: > On 2014/04/14 17:10:47, Mark Seaborn wrote: > >> I'm not keen on the name "EmbedderService". All of the NaCl-related code >> in the > > Chromium repo collectively implements an embedding of NaCl. If we call >> this >> "EmbedderService", then when we refer to an "embedding of NaCl", I suspect >> people might say "oh, do you mean this EmbedderService here?". Also, the >> Non-SFI Mode implementation isn't actually an embedder of NaCl, in the >> sense >> that it doesn't use code from native_client (except in some trivial >> cases). >> > > Can we call this something more concrete, like "ManifestService" (and >> "manifest_renderer_channel_handle"), since this is going to be used for >> open_resource(), for handling manifest files? >> > > > I see. Just for clarification, the service will handle StartCompleted, too, > as I commented in TODO. Does it make sense for you to make StartCompleted > a part > of ManifestService? What is StartCompleted? I can't see any StartCompleted in the codebase currently. This new channel is really an "everything except PPAPI" channel. We could try to find a name to reflect that, e.g. "nonppapi_channel", but naming something after what it's not isn't a great naming scheme. :-) Cheers, Mark To unsubscribe from this group and stop receiving emails from it, send an email to chromium-reviews+unsubscribe@chromium.org.
On 2014/04/14 17:27:26, Mark Seaborn wrote: > On 14 April 2014 10:16, <mailto:hidehiko@chromium.org> wrote: > > > On 2014/04/14 17:10:47, Mark Seaborn wrote: > > > >> I'm not keen on the name "EmbedderService". All of the NaCl-related code > >> in the > > > > Chromium repo collectively implements an embedding of NaCl. If we call > >> this > >> "EmbedderService", then when we refer to an "embedding of NaCl", I suspect > >> people might say "oh, do you mean this EmbedderService here?". Also, the > >> Non-SFI Mode implementation isn't actually an embedder of NaCl, in the > >> sense > >> that it doesn't use code from native_client (except in some trivial > >> cases). > >> > > > > Can we call this something more concrete, like "ManifestService" (and > >> "manifest_renderer_channel_handle"), since this is going to be used for > >> open_resource(), for handling manifest files? > >> > > > > > > I see. Just for clarification, the service will handle StartCompleted, too, > > as I commented in TODO. Does it make sense for you to make StartCompleted > > a part > > of ManifestService? > > > What is StartCompleted? I can't see any StartCompleted in the codebase > currently. It will be the non-SFI version of StartupInitializationComplete, which is a part of the reverse service in SFI mode. It is needed because otherwise the renderer's main thread is blocked by the sync call of GetInterfaces from the renderer to the plugin until the plugin starts PPAPI. As open_resource() needs the renderer's main thread working, this sometimes causes a deadlock (timing issue). > > This new channel is really an "everything except PPAPI" channel. We could > try to find a name to reflect that, e.g. "nonppapi_channel", but naming > something after what it's not isn't a great naming scheme. :-) Hmm... to be honest, I don't have many ideas... Something like "NaClSystemChannel", "SystemMessageChannel", "PrivateMessageChannel", etc.? (Though, all of them sound like they say nothing to me...) Thanks, - hidehiko > > Cheers, > Mark
On 14 April 2014 10:46, <hidehiko@chromium.org> wrote: > On 2014/04/14 17:27:26, Mark Seaborn wrote: > >> On 14 April 2014 10:16, <mailto:hidehiko@chromium.org> wrote: >> > > > On 2014/04/14 17:10:47, Mark Seaborn wrote: >> > >> >> I'm not keen on the name "EmbedderService". All of the NaCl-related >> code >> >> in the >> > >> > Chromium repo collectively implements an embedding of NaCl. If we call >> >> this >> >> "EmbedderService", then when we refer to an "embedding of NaCl", I >> suspect >> >> people might say "oh, do you mean this EmbedderService here?". Also, >> the >> >> Non-SFI Mode implementation isn't actually an embedder of NaCl, in the >> >> sense >> >> that it doesn't use code from native_client (except in some trivial >> >> cases). >> >> >> > >> > Can we call this something more concrete, like "ManifestService" (and >> >> "manifest_renderer_channel_handle"), since this is going to be used >> for >> >> open_resource(), for handling manifest files? >> >> >> > >> > >> > I see. Just for clarification, the service will handle StartCompleted, >> too, >> > as I commented in TODO. Does it make sense for you to make >> StartCompleted >> > a part >> > of ManifestService? >> > > > What is StartCompleted? I can't see any StartCompleted in the codebase >> currently. >> > > It will be the non-SFI version of StartupInitializationComplete, > which is a part of reverse service in SFI mode. > It is needed to support otherwise the renderer's main thread is blocked by > the sync-call of GetInterfaces from the renderer to plugin until the plugin > starts to PPAPI. As open_resource() needs the renderer's main thread > working, > it sometimes causes the deadlock (timing issue). OK. In that case, it sounds like we're only doing the synchronisation in order to make open_resource() work. If the synchronisation is only necessary to support manifest files, there's no inconsistency with doing it on a channel called "manifest channel". > This new channel is really an "everything except PPAPI" channel. We could >> try to find a name to reflect that, e.g. "nonppapi_channel", but naming >> something after what it's not isn't a great naming scheme. :-) >> > > Hmm... to be honest, I don't have much idea... Something like > "NaClSystemChannel", > "SystemMessageChannel", "PrivateMessageChannel", etc.? (Though, all sound > saying > nothing to me... "System" is OK -- it's vague, but better than "embedder" IMHO since it would be less misleading. "Private" is not so good -- all of these channels are private in the sense that the messages are not exposed to user code, so "private" would have the same problem as "embedder". I'd still go with "manifest channel", I think. Cheers, Mark To unsubscribe from this group and stop receiving emails from it, send an email to chromium-reviews+unsubscribe@chromium.org.
I'm fine with ManifestService/ManifestChannel or something. Alternatively, IRTRendererChannel, IRTServiceChannel, or something.
On 2014/04/14 22:03:44, dmichael wrote: > I'm fine with ManifestService/ManifestChannel or something. Alternatively, > IRTRendererChannel, IRTServiceChannel, or something.

Thank you for the comment. I'll use one of them. Mark, if you have a preference among them, please let me know.

As for the thread for SyncChannel creation, we still seem to have some points of disagreement. Let's have the discussion on this thread for the record.

I asked Dave to clarify what he meant, and he told me that he didn't mean creating a brand new thread, but rather introducing a limitation that the startup thread is the one where irt_ppapi_start() is called later. (My understanding is: his key point is to avoid introducing a new thread, as we already have too many threads.) I asked Mark offline, and he told me that such a limitation is not a good idea. We don't have such documentation, and user code can actually call irt_ppapi_start() on a thread different from the one where the entry point is called.

What we want to do should be simple, but the situation sounds a bit complicated. Let me try to summarize what we discussed, and I'd like to find a good agreement.

First, let me note our requirements again:
- The IPC channel works in the background (i.e. on the IO thread).
- Messages need to be sent/received synchronously (i.e. block the calling thread until the reply arrives from the peer process).
That's it. It looks simpler now. For the implementation, we initially chose SyncMessageFilter + ChannelProxy (or SyncChannel). SyncMessageFilter implements what we'd like to do; ChannelProxy (or SyncChannel) is needed to use SyncMessageFilter.

The problem here is that ChannelProxy is also designed to run an IPC::Listener on the thread where the instance is created (let me call it the Listener thread hereafter). For that purpose, a MessageLoop is needed for the Listener thread. My design does not need a Listener, because we'll use the channel for the (non-SFI) open_resource() implementation, and all messages should be handled by the SyncMessageFilter. So, as a workaround for this problem, I set the Listener to NULL and shared the Listener thread with the IO thread, so that ChannelProxy should work (from the implementation point of view, though it is not explicitly documented).

Dave warns against using ChannelProxy this way, because it is not designed to share the IO thread and the Listener thread. Instead, he suggested creating a MessageLoop on the startup thread, introducing a new limitation that the startup thread is the one where irt_ppapi_start() is called later. Mark warns that such a limitation is not a good idea. At least it is not documented now, and users can call irt_ppapi_start() on a thread different from the startup one.

Here we are. Then let's move to the solution. Some possible solutions I have:

[A] Use ChannelProxy, sharing the thread. I still think this technically works. So maybe just adding a comment to ipc_channel_proxy.h may be good enough? For example: // The background thread can be the same as the main thread. (Note that "main thread" appears in the comment of ChannelProxy. Please see also ipc_channel_proxy.h.)

[B] Allow listener_task_runner_ to be NULL if the Listener is NULL. This is probably a similar idea to what Dave tried. As we do not need the Listener, posting tasks to listener_task_runner_ in ipc_channel_proxy.cc makes no sense. Right now there is no check, so it would just crash. Instead, how about not posting a task to the Listener thread in such a case?

[C] Refactor ChannelProxy somehow. We don't need the full functionality of ChannelProxy.
What we need is: 1) running IPC tasks on a background thread, and 2) handling MessageFilters properly. We do not need the "handle messages on the creating thread" feature. Theoretically, we can probably decouple them, though I still wonder whether the result would be a clean API (to be honest, I suspect it would not be).

[D] Implement just what we need for the IPC channel. We may be able to give up using SyncMessageFilter and sync IPC messages. Instead, we could manually build a sync-like IPC call out of async IPC messages and some locks (or condition variables, etc.). Mark suggested the following: 1) When open_resource() is called, take a lock and post a task to the IO thread. 2) The IO thread sends an async message to the peer process. 3) When the peer process has done the work, it sends back another async message to the plugin as the reply. 4) When the reply arrives at the plugin, the IO thread processes it and then unblocks the calling thread. This is what we're doing in SFI mode's irt_open_resource(). (See the sketch after this message.)

[E] Any other options?

Note that the key point of our discussion is a problem we may hit in the future, rather than one we are facing now. So, IMHO, postponing this with some small workaround (or a comment) would be a good option, because it is very difficult to make an ideal design without a concrete problem that we actually hit.

Mark, Dave, WDYT?

Thanks,
- hidehiko
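For concreteness, option [D] could look roughly like the sketch below. This is only an illustration of the four steps described above, not code from any CL; OpenResourceBlocking, SendOpenResourceRequest, and PendingOpenResource are hypothetical names.

  // Bookkeeping for one blocking call.
  struct PendingOpenResource {
    base::WaitableEvent done;  // signaled on the IO thread when the reply arrives
    int result;
    PendingOpenResource()
        : done(true /* manual reset */, false /* initially signaled */),
          result(0) {}
  };

  // Hypothetical helper that runs on the IO thread: sends the async IPC
  // request and remembers |pending| so the reply handler can fill in the
  // result and signal the event when the async reply arrives from the peer.
  void SendOpenResourceRequest(const std::string& url,
                               PendingOpenResource* pending);

  int OpenResourceBlocking(
      const scoped_refptr<base::SingleThreadTaskRunner>& io_runner,
      const std::string& url) {
    PendingOpenResource pending;
    // Steps 1 and 2: post to the IO thread, which sends the async request.
    io_runner->PostTask(
        FROM_HERE, base::Bind(&SendOpenResourceRequest, url, &pending));
    // Steps 3 and 4: the peer replies with another async message; the IO
    // thread handles it, fills in |pending.result|, and signals the event,
    // which unblocks this calling thread.
    pending.done.Wait();
    return pending.result;
  }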
On 2014/04/14 23:10:49, hidehiko wrote: > On 2014/04/14 22:03:44, dmichael wrote: > > I'm fine with ManifestService/ManifestChannel or something. Alternatively, > > IRTRendererChannel, IRTServiceChannel, or something. > > Thank you for comment. I'll use one of them. Mark, if you have preference of > them, please let me know. > > As for thread of SyncChannel creation, we still seems to have some disagreed > points. > Let's have a discussion on the thread for the record. > > I clarified that what Dave meant was, and he told me that he didn't mean > creating a brand new thread, > but introducing a limitation that the startup thread is the one where > irt_ppapi_start() is called later. > (My understanding is: His key point is to avoid introducing a new thread, as we > already have too many threads) > I asked Mark offline, and he told me that such a limitation is not a good idea. > We don't have such a documentation, and user code can call irt_ppapi_start() on > the thread which is different > from the one where the entry point is called now, actually. > > > What we want to do should be simple, but the situation sounds a bit complicated. > Let me try to summarize what we discussed, and I'd like to find a good > agreement. > > First, let me note our requirement again: > - The IPC channel works on background (i.e. on IOThread). > - The message needs to be sent/received synchronously. (i.e. block the calling > thread until the reply arrives from the peer process). > That's it. It looks simpler now. For its implementation, we chose > SyncMessageFilter + ChannelProxy (or SyncChannel), once. > SyncMessageFilter implements what we'd like to do, actually. ChannelProxy (or > SyncChannel) is needed to use SyncMessageFilter. > > The problem here is ChannelProxy is also designed to run IPC::Listener on the > thread where the instance is created > (let me call it Listener thread hereafter). For that purpose, MessageLoop is > needed for the Listener thread. > My design does not need Listener, because we'll use it for (non-SFI) > open_resource() implementation, and all messages > should be handled by SyncMessageFilter. > So, as the workaround of this problem, I set Listener NULL, and I shared the > Listener thread with IO thread, > so that ChannelProxy should work (from the implementation point of view, though > it is not explicitly documented). > > Dave warns to use ChannelProxy, because it is not designed to share the IO > thread and Listener thread. > Instead, he suggested to create a MessageLoop on startup thread, with > introducing a new limitation that the startup thread is > the one where irt_ppapi_start() is called later. I'm actually not sure where that limitation would come from. Can you elaborate? It's not clear to me how making a new thread would be any better. You can have a message loop and pump it. If user code later calls irt_ppapi_start on a different thread, what will actually go wrong? I think it would have been a problem if we were reusing PluginDispatcher as I had wanted you to do originally, but with the new channel, I don't know of a problem. > > Mark warns that the limitation is not a good idea. At least it is not documented > now. Users can call irt_ppapi_start() on > the thread different from the startup's one. Does anybody actually do that? I doubt it. We could do some testing on a good swath of apps to find out. Do we really want to paint ourselves in to a corner in order to allow this kind of behavior that probably nobody does or needs? > > Here we are. 
Then let's move to the solution. > > Some possible solutions I have: > [A] Use ChannelProxy with sharing the thread. > I still think this technically works. So maybe just adding some comment to > ipc_channel_proxy.h may good enough? For example; > // The background thread can be the one of the main thread. > (Note that main thread appears in the comment of ChannelProxy. Please see also > ipc_channel_proxy.h). Yeah, I can circle back with jam@ about it. It is *not* the original intended use of ChannelProxy, so it's not really tested and nobody who worked on that code designed it to work that way. Hence why it makes me and jam@ uncomfortable. And if something small changes, like you later need a Listener, it will not work well for you anymore. > > [B] Allow listener_task_runner_ is NULL if Listener is NULL. > This is probably similar idea to what Dave tried. > As we do not need the Listener, posting task to the listener_task_runner_ in > ipc_channel_proxy.cc should not make any sense. > Now we have no check, so it'd be just crashed. Instead, how about getting rid of > posting a task to the Listener thread > in such a case? ChannelProxy already checks listener_ before invoking anything. Messing with the task runner might allow us to skip some (rare) context switches, and would add some more complication to the code. I don't think it really would benefit us much to do that. My DCHECK CL was going to allow using ChannelProxy on the IO thread only if listener_ is NULL; it already can handle a NULL listener_ safely, so that would be the easy way to do what you want for option B. If we decide it's an acceptable usage of ChannelProxy. > > [C] Refactor ChannelProxy somehow. > Now, we don't need full function of ChannelProxy. What we need are: > 1) Running IPC tasks on background thread. > 2) Handling MessageFilter properly. > and we do not need, "Handling messages on the created thread", feature. > Theoretically, we probably can decouple them. Though, I still wonder if it could > be clean APIs > (maybe not I'm feeling now, to be honest...) Hmm, there *might* be something to this. As I mentioned, we have another place already using ChannelProxy in the unintended way (think maybe it's NaClListener) that could benefit from this. And I ran across another piece of code that reimplemented filters. This might be worth thinking about, although we should try not to block you on that kind of refactoring. > > [D] Implement what we need for IPC channel. > We may be able to give up to use SyncMessageFilter and sync-IPC-message. > Instead, we could manually build sync-like IPC message with async-IPC-messages > and some locks (or cond vers etc.). > Mark suggested me that: > 1) When open_resource() is called, take a lock and post a task to the IO Thread. > 2) IOThread sends an async message to the peer process. > 3) When peer process has done the task, let it send back another async message > to the plugin as reply. > 4) When reply arrives to plugin, IOThread again process it, and then unblock the > calling thread. > This is what we're doing in SFI-mode's irt_open_resource(). That's basically what SyncMessageFilter does. It would be a shame to basically rewrite it :( > > [E] Any other options? My current leaning is to go this way: 1) If we can just create the ChannelProxy on the main thread without negative repercussions, do that. 
2) If we're really sure (1) is not an option, go ahead with creating on the IO thread for now, and in the background pursue adding a "sanctioned" way to support Filters with a Channel-thing created on the IO thread.
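The DCHECK being discussed could be as small as the following sketch; the member names are assumed to mirror ChannelProxy's internals and may not match exactly.

  // Allow the listener task runner to equal the IPC (IO) task runner only
  // when there is no listener at all.
  DCHECK(listener_ == NULL || listener_task_runner_ != ipc_task_runner_)
      << "A ChannelProxy with a listener must not use the IO thread as its "
         "listener thread.";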
On 2014/04/15 17:08:57, dmichael wrote: > On 2014/04/14 23:10:49, hidehiko wrote: > > On 2014/04/14 22:03:44, dmichael wrote: > > > I'm fine with ManifestService/ManifestChannel or something. Alternatively, > > > IRTRendererChannel, IRTServiceChannel, or something. > > > > Thank you for comment. I'll use one of them. Mark, if you have preference of > > them, please let me know. > > > > As for thread of SyncChannel creation, we still seems to have some disagreed > > points. > > Let's have a discussion on the thread for the record. > > > > I clarified that what Dave meant was, and he told me that he didn't mean > > creating a brand new thread, > > but introducing a limitation that the startup thread is the one where > > irt_ppapi_start() is called later. > > (My understanding is: His key point is to avoid introducing a new thread, as > we > > already have too many threads) > > I asked Mark offline, and he told me that such a limitation is not a good > idea. > > We don't have such a documentation, and user code can call irt_ppapi_start() > on > > the thread which is different > > from the one where the entry point is called now, actually. > > > > > > What we want to do should be simple, but the situation sounds a bit > complicated. > > Let me try to summarize what we discussed, and I'd like to find a good > > agreement. > > > > First, let me note our requirement again: > > - The IPC channel works on background (i.e. on IOThread). > > - The message needs to be sent/received synchronously. (i.e. block the calling > > thread until the reply arrives from the peer process). > > That's it. It looks simpler now. For its implementation, we chose > > SyncMessageFilter + ChannelProxy (or SyncChannel), once. > > SyncMessageFilter implements what we'd like to do, actually. ChannelProxy (or > > SyncChannel) is needed to use SyncMessageFilter. > > > > The problem here is ChannelProxy is also designed to run IPC::Listener on the > > thread where the instance is created > > (let me call it Listener thread hereafter). For that purpose, MessageLoop is > > needed for the Listener thread. > > My design does not need Listener, because we'll use it for (non-SFI) > > open_resource() implementation, and all messages > > should be handled by SyncMessageFilter. > > So, as the workaround of this problem, I set Listener NULL, and I shared the > > Listener thread with IO thread, > > so that ChannelProxy should work (from the implementation point of view, > though > > it is not explicitly documented). > > > > Dave warns to use ChannelProxy, because it is not designed to share the IO > > thread and Listener thread. > > Instead, he suggested to create a MessageLoop on startup thread, with > > introducing a new limitation that the startup thread is > > the one where irt_ppapi_start() is called later. > I'm actually not sure where that limitation would come from. Can you elaborate? > It's not clear to me how making a new thread would be any better. You can have a > message loop and pump it. If user code later calls irt_ppapi_start on a > different thread, what will actually go wrong? I think it would have been a > problem if we were reusing PluginDispatcher as I had wanted you to do > originally, but with the new channel, I don't know of a problem. > So you mean, we should keep the message loop stop during the plugin runs? Under current implementation, we have no way to resume the stopped message loop in general. The only way to resume is resuming it in irt_ppapi_start() somehow. 
Creating a new thread is a way to keep message loop running. > > > > Mark warns that the limitation is not a good idea. At least it is not > documented > > now. Users can call irt_ppapi_start() on > > the thread different from the startup's one. > Does anybody actually do that? I doubt it. We could do some testing on a good > swath of apps to find out. Do we really want to paint ourselves in to a corner > in order to allow this kind of behavior that probably nobody does or needs? I'm not sure. However, I think it is difficult to ensure almost the NaCl plugins in the world invokes irt_ppapi_start() on the startup thread... To be honest, I still don't understand this limitation is worth enough just for workaround of IPC channel implementation. Moreover, it sounds risky to me... > > > > > Here we are. Then let's move to the solution. > > > > Some possible solutions I have: > > [A] Use ChannelProxy with sharing the thread. > > I still think this technically works. So maybe just adding some comment to > > ipc_channel_proxy.h may good enough? For example; > > // The background thread can be the one of the main thread. > > (Note that main thread appears in the comment of ChannelProxy. Please see also > > ipc_channel_proxy.h). > Yeah, I can circle back with jam@ about it. It is *not* the original intended > use of ChannelProxy, so it's not really tested and nobody who worked on that > code designed it to work that way. Hence why it makes me and jam@ uncomfortable. > And if something small changes, like you later need a Listener, it will not work > well for you anymore. If we need a Listener later, it must be restructured more widely. The reason why we're introducing a new IPC channel is that we cannot use main thread at this point, so that we cannot share it with PPAPI IPC channel (with the renderer). > > > > > [B] Allow listener_task_runner_ is NULL if Listener is NULL. > > This is probably similar idea to what Dave tried. > > As we do not need the Listener, posting task to the listener_task_runner_ in > > ipc_channel_proxy.cc should not make any sense. > > Now we have no check, so it'd be just crashed. Instead, how about getting rid > of > > posting a task to the Listener thread > > in such a case? > ChannelProxy already checks listener_ before invoking anything. Messing with the > task runner might allow us to skip some (rare) context switches, and would add > some more complication to the code. I don't think it really would benefit us > much to do that. > > My DCHECK CL was going to allow using ChannelProxy on the IO thread only if > listener_ is NULL; it already can handle a NULL listener_ safely, so that would > be the easy way to do what you want for option B. If we decide it's an > acceptable usage of ChannelProxy. I meant NULL for listener_task_runner_. Not listener_. Admittedly listener_ is checked on the Listener thread, but it is too late (and it requires Listener thread task runner). I'm proposing how about not-sending the task to the Listener thread if listener_ is already NULL. My proposal is slightly different from yours. Here is the draft. crrev.com/239583002 This CL is not conflicting with your DCHECK CL. > > > > > [C] Refactor ChannelProxy somehow. > > Now, we don't need full function of ChannelProxy. What we need are: > > 1) Running IPC tasks on background thread. > > 2) Handling MessageFilter properly. > > and we do not need, "Handling messages on the created thread", feature. > > Theoretically, we probably can decouple them. 
Though, I still wonder if it > could > > be clean APIs > > (maybe not I'm feeling now, to be honest...) > Hmm, there *might* be something to this. As I mentioned, we have another place > already using ChannelProxy in the unintended way (think maybe it's NaClListener) > that could benefit from this. And I ran across another piece of code that > reimplemented filters. This might be worth thinking about, although we should > try not to block you on that kind of refactoring. > > > > > [D] Implement what we need for IPC channel. > > We may be able to give up to use SyncMessageFilter and sync-IPC-message. > > Instead, we could manually build sync-like IPC message with async-IPC-messages > > and some locks (or cond vers etc.). > > Mark suggested me that: > > 1) When open_resource() is called, take a lock and post a task to the IO > Thread. > > 2) IOThread sends an async message to the peer process. > > 3) When peer process has done the task, let it send back another async message > > to the plugin as reply. > > 4) When reply arrives to plugin, IOThread again process it, and then unblock > the > > calling thread. > > This is what we're doing in SFI-mode's irt_open_resource(). > That's basically what SyncMessageFilter does. It would be a shame to basically > rewrite it :( > > > > > [E] Any other options? > My current leaning is to go this way: > 1) If we can just create the ChannelProxy on the main thread without negative > repercussions, do that. > 2) If we're really sure (1) is not an option, go ahead with creating on the IO > thread for now, and in the background pursue adding a "sanctioned" way to > support Filters with a Channel-thing created on the IO thread. So, then now I'm thinking that [B] would be a good option? - We do not introduce new limitation to the plugin by this change. - We do not "hack" ChannelProxy by sharing IO thread and Listener thread. WDYT, Dave, Mark?
On Tue, Apr 15, 2014 at 12:17 PM, <hidehiko@chromium.org> wrote: > On 2014/04/15 17:08:57, dmichael wrote: > >> On 2014/04/14 23:10:49, hidehiko wrote: >> > On 2014/04/14 22:03:44, dmichael wrote: >> > > I'm fine with ManifestService/ManifestChannel or something. >> Alternatively, >> > > IRTRendererChannel, IRTServiceChannel, or something. >> > >> > Thank you for comment. I'll use one of them. Mark, if you have >> preference of >> > them, please let me know. >> > >> > As for thread of SyncChannel creation, we still seems to have some >> disagreed >> > points. >> > Let's have a discussion on the thread for the record. >> > >> > I clarified that what Dave meant was, and he told me that he didn't mean >> > creating a brand new thread, >> > but introducing a limitation that the startup thread is the one where >> > irt_ppapi_start() is called later. >> > (My understanding is: His key point is to avoid introducing a new >> thread, as >> we >> > already have too many threads) >> > I asked Mark offline, and he told me that such a limitation is not a >> good >> idea. >> > We don't have such a documentation, and user code can call >> irt_ppapi_start() >> on >> > the thread which is different >> > from the one where the entry point is called now, actually. >> > >> > >> > What we want to do should be simple, but the situation sounds a bit >> complicated. >> > Let me try to summarize what we discussed, and I'd like to find a good >> > agreement. >> > >> > First, let me note our requirement again: >> > - The IPC channel works on background (i.e. on IOThread). >> > - The message needs to be sent/received synchronously. (i.e. block the >> > calling > >> > thread until the reply arrives from the peer process). >> > That's it. It looks simpler now. For its implementation, we chose >> > SyncMessageFilter + ChannelProxy (or SyncChannel), once. >> > SyncMessageFilter implements what we'd like to do, actually. >> ChannelProxy >> > (or > >> > SyncChannel) is needed to use SyncMessageFilter. >> > >> > The problem here is ChannelProxy is also designed to run IPC::Listener >> on >> > the > >> > thread where the instance is created >> > (let me call it Listener thread hereafter). For that purpose, >> MessageLoop is >> > needed for the Listener thread. >> > My design does not need Listener, because we'll use it for (non-SFI) >> > open_resource() implementation, and all messages >> > should be handled by SyncMessageFilter. >> > So, as the workaround of this problem, I set Listener NULL, and I >> shared the >> > Listener thread with IO thread, >> > so that ChannelProxy should work (from the implementation point of view, >> though >> > it is not explicitly documented). >> > >> > Dave warns to use ChannelProxy, because it is not designed to share the >> IO >> > thread and Listener thread. >> > Instead, he suggested to create a MessageLoop on startup thread, with >> > introducing a new limitation that the startup thread is >> > the one where irt_ppapi_start() is called later. >> I'm actually not sure where that limitation would come from. Can you >> > elaborate? > >> It's not clear to me how making a new thread would be any better. You can >> have >> > a > >> message loop and pump it. If user code later calls irt_ppapi_start on a >> different thread, what will actually go wrong? I think it would have been >> a >> problem if we were reusing PluginDispatcher as I had wanted you to do >> originally, but with the new channel, I don't know of a problem. 
>> > > > So you mean, we should keep the message loop stop during the plugin runs? > Under current implementation, we have no way to resume the stopped message > loop > in general. > The only way to resume is resuming it in irt_ppapi_start() somehow. > Creating a new thread is a way to keep message loop running. If we: 1) We don't ever receive messages via ManifestChannel (or whatever it's named) on that thread. and 2) The plugin calls irt_ppapi_start() from a different thread (thus making that thread the "Pepper main thread") ...then does it matter that the message loop never gets run again? I don't think this is ideal, because it's still counter to how ChannelProxy is supposed to work, but it's probably workable in the near term. > > > > >> > Mark warns that the limitation is not a good idea. At least it is not >> documented >> > now. Users can call irt_ppapi_start() on >> > the thread different from the startup's one. >> Does anybody actually do that? I doubt it. We could do some testing on a >> good >> swath of apps to find out. Do we really want to paint ourselves in to a >> corner >> in order to allow this kind of behavior that probably nobody does or >> needs? >> > > I'm not sure. However, I think it is difficult to ensure almost the NaCl > plugins > in the world > invokes irt_ppapi_start() on the startup thread... > To be honest, I still don't understand this limitation is worth enough > just for > workaround > of IPC channel implementation. Moreover, it sounds risky to me... To me, it sounds risky today if a plugin tries to do it. Have you tried it to see if it works at all in the current implementation to call irt_ppapi_start from a different thread? Using the same thread sounds desirable to me, and I feel like if we'd thought hard enough about it before, we might have explicitly imposed that restriction. As it is, I'm not sure whether the restriction already exists implicitly. If we *do* want to support that kind of strange behavior, I feel like we ought to have a test for it, otherwise we're likely to break it in the future accidentally. All that said, it might not matter for your particular case, since I think nothing will really go wrong if the plugin changes threads and the original message loop doesn't get pumped again. > > > > > >> > Here we are. Then let's move to the solution. >> > >> > Some possible solutions I have: >> > [A] Use ChannelProxy with sharing the thread. >> > I still think this technically works. So maybe just adding some comment >> to >> > ipc_channel_proxy.h may good enough? For example; >> > // The background thread can be the one of the main thread. >> > (Note that main thread appears in the comment of ChannelProxy. Please >> see >> > also > >> > ipc_channel_proxy.h). >> Yeah, I can circle back with jam@ about it. It is *not* the original >> intended >> use of ChannelProxy, so it's not really tested and nobody who worked on >> that >> code designed it to work that way. Hence why it makes me and jam@ >> > uncomfortable. > >> And if something small changes, like you later need a Listener, it will >> not >> > work > >> well for you anymore. >> > > If we need a Listener later, it must be restructured more widely. > The reason why we're introducing a new IPC channel is that we cannot use > main > thread at this point, > so that we cannot share it with PPAPI IPC channel (with the renderer). > > > > > >> > [B] Allow listener_task_runner_ is NULL if Listener is NULL. >> > This is probably similar idea to what Dave tried. 
>> > As we do not need the Listener, posting task to the >> listener_task_runner_ in >> > ipc_channel_proxy.cc should not make any sense. >> > Now we have no check, so it'd be just crashed. Instead, how about >> getting >> > rid > >> of >> > posting a task to the Listener thread >> > in such a case? >> ChannelProxy already checks listener_ before invoking anything. Messing >> with >> > the > >> task runner might allow us to skip some (rare) context switches, and >> would add >> some more complication to the code. I don't think it really would benefit >> us >> much to do that. >> > > My DCHECK CL was going to allow using ChannelProxy on the IO thread only >> if >> listener_ is NULL; it already can handle a NULL listener_ safely, so that >> > would > >> be the easy way to do what you want for option B. If we decide it's an >> acceptable usage of ChannelProxy. >> > > I meant NULL for listener_task_runner_. Not listener_. > Admittedly listener_ is checked on the Listener thread, but it is too late > (and > it requires Listener thread task runner). > I'm proposing how about not-sending the task to the Listener thread if > listener_ > is already NULL. > I understood what you meant to do to ChannelProxy. I think it wasn't clear to me what your goal was or how it was better than just using a NULL listener_. Now I think you're saying that if you did it that way, you could avoid creating a MessageLoop prior to creating the ChannelProxy on that thread? I guess maybe you're worried about posting tasks that never run, if that message loop is not pumped again after the task is posted? I think in the short term we don't need to worry about that, and using only NULL listener_ is fine. Longer term, perhaps the right approach is moving the filter functionality in to Channel. Then you would be able to just use IPC::Channel and create it directly on the IO thread. > My proposal is slightly different from yours. > Here is the draft. crrev.com/239583002 > > This CL is not conflicting with your DCHECK CL. I think I see why you want to do it, but I don't think ipc OWNERS will like it; I think it makes ChannelProxy even harder to understand & support than it already is for not much gain. > > > > >> > [C] Refactor ChannelProxy somehow. >> > Now, we don't need full function of ChannelProxy. What we need are: >> > 1) Running IPC tasks on background thread. >> > 2) Handling MessageFilter properly. >> > and we do not need, "Handling messages on the created thread", feature. >> > Theoretically, we probably can decouple them. Though, I still wonder if >> it >> could >> > be clean APIs >> > (maybe not I'm feeling now, to be honest...) >> Hmm, there *might* be something to this. As I mentioned, we have another >> place >> already using ChannelProxy in the unintended way (think maybe it's >> > NaClListener) > >> that could benefit from this. And I ran across another piece of code that >> reimplemented filters. This might be worth thinking about, although we >> should >> try not to block you on that kind of refactoring. >> > > > >> > [D] Implement what we need for IPC channel. >> > We may be able to give up to use SyncMessageFilter and sync-IPC-message. >> > Instead, we could manually build sync-like IPC message with >> > async-IPC-messages > >> > and some locks (or cond vers etc.). >> > Mark suggested me that: >> > 1) When open_resource() is called, take a lock and post a task to the IO >> Thread. >> > 2) IOThread sends an async message to the peer process. 
>> > 3) When peer process has done the task, let it send back another async >> > message > >> > to the plugin as reply. >> > 4) When reply arrives to plugin, IOThread again process it, and then >> unblock >> the >> > calling thread. >> > This is what we're doing in SFI-mode's irt_open_resource(). >> That's basically what SyncMessageFilter does. It would be a shame to >> basically >> rewrite it :( >> > > > >> > [E] Any other options? >> My current leaning is to go this way: >> 1) If we can just create the ChannelProxy on the main thread without >> negative >> repercussions, do that. >> 2) If we're really sure (1) is not an option, go ahead with creating on >> the IO >> thread for now, and in the background pursue adding a "sanctioned" way to >> support Filters with a Channel-thing created on the IO thread. >> > > So, then now I'm thinking that [B] would be a good option? > - We do not introduce new limitation to the plugin by this change. > I'm still not convinced it's a new limitation. At the very least, we don't *know* if it is. If this is a feature you want us to support, would you be willing to add a test to make sure we don't regress it? Personally, I don't think it's an important feature. Even if it happens to work today, I think it's by luck. Maybe Mark can correct me if that's actually an planned and intended feature. If there's an app doing this right now, if I understand correctly, that means that they declined to use the IRT shim that's built in to the SDK, *and* launched a new thread from which to call irt_ppapi_start. I put the odds that anybody is doing that as very small. > - We do not "hack" ChannelProxy by sharing IO thread and Listener thread. > WDYT, Dave, Mark? > > > > https://codereview.chromium.org/231793003/ > > To unsubscribe from this group and stop receiving emails from it, send an > email to chromium-reviews+unsubscribe@chromium.org. > To unsubscribe from this group and stop receiving emails from it, send an email to chromium-reviews+unsubscribe@chromium.org.
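(To make the trade-off concrete, here is a rough sketch of the near-term approach being discussed: create the channel on the IO thread with a NULL listener and route all traffic through a SyncMessageFilter. The class name, the message argument, and the exact SyncChannel constructor arguments are approximations for illustration, not the code in this CL.)

// Sketch only: channel created on (and bound to) the IO thread, NULL listener,
// all messages handled by the filter. Exact 2014 signatures may differ.
#include "base/memory/ref_counted.h"
#include "base/memory/scoped_ptr.h"
#include "base/single_thread_task_runner.h"
#include "base/synchronization/waitable_event.h"
#include "ipc/ipc_channel_handle.h"
#include "ipc/ipc_message.h"
#include "ipc/ipc_sync_channel.h"
#include "ipc/ipc_sync_message_filter.h"

class ManifestServiceSketch {
 public:
  ManifestServiceSketch(
      const IPC::ChannelHandle& handle,
      const scoped_refptr<base::SingleThreadTaskRunner>& io_task_runner,
      base::WaitableEvent* shutdown_event)
      : filter_(new IPC::SyncMessageFilter(shutdown_event)) {
    // Mode depends on which side creates the pipe; MODE_SERVER is just an
    // example here.
    channel_.reset(new IPC::SyncChannel(
        handle, IPC::Channel::MODE_SERVER, NULL /* listener */,
        io_task_runner.get(), true /* create_pipe_now */, shutdown_event));
    channel_->AddFilter(filter_.get());
  }

  // Can be called from any plugin thread: SyncMessageFilter posts the send to
  // the IO thread and blocks the caller until the reply comes back.
  bool SendSyncMessage(IPC::Message* msg) { return filter_->Send(msg); }

 private:
  scoped_ptr<IPC::SyncChannel> channel_;
  scoped_refptr<IPC::SyncMessageFilter> filter_;
};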
Thank you for the reply. Commented inline. (Removed old messages, as I hit the message length limit...) > If we: > 1) We don't ever receive messages via ManifestChannel (or whatever it's > named) on that thread. > and > 2) The plugin calls irt_ppapi_start() from a different thread (thus making > that thread the "Pepper main thread") > ...then does it matter that the message loop never gets run again? > > I don't think this is ideal, because it's still counter to how ChannelProxy > is supposed to work, but it's probably workable in the near term. > > I'm confused. The reason you wanted me to avoid sharing the Listener thread and the IO thread is just that ChannelProxy was not originally designed to work with them, right? I don't think ChannelProxy is designed to be used with a non-running message loop either. I'd like to understand the difference in your perspective: why is it OK to use a message loop that is (maybe) suspended forever with the Quit() hack, but not OK to share threads (message loops)? To me, touching the message loop looks slightly more dangerous, because it is widely used and I wouldn't be surprised if there were a couple of CHECKs/DCHECKs with some thread-local conditions. If it is OK to keep the message loop suspended forever, it should probably work, but I still need to understand why you think it is safer than sharing the Listener/IO threads. > To me, it sounds risky today if a plugin tries to do it. Have you tried it > to see if it works at all in the current implementation to call > irt_ppapi_start from a different thread? > > Using the same thread sounds desirable to me, and I feel like if we'd > thought hard enough about it before, we might have explicitly imposed that > restriction. As it is, I'm not sure whether the restriction already exists > implicitly. If we *do* want to support that kind of strange behavior, I > feel like we ought to have a test for it, otherwise we're likely to break > it in the future accidentally. > To me, it does not matter whether we *do* want to support such behavior or not. I'm saying I'd like to keep the behavior "as is". From this perspective, we are not sure whether the behavior is really supported or not. At the least, you're suggesting introducing the limitation explicitly. Indeed, it may already exist, but it may not. I'm saying this is risky because we are not sure, and it's also undocumented. IMHO, deciding whether it is expected usage or not also needs further investigation and discussion. To be honest, I'm fine either way, but I'd like to split such a decision from this task. Can we separate it from this, probably with a TODO and by filing a bug? > I understood what you meant to do to ChannelProxy. I think it wasn't clear > to me what your goal was or how it was better than just using a NULL > listener_. Now I think you're saying that if you did it that way, you could > avoid creating a MessageLoop prior to creating the ChannelProxy on that > thread? I guess maybe you're worried about posting tasks that never run, if > that message loop is not pumped again after the task is posted? I think in > the short term we don't need to worry about that, and using only NULL > listener_ is fine. > > Longer term, perhaps the right approach is moving the filter functionality > in to Channel. Then you would be able to just use IPC::Channel and create > it directly on the IO thread. > (This is off topic): IPC::Channel is just a set of synchronous APIs. It does not have any notion of threads.
So, moving MessageFilter to IPC::Channel would not be an ideal approach, IMHO. What we need is removing the limitation of the Listener thread from ChannelProxy somehow. Anyway, let's put this topic on hold until we need to revisit it. > I'm still not convinced it's a new limitation. At the very least, we don't > *know* if it is. If this is a feature you want us to support, would you be > willing to add a test to make sure we don't regress it? > > Personally, I don't think it's an important feature. Even if it happens to > work today, I think it's by luck. Maybe Mark can correct me if that's > actually an planned and intended feature. If there's an app doing this > right now, if I understand correctly, that means that they declined to use > the IRT shim that's built in to the SDK, *and* launched a new thread from > which to call irt_ppapi_start. I put the odds that anybody is doing that as > very small. > Mark, any thoughts on Dave's comment? Summary: - At the moment, I would like to avoid introducing the thread limitation explicitly. It may cause a regression, although we are not sure right now. I'd like to split that discussion from this task. - I still need to understand the difference in safety you see between a ChannelProxy with a suspended message loop and a ChannelProxy sharing the Listener/IO threads. - Creating a new thread is then still an option. It is the cleanest way to implement among our current options. (Though I understand you don't like to introduce a new thread.) WDYT? Thanks, - hidehiko
On Tue, Apr 15, 2014 at 7:22 PM, <hidehiko@chromium.org> wrote: > Thank you for reply. Commented inline. > (Removed old messages, as I hit the limit of the length of messages...) > > > If we: >> 1) We don't ever receive messages via ManifestChannel (or whatever it's >> named) on that thread. >> and >> 2) The plugin calls irt_ppapi_start() from a different thread (thus making >> that thread the "Pepper main thread") >> ...then does it matter that the message loop never gets run again? >> > > I don't think this is ideal, because it's still counter to how >> ChannelProxy >> is supposed to work, but it's probably workable in the near term. >> > > > > I'm confused. The reason why you wanted me to avoid sharing Listener > thread and > IO thread is > just because the ChannelProxy is not initially designed to work with them, > right? > I don't think ChannelProxy is designed to use with non-working message > loop. > I'd like to understand the difference in your perspective; > - Why it is ok for you to use the message loop (maybe) suspended forever > with > Quit() hack, > but it is not ok to share threads (message loops)? > Long term, I think it's not OK to share threads. I think near-term (for this CL), we can probably do either: 1) Create it on the IO thread with a NULL listener. or 2) Create it on the main thread. *If* the message loop never gets pumped, it's about equivalent in impact to 1. The nice thing about (2), is in the common case, the plugin will call irt_ppapi_start on that main thread eventually, and the message loop will get pumped. So in the common case, ChannelProxy works like usual. In the uncommon case, I was struggling to think of what would actually break. I'm realizing now; I think what you're concerned about is that you would have to create a MessageLoop early, prior to making the IrtServiceChannel (or whatever). Later on, PpapiPluginMain currently creates a MessageLoop. So if you don't support a different thread for irt_ppapi_start, it's easy... you just let the MessageLoop live on and you reuse it in PpapiPluginMain. But if you need to support different threads for irt_ppapi_start, you have two choices: 1) Use a temporary message loop for creation of IrtServiceChannel (this seems bad and possibly dangerous). In this case, PpapiPluginMain unconditionally creates a MessageLoop like it does today. 2) IrtServiceChannel creates a message loop that lives on. PpapiPluginMain has to detect if its thread already has a message loop, and creates one if one doesn't exist. So I can see that these are both a little scary (number 2 a bit less scary than 1). Again, though, we don't have to worry about it if we don't support irt_ppapi_start on weird threads. So I would like us to look in to that a little bit, maybe outside of this CL. > To me, touching message loop looks slightly more danger, because it is > widely > used and > I won't be surprised even if there are couple of CHECKs/DCHECKs with some > thread > local conditions. > I agree that if ChannelProxy is created with 1 message loop on that stack, and then we destroy it and create a new one later on that thread, that would be bad. I'm not sure if that's what you're talking about. > If it is ok to keep the message loop suspended forever, it should probably > work, > but I still need to > understand well why you think it is safer than sharing Listener/IO threads. In the common case, ChannelProxy would be operating in the way it was designed. In rare circumstances, its tasks might not run. 
But if its tasks aren't run, then that's about equivalent to what happens if ChannelProxy is created on the IO thread (the tasks run but do nothing). However, I think the trouble with MessageLoop lifetimes I described above means maybe we shouldn't do this. Right now, I think maybe creating on the IO thread with a NULL listener is the safest way to make progress for now, but we need to revisit it. I'm thinking I can refactor IPC::Channel and filters a bit to make it workable for you to switch to using Channel. > > > To me, it sounds risky today if a plugin tries to do it. Have you tried it >> to see if it works at all in the current implementation to call >> irt_ppapi_start from a different thread? >> > > Using the same thread sounds desirable to me, and I feel like if we'd >> thought hard enough about it before, we might have explicitly imposed that >> restriction. As it is, I'm not sure whether the restriction already exists >> implicitly. If we *do* want to support that kind of strange behavior, I >> feel like we ought to have a test for it, otherwise we're likely to break >> it in the future accidentally. >> > > > To me, it does not matter whether we *do* want to support such a behavior > or > not. > I'm saying I'd like to keep the behavior "as is". > It sounds like we don't know what the actual behavior *is* right now. We never test for this. I don't like over-complicating the design of this for a requirement that we (a) never test and (b) might already not support. > > For this perspective, we are not sure the behavior is really supported or > not. > At least you're suggesting to introduce the limitation explicitly. > Indeed, it may already exist, but may not. I'm saying this is risky, > because we > are not sure, also it's undocumented. > IMHO, deciding whether it is expected usage or not also needs father more > investigation and discussion. > To be honest, I'm fine either, but I'd like to split such a decision from > this > task. > Can we separate it from this, with probably TODO and filing a bug? OK, I think we can do this and go with the simplest approach for now (ChannelProxy on IO thread with NULL listener). But we really should test this behavior to see if we support it all. If we *do* support it, and we want to continue supporting it, we need a regression test, or it will get broken later. Continuing down the path of constraining and complicating this design because we're worried about breaking a feature we *don't know if we support today* sounds like a very bad approach in the longer term. *If* it's a real constraint, then fine, but we need to be sure. > > > I understood what you meant to do to ChannelProxy. I think it wasn't clear >> to me what your goal was or how it was better than just using a NULL >> listener_. Now I think you're saying that if you did it that way, you >> could >> avoid creating a MessageLoop prior to creating the ChannelProxy on that >> thread? I guess maybe you're worried about posting tasks that never run, >> if >> that message loop is not pumped again after the task is posted? I think in >> the short term we don't need to worry about that, and using only NULL >> listener_ is fine. >> > > Longer term, perhaps the right approach is moving the filter functionality >> in to Channel. Then you would be able to just use IPC::Channel and create >> it directly on the IO thread. >> > > > (This is off topic): > IPC::Channel is just a set of sync APIs. It does not have any thread > concept. > Correct... IPC::Channel would live entirely on the IO thread. 
You could only send messages on it from the IO thread. It would invoke methods on its filters exclusively on the IO thread. This is how you are trying to use ChannelProxy today (or at least how SyncMessageFilter uses it on your behalf). > So, moving MessageFilter to IPC::Channel would not be an ideal approach, > IMHO. > Why not? You're *really* using SyncMessageFilter to deal with your IPC threading issues. What I'm proposing would make it possible to use SyncMessageFilter work with Channel, without requiring ChannelProxy. SyncMessageFilter already knows how to PostTask for sending messages on the IO thread. It already gets its Filter messages on the IO thread. > What we need is removing the limitation of Listener thread from > ChannelProxy > somehow. > Anyway, let's hold on this topic, until when we need to revisit here. I'm going to work on this on the side to try to make it possible to replace your usage of ChannelProxy with Channel (and same for NaClProcessHost). > > > I'm still not convinced it's a new limitation. At the very least, we don't >> *know* if it is. If this is a feature you want us to support, would you be >> willing to add a test to make sure we don't regress it? >> > > Personally, I don't think it's an important feature. Even if it happens to >> work today, I think it's by luck. Maybe Mark can correct me if that's >> actually an planned and intended feature. If there's an app doing this >> right now, if I understand correctly, that means that they declined to use >> the IRT shim that's built in to the SDK, *and* launched a new thread from >> which to call irt_ppapi_start. I put the odds that anybody is doing that >> as >> very small. >> > > > Mark, any thoughts on Dave's comment? > > Summary: > - At the moment, I would like to avoid introducing the thread limitation > explicitly. > It may cause a regression, although we are not sure right now. > I'd like to split the discussion from this task. > We can hold off on that, but we need a test for it. > - I still need to understand the difference of the safeness you think > between > ChannelProxy with suspended message loop > and ChannelProxy with sharing Listener/IO threads. > - Creating a new thread is then still an option. It is the cleanest way to > implement in the current our options. > Let's just create the ChannelProxy on the IO thread for now; I think it's the safest you can do without some refactoring. I'll try to fix up Channel so you can use it instead of ChannelProxy. > (Though I understand you don't like to introduce a new thread.) > > WDYT? > > > Thanks, > - hidehiko > > > https://codereview.chromium.org/231793003/ > To unsubscribe from this group and stop receiving emails from it, send an email to chromium-reviews+unsubscribe@chromium.org.
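(A sketch of choice (2) described earlier in this message: have the PPAPI startup path detect whether the thread already has a MessageLoop and only create one if it does not. The function name is hypothetical and this is not the actual PpapiPluginMain; in the end the CL keeps the simpler route of a ChannelProxy on the IO thread with a NULL listener, so this check is not needed there.)

#include "base/memory/scoped_ptr.h"
#include "base/message_loop/message_loop.h"

int StartPpapiOnThisThread() {
  // Create a MessageLoop only if this thread does not already have one, e.g.
  // because the ManifestService channel was set up on this thread earlier.
  scoped_ptr<base::MessageLoop> main_message_loop;
  if (!base::MessageLoop::current())
    main_message_loop.reset(new base::MessageLoop());
  // ... proceed with the usual PPAPI startup on this thread ...
  return 0;
}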
Updated the CL. PTAL? (Now trybots are running.) Mark, I renamed the service to IrtService, as Dave recommended. It sounds a bit more suitable to me. WDYT? Dave, I really appreciate your comments and the discussion. So, let's create the ChannelProxy on the IO thread as a short-term solution. Please let me know if there is anything I can do to help with the clean-up. Also, let's revisit whether invoking irt_ppapi_start on a non-startup thread is actually supported today, and whether we want to (or should) continue supporting it if it is. > Why not? You're *really* using SyncMessageFilter to deal with your IPC > threading issues. What I'm proposing would make it possible to use > SyncMessageFilter work with Channel, without requiring ChannelProxy. > SyncMessageFilter already knows how to PostTask for sending messages on the > IO thread. It already gets its Filter messages on the IO thread. Oh, I got it. Sorry, I misunderstood your approach. It should work, I think. Thanks! - hidehiko https://codereview.chromium.org/231793003/diff/60001/components/nacl/renderer... File components/nacl/renderer/embedder_service_channel.cc (right): https://codereview.chromium.org/231793003/diff/60001/components/nacl/renderer... components/nacl/renderer/embedder_service_channel.cc:33: // TODO(hidehiko): Impelement StartCompleted and OpenResource. On 2014/04/14 17:10:47, Mark Seaborn wrote: > "Implement" Done.
On 16 April 2014 11:15, <hidehiko@chromium.org> wrote: > Updated the CL. PTAL? (Now trybots are running). > > Mark, I renamed the service to IrtService, as Dave recommended. > It sounds a bit more suitable to me. WDYT? I'd still prefer ManifestService/ManifestChannel, which Dave said he was OK with. It's just more specific than IrtService. The PPAPI proxy is part of the IRT, so the name "IrtService" does not make it clear that this channel is not used for PPAPI. Mark To unsubscribe from this group and stop receiving emails from it, send an email to chromium-reviews+unsubscribe@chromium.org.
On 2014/04/16 18:22:51, Mark Seaborn wrote: > On 16 April 2014 11:15, <mailto:hidehiko@chromium.org> wrote: > > > Updated the CL. PTAL? (Now trybots are running). > > > > Mark, I renamed the service to IrtService, as Dave recommended. > > It sounds a bit more suitable to me. WDYT? > > > I'd still prefer ManifestService/ManifestChannel, which Dave said he was OK > with. It's just more specific than IrtService. The PPAPI proxy is part of > the IRT, so the name "IrtService" does not make it clear that this channel > is not used for PPAPI. OK, renamed to ManifestService. PTAL. Thanks, - hidehiko > > Mark > > To unsubscribe from this group and stop receiving emails from it, send an email > to mailto:chromium-reviews+unsubscribe@chromium.org.
On 2014/04/16 18:54:02, hidehiko wrote: > On 2014/04/16 18:22:51, Mark Seaborn wrote: > > On 16 April 2014 11:15, <mailto:hidehiko@chromium.org> wrote: > > > > > Updated the CL. PTAL? (Now trybots are running). > > > > > > Mark, I renamed the service to IrtService, as Dave recommended. > > > It sounds a bit more suitable to me. WDYT? > > > > > > I'd still prefer ManifestService/ManifestChannel, which Dave said he was OK > > with. It's just more specific than IrtService. The PPAPI proxy is part of > > the IRT, so the name "IrtService" does not make it clear that this channel > > is not used for PPAPI. > > OK, renamed to ManifestService. PTAL. > > Thanks, > - hidehiko > > > > > Mark > > > > To unsubscribe from this group and stop receiving emails from it, send an > email > > to mailto:chromium-reviews+unsubscribe@chromium.org. FWIW, I think ManifestService will someday do more than just Manifest things, but we can change the name at that time. lgtm I'll follow up about replacing ChannelProxy with Channel. I filed a bug here: crbug.com/364241 Please add a reference to the bug at your TODO.
On 2014/04/16 21:41:18, dmichael wrote: > On 2014/04/16 18:54:02, hidehiko wrote: > > On 2014/04/16 18:22:51, Mark Seaborn wrote: > > > On 16 April 2014 11:15, <mailto:hidehiko@chromium.org> wrote: > > > > > > > Updated the CL. PTAL? (Now trybots are running). > > > > > > > > Mark, I renamed the service to IrtService, as Dave recommended. > > > > It sounds a bit more suitable to me. WDYT? > > > > > > > > > I'd still prefer ManifestService/ManifestChannel, which Dave said he was OK > > > with. It's just more specific than IrtService. The PPAPI proxy is part of > > > the IRT, so the name "IrtService" does not make it clear that this channel > > > is not used for PPAPI. > > > > OK, renamed to ManifestService. PTAL. > > > > Thanks, > > - hidehiko > > > > > > > > Mark > > > > > > To unsubscribe from this group and stop receiving emails from it, send an > > email > > > to mailto:chromium-reviews+unsubscribe@chromium.org. > FWIW, I think ManifestService will someday do more than just Manifest things, > but we can change the name at that time. > > lgtm > > I'll follow up about replacing ChannelProxy with Channel. I filed a bug here: > crbug.com/364241 > Please add a reference to the bug at your TODO. Thank you for the review, Dave. I've added the bug to the TODO. Mark, Julien, could you take a look? Thanks, - hidehiko
*_messages.h lgtm based on dmichael and Mark's review, given that no new serialization / deserialization is done. Please let me know if there is something you would like a closer security look on.
On 15 April 2014 18:22, <hidehiko@chromium.org> wrote: > > I'm still not convinced it's a new limitation. At the very least, we don't >> *know* if it is. If this is a feature you want us to support, would you be >> willing to add a test to make sure we don't regress it? >> > > Personally, I don't think it's an important feature. Even if it happens to >> work today, I think it's by luck. Maybe Mark can correct me if that's >> actually an planned and intended feature. If there's an app doing this >> right now, if I understand correctly, that means that they declined to use >> the IRT shim that's built in to the SDK, *and* launched a new thread from >> which to call irt_ppapi_start. I put the odds that anybody is doing that >> as >> very small. >> > I think it has never been the case that the NaCl SDK required user code to use the main() provided by the SDK. The default main() is just a convenience function which calls PpapiPluginMain(), which is another convenience function which calls the IRT's ppapi_start(). In the past, we've told people that they can call PpapiPluginMain() if they find it more convenient to define main() themselves -- for example, in an existing codebase where changing main() could be awkward. User code might potentially do a lot of initialisation before starting PPAPI. It seems plausible that user code would call ppapi_start() from another thread in an app that creates threads as part of its initialisation. Since PNaCl is available on the open web now, there isn't a closed set of programs we can check for this property, so we have to err on the side of caution. Mark, any thoughts on Dave's comment? > > Summary: > - At the moment, I would like to avoid introducing the thread limitation > explicitly. > It may cause a regression, although we are not sure right now. > I'd like to split the discussion from this task. I tried using ppapi_start() on a non-initial thread, and it worked fine. I am not keen on the idea of adding a restriction, because: * It would be a change to the ABI, and the ABI is supposed to be stable. We should only change the ABI if there is a really compelling reason to do so. * It would be inconsistent with the other NaCl IRT interfaces, which are all agnostic about which thread they are called from. None of them single out the initial thread as being special. I do agree that we should add a test for this. If the reason for wanting to add a restriction is that IPC::SyncChannel, IPC::ChannelProxy and the associated plumbing have various abstruse limitations, that doesn't seem like a very compelling reason to me. :-) That seems like a good reason for cleaning up the IPC layer to make it simpler and easier to understand, or at least to document the limitations better. In principle, the task of sending a synchronous IPC message should be really simple. Cheers, Mark To unsubscribe from this group and stop receiving emails from it, send an email to chromium-reviews+unsubscribe@chromium.org.
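(For illustration, the usage pattern Mark describes above — an app that defines its own main(), does its own initialisation, and calls PpapiPluginMain() from a thread it created — would look roughly like the following. The forward declaration stands in for the SDK/irt_stub header that normally provides it.)

#include <pthread.h>

extern "C" int PpapiPluginMain(void);  // Normally declared by the SDK's irt_stub.

static void* StartPpapi(void* unused) {
  // Indirectly calls irt_ppapi_start(), on a non-initial thread.
  PpapiPluginMain();
  return NULL;
}

int main(void) {
  // ... application-specific initialisation that may spawn threads ...
  pthread_t ppapi_thread;
  pthread_create(&ppapi_thread, NULL, StartPpapi, NULL);
  pthread_join(ppapi_thread, NULL);
  return 0;
}

(Since the IRT interfaces are otherwise thread-agnostic, nothing in the documented ABI forbids this today, which is the core of the argument against adding the restriction.)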
LGTM
The CQ bit was checked by hidehiko@chromium.org
CQ is trying da patch. Follow status at https://chromium-status.appspot.com/cq/hidehiko@chromium.org/231793003/220001
Message was sent while issue was closed.
Change committed as 264477 |