Created: 5 years, 3 months ago by sunnyps
Modified: 5 years, 3 months ago
CC: chromium-reviews, darin-cc_chromium.org, David Yen, jam, piman+watch_chromium.org
Base URL: https://chromium.googlesource.com/chromium/src.git@master
Target Ref: refs/pending/heads/master
Project: chromium
Visibility: Public
Description: content/gpu: Simplify gpu channel message handling.
This CL makes the following changes to gpu channel message handling:
1. The GpuChannelMessageQueue interface is exposed in the header so that
it may be used by the upcoming gpu scheduler.
2. Removed friend classes in GpuChannel.
3. Messages are no longer popped from the message queue and reinserted
when retrying. Instead messages are only popped after they have been
handled.
4. As a result, we no longer synthesize the Reschedule and
RetireSyncPoint IPC messages.
BUG=514813
Committed: https://crrev.com/4a6a3d8c391412023e86a755fd35840baa22e578
Cr-Commit-Position: refs/heads/master@{#349584}
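
To make change 3 above concrete, here is a minimal sketch of the pop-after-handling pattern. This is not the CL's actual code: std::unique_ptr stands in for Chromium's scoped_ptr, and the MessageQueue/Message names are illustrative.

#include <cstdint>
#include <deque>
#include <memory>

struct Message {
  uint32_t type;
};

class MessageQueue {
 public:
  void Push(std::unique_ptr<Message> msg) {
    messages_.push_back(std::move(msg));
  }

  // Peek at the front message without removing it. A retry simply handles
  // the same front element again; nothing is popped and re-inserted.
  Message* Front() {
    return messages_.empty() ? nullptr : messages_.front().get();
  }

  // Called only after the front message has actually been handled.
  void PopHandled() { messages_.pop_front(); }

 private:
  std::deque<std::unique_ptr<Message>> messages_;
};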
Patch Set 1 #
Total comments: 29
Patch Set 2 : dyen + dcheng review #
Patch Set 3 : couple of things missed in the last patch #
Total comments: 10
Patch Set 4 : piman's review #
Patch Set 5 : handle preemption better #
Total comments: 5
Patch Set 6 : piman #
Total comments: 11
Patch Set 7 : dcheng's review #
Total comments: 6
Patch Set 8 : fix compile error and dcheck fail #
Patch Set 9 : dcheng's final comments #
Messages
Total messages: 37 (7 generated)
sunnyps@chromium.org changed reviewers: + dcheng@chromium.org, piman@chromium.org
PTAL

This is the first step in making the message queue hold all the state for a stream. The plan is that the gpu scheduler will hold references to the message queues so that it can inspect them on both the IO thread and the main thread.

dcheng@: Please review gpu_messages.h - the only change made there is that I've removed GpuCommandBufferMsg_Reschedule and changed the comment about synthesizing the RetireSyncPoint message.
+dyen@
dyen@chromium.org changed reviewers: + dyen@chromium.org
https://codereview.chromium.org/1336623004/diff/1/content/common/gpu/gpu_chan...
File content/common/gpu/gpu_channel.h (left):

https://codereview.chromium.org/1336623004/diff/1/content/common/gpu/gpu_chan...
content/common/gpu/gpu_channel.h:182: uint32_t GetCurrentOrderNum() const { return current_order_num_; }
Was this accidentally deleted? I'm using this in my other patch.
dcheng@chromium.org changed reviewers: - dyen@chromium.org
IPC changes are fine, but some other thoughts.

https://codereview.chromium.org/1336623004/diff/1/content/common/gpu/gpu_chan...
File content/common/gpu/gpu_channel.cc (right):

https://codereview.chromium.org/1336623004/diff/1/content/common/gpu/gpu_chan...
content/common/gpu/gpu_channel.cc:117: DCHECK(message.type() == GpuCommandBufferMsg_InsertSyncPoint::ID);
DCHECK_EQ

https://codereview.chromium.org/1336623004/diff/1/content/common/gpu/gpu_chan...
content/common/gpu/gpu_channel.cc:178: }
Won't this leak the popped element?

https://codereview.chromium.org/1336623004/diff/1/content/common/gpu/gpu_chan...
content/common/gpu/gpu_channel.cc:894: if (preempting_flag_.get()) {
if (preempting_flag_) {

https://codereview.chromium.org/1336623004/diff/1/content/common/gpu/gpu_chan...
File content/common/gpu/gpu_channel.h (right):

https://codereview.chromium.org/1336623004/diff/1/content/common/gpu/gpu_chan...
content/common/gpu/gpu_channel.h:408: base::WeakPtr<GpuChannel> gpu_channel,
const base::WeakPtr<GpuChannel>& here and in the ctor?

https://codereview.chromium.org/1336623004/diff/1/content/common/gpu/gpu_chan...
content/common/gpu/gpu_channel.h:415: uint32_t GetProccessedOrderNum() const;
FWIW, simple accessors are usually inline and named like this:

uint32_t processed_order_num() const { return processed_order_num_; }

https://codereview.chromium.org/1336623004/diff/1/content/common/gpu/gpu_chan...
content/common/gpu/gpu_channel.h:447: void PushMessageHelper(GpuChannelMessage* msg);
scoped_ptr<GpuChannelMessage> msg to make the ownership semantics clear.

https://codereview.chromium.org/1336623004/diff/1/content/common/gpu/gpu_chan...
content/common/gpu/gpu_channel.h:455: std::deque<GpuChannelMessage*> channel_messages_;
Ownership semantics for these members should be documented.

https://codereview.chromium.org/1336623004/diff/1/content/common/gpu/gpu_comm...
File content/common/gpu/gpu_command_buffer_stub.cc (right):

https://codereview.chromium.org/1336623004/diff/1/content/common/gpu/gpu_comm...
content/common/gpu/gpu_command_buffer_stub.cc:783: DCHECK(command_buffer_.get());
DCHECK(command_buffer_);

https://codereview.chromium.org/1336623004/diff/1/content/common/gpu/gpu_comm...
content/common/gpu/gpu_command_buffer_stub.cc:787: DCHECK(flush_count - last_flush_count_ < 0x8000000U)
DCHECK_LT
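
As a sketch of the ownership-transfer signature suggested for PushMessageHelper above — assuming std::unique_ptr in place of Chromium's scoped_ptr, with an illustrative GpuChannelMessage type:

#include <deque>
#include <memory>
#include <utility>

struct GpuChannelMessage {};  // Stand-in: wraps an IPC::Message plus metadata.

class GpuChannelMessageQueue {
 public:
  // Passing the unique_ptr by value documents that the queue takes ownership.
  void PushMessageHelper(std::unique_ptr<GpuChannelMessage> msg) {
    channel_messages_.push_back(std::move(msg));
  }

 private:
  // Owned by the queue, which also documents the ownership semantics.
  std::deque<std::unique_ptr<GpuChannelMessage>> channel_messages_;
};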
dyen@chromium.org changed reviewers: + dyen@chromium.org
https://codereview.chromium.org/1336623004/diff/1/content/common/gpu/gpu_chan...
File content/common/gpu/gpu_channel.cc (right):

https://codereview.chromium.org/1336623004/diff/1/content/common/gpu/gpu_chan...
content/common/gpu/gpu_channel.cc:316: const uint32_t order_number = global_order_counter_++;
Did you mean to use GpuChannelManager::GetNextUnprocessedOrderNum() here?

https://codereview.chromium.org/1336623004/diff/1/content/common/gpu/gpu_chan...
content/common/gpu/gpu_channel.cc:885: if (stub->IsPreempted())
Don't we want to check for preemption before we handle the message?

https://codereview.chromium.org/1336623004/diff/1/content/common/gpu/gpu_chan...
content/common/gpu/gpu_channel.cc:891: ScheduleHandleMessage();
Did you intend to make messages in a channel only be scheduled after the handling of a message is done? I'm not sure how that will affect scheduling.

https://codereview.chromium.org/1336623004/diff/1/content/common/gpu/gpu_chan...
File content/common/gpu/gpu_channel_manager.cc (right):

https://codereview.chromium.org/1336623004/diff/1/content/common/gpu/gpu_chan...
content/common/gpu/gpu_channel_manager.cc:255: base::AutoLock lock(order_num_lock_);
Is the new scheduler also going to use this? It seems like we exchanged a lock on all the channels during preemption for a lock on every IPC message; it's hard to tell which is better. Should we just use an Atomic32?
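
For reference, a sketch of the Atomic32 alternative raised here, written with std::atomic rather than base's Atomic32 and with a hypothetical class name; it trades the per-message lock for a lock-free increment:

#include <atomic>
#include <cstdint>

class GpuOrderNumAllocator {
 public:
  // Hands out monotonically increasing order numbers without taking a lock.
  // Relaxed ordering is enough when only uniqueness/monotonicity matter.
  uint32_t GetNextUnprocessedOrderNum() {
    return next_order_num_.fetch_add(1, std::memory_order_relaxed) + 1;
  }

 private:
  std::atomic<uint32_t> next_order_num_{0};
};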
sunnyps@chromium.org changed reviewers: + jbauman@chromium.org
PTAL. Also +jbauman.

https://codereview.chromium.org/1336623004/diff/1/content/common/gpu/gpu_chan...
File content/common/gpu/gpu_channel.cc (right):

https://codereview.chromium.org/1336623004/diff/1/content/common/gpu/gpu_chan...
content/common/gpu/gpu_channel.cc:117: DCHECK(message.type() == GpuCommandBufferMsg_InsertSyncPoint::ID);
On 2015/09/10 21:30:37, dcheng wrote:
> DCHECK_EQ
Done.

https://codereview.chromium.org/1336623004/diff/1/content/common/gpu/gpu_chan...
content/common/gpu/gpu_channel.cc:178: }
On 2015/09/10 21:30:37, dcheng wrote:
> Won't this leak the popped element?
Thanks for catching this. Fixed for now, but I think GpuChannelMessage should be copyable - it holds an IPC::Message plus a few other things. WDYT?

https://codereview.chromium.org/1336623004/diff/1/content/common/gpu/gpu_chan...
content/common/gpu/gpu_channel.cc:316: const uint32_t order_number = global_order_counter_++;
On 2015/09/10 22:42:06, David Yen wrote:
> Did you mean to use GpuChannelManager::GetNextUnprocessedOrderNum() here?
Yes. Thanks for catching this. I've removed the shared global counter from the channel manager, so this is as it was before.

https://codereview.chromium.org/1336623004/diff/1/content/common/gpu/gpu_chan...
content/common/gpu/gpu_channel.cc:885: if (stub->IsPreempted())
On 2015/09/10 22:42:06, David Yen wrote:
> Don't we want to check for preemption before we handle the message?
The flush will early out if the stub is preempted or descheduled.

https://codereview.chromium.org/1336623004/diff/1/content/common/gpu/gpu_chan...
content/common/gpu/gpu_channel.cc:891: ScheduleHandleMessage();
On 2015/09/10 22:42:06, David Yen wrote:
> Did you intend to make messages in a channel only be scheduled after the handling of a message is done? I'm not sure how that will affect scheduling.
The first HandleMessage is called from the message queue if it was empty before it received the message. Subsequent HandleMessage calls are scheduled by the channel itself. When the scheduler arrives it will be responsible for calling HandleMessage, so some of this code will move to the scheduler.

https://codereview.chromium.org/1336623004/diff/1/content/common/gpu/gpu_chan...
content/common/gpu/gpu_channel.cc:894: if (preempting_flag_.get()) {
On 2015/09/10 21:30:37, dcheng wrote:
> if (preempting_flag_) {
Done.

https://codereview.chromium.org/1336623004/diff/1/content/common/gpu/gpu_chan...
File content/common/gpu/gpu_channel.h (left):

https://codereview.chromium.org/1336623004/diff/1/content/common/gpu/gpu_chan...
content/common/gpu/gpu_channel.h:182: uint32_t GetCurrentOrderNum() const { return current_order_num_; }
On 2015/09/10 21:23:04, David Yen wrote:
> Was this accidentally deleted? I'm using this in my other patch.
Done. Renamed to current_order_num per naming convention.

https://codereview.chromium.org/1336623004/diff/1/content/common/gpu/gpu_chan...
File content/common/gpu/gpu_channel.h (right):

https://codereview.chromium.org/1336623004/diff/1/content/common/gpu/gpu_chan...
content/common/gpu/gpu_channel.h:408: base::WeakPtr<GpuChannel> gpu_channel,
On 2015/09/10 21:30:37, dcheng wrote:
> const base::WeakPtr<GpuChannel>& here and in the ctor?
Done. Also removed the unnecessary refcounting here and in the ctor.

https://codereview.chromium.org/1336623004/diff/1/content/common/gpu/gpu_chan...
content/common/gpu/gpu_channel.h:415: uint32_t GetProccessedOrderNum() const;
On 2015/09/10 21:30:37, dcheng wrote:
> FWIW, simple accessors are usually inline and named like this:
> uint32_t processed_order_num() const { return processed_order_num_; }
Done.

https://codereview.chromium.org/1336623004/diff/1/content/common/gpu/gpu_chan...
content/common/gpu/gpu_channel.h:447: void PushMessageHelper(GpuChannelMessage* msg);
On 2015/09/10 21:30:37, dcheng wrote:
> scoped_ptr<GpuChannelMessage> msg to make the ownership semantics clear.
Done.

https://codereview.chromium.org/1336623004/diff/1/content/common/gpu/gpu_chan...
content/common/gpu/gpu_channel.h:455: std::deque<GpuChannelMessage*> channel_messages_;
On 2015/09/10 21:30:37, dcheng wrote:
> Ownership semantics for these members should be documented.
Done.

https://codereview.chromium.org/1336623004/diff/1/content/common/gpu/gpu_chan...
File content/common/gpu/gpu_channel_manager.cc (right):

https://codereview.chromium.org/1336623004/diff/1/content/common/gpu/gpu_chan...
content/common/gpu/gpu_channel_manager.cc:255: base::AutoLock lock(order_num_lock_);
On 2015/09/10 22:42:06, David Yen wrote:
> Is the new scheduler also going to use this? It seems like we exchanged a lock on all the channels during preemption for a lock on every IPC message; it's hard to tell which is better.
> Should we just use an Atomic32?
On second thought I'm keeping this as is for now - it's likely that the scheduler will be in charge of idle work anyway.

https://codereview.chromium.org/1336623004/diff/1/content/common/gpu/gpu_comm...
File content/common/gpu/gpu_command_buffer_stub.cc (right):

https://codereview.chromium.org/1336623004/diff/1/content/common/gpu/gpu_comm...
content/common/gpu/gpu_command_buffer_stub.cc:783: DCHECK(command_buffer_.get());
On 2015/09/10 21:30:38, dcheng wrote:
> DCHECK(command_buffer_);
Done.

https://codereview.chromium.org/1336623004/diff/1/content/common/gpu/gpu_comm...
content/common/gpu/gpu_command_buffer_stub.cc:787: DCHECK(flush_count - last_flush_count_ < 0x8000000U)
On 2015/09/10 21:30:37, dcheng wrote:
> DCHECK_LT
Done.
https://codereview.chromium.org/1336623004/diff/40001/content/common/gpu/gpu_...
File content/common/gpu/gpu_channel.cc (right):

https://codereview.chromium.org/1336623004/diff/40001/content/common/gpu/gpu_...
content/common/gpu/gpu_channel.cc:103: if (enabled_)
nit: needs {} per style.

https://codereview.chromium.org/1336623004/diff/40001/content/common/gpu/gpu_...
content/common/gpu/gpu_channel.cc:849: << " with type " << message.type();
Are you saying that we have a strong guarantee that if we get here (HandleMessage was posted and we had a message), then the stub is necessarily scheduled? It's definitely worth a DCHECK, but I'm not 100% convinced. For example, I suspect it's possible to be unscheduled and preempted at the same time (if only because IsPreempted can change at any time on the IO thread), in which case we may post a task on l.888 even though we're unscheduled.

https://codereview.chromium.org/1336623004/diff/40001/content/common/gpu/gpu_...
File content/common/gpu/gpu_channel.h (right):

https://codereview.chromium.org/1336623004/diff/40001/content/common/gpu/gpu_...
content/common/gpu/gpu_channel.h:469: uint32_t current_order_num_;
This is not referenced anywhere anymore. Remove/restore? (dyen@ needs it though... maybe move it to GpuChannel, since it's not a property of the queue so much as current execution state.)

https://codereview.chromium.org/1336623004/diff/40001/content/common/gpu/gpu_...
File content/common/gpu/gpu_command_buffer_stub.cc (right):

https://codereview.chromium.org/1336623004/diff/40001/content/common/gpu/gpu_...
content/common/gpu/gpu_command_buffer_stub.cc:788: << "Received a Flush message out-of-order";
We really shouldn't make it a DCHECK (NOTREACHED was kinda wrong too), because we're checking untrusted data. We can DVLOG though (all that can happen is the client screwing itself up).
PTAL

https://codereview.chromium.org/1336623004/diff/40001/content/common/gpu/gpu_...
File content/common/gpu/gpu_channel.cc (right):

https://codereview.chromium.org/1336623004/diff/40001/content/common/gpu/gpu_...
content/common/gpu/gpu_channel.cc:103: if (enabled_)
On 2015/09/11 00:38:33, piman (slow to review) wrote:
> nit: needs {} per style.
Done.

https://codereview.chromium.org/1336623004/diff/40001/content/common/gpu/gpu_...
content/common/gpu/gpu_channel.cc:849: << " with type " << message.type();
On 2015/09/11 00:38:33, piman (slow to review) wrote:
> Are you saying that we have a strong guarantee that if we get here (HandleMessage was posted and we had a message), then the stub is necessarily scheduled? It's definitely worth a DCHECK, but I'm not 100% convinced. For example, I suspect it's possible to be unscheduled and preempted at the same time (if only because IsPreempted can change at any time on the IO thread), in which case we may post a task on l.888 even though we're unscheduled.
I think this is true most of the time - the message queue only posts HandleMessage when there's a new message and it was previously empty. The GpuChannel posts HandleMessage only when there are more messages on the message queue. The only thing I'm worried about is the race between HandleMessage posted from the message queue and the GpuChannel calling DeleteAndDestroyMessages on the message queue.

Eventually I would like this condition to hold so that we don't have any unnecessary PostTasks.

https://codereview.chromium.org/1336623004/diff/40001/content/common/gpu/gpu_...
File content/common/gpu/gpu_channel.h (right):

https://codereview.chromium.org/1336623004/diff/40001/content/common/gpu/gpu_...
content/common/gpu/gpu_channel.h:469: uint32_t current_order_num_;
On 2015/09/11 00:38:33, piman (slow to review) wrote:
> This is not referenced anywhere anymore. Remove/restore?
> (dyen@ needs it though... maybe move it to GpuChannel, since it's not a property of the queue so much as current execution state.)
Done.

https://codereview.chromium.org/1336623004/diff/40001/content/common/gpu/gpu_...
File content/common/gpu/gpu_command_buffer_stub.cc (right):

https://codereview.chromium.org/1336623004/diff/40001/content/common/gpu/gpu_...
content/common/gpu/gpu_command_buffer_stub.cc:788: << "Received a Flush message out-of-order";
On 2015/09/11 00:38:34, piman (slow to review) wrote:
> We really shouldn't make it a DCHECK (NOTREACHED was kinda wrong too), because we're checking untrusted data.
> We can DVLOG though (all that can happen is the client screwing itself up).
Done.
https://codereview.chromium.org/1336623004/diff/40001/content/common/gpu/gpu_...
File content/common/gpu/gpu_channel.cc (right):

https://codereview.chromium.org/1336623004/diff/40001/content/common/gpu/gpu_...
content/common/gpu/gpu_channel.cc:849: << " with type " << message.type();
On 2015/09/11 01:53:22, sunnyps wrote:
> I think this is true most of the time - the message queue only posts HandleMessage when there's a new message and it was previously empty. [...]
What about the example I mentioned? If on a previous HandleMessage the stub got unscheduled (i.e. it still has unprocessed commands) but was also preempted, we would post a task on l.888. When we get here, the stub is still unscheduled, yet we would still process messages (e.g. retire sync points). That's invalid.

Similarly, if we get here but are preempted, we'll still process messages, even though we shouldn't.
https://codereview.chromium.org/1336623004/diff/40001/content/common/gpu/gpu_...
File content/common/gpu/gpu_channel.cc (right):

https://codereview.chromium.org/1336623004/diff/40001/content/common/gpu/gpu_...
content/common/gpu/gpu_channel.cc:849: << " with type " << message.type();
On 2015/09/11 02:30:07, piman (slow to review) wrote:
> What about the example I mentioned? If on a previous HandleMessage the stub got unscheduled (i.e. it still has unprocessed commands) but was also preempted, we would post a task on l.888. When we get here, the stub is still unscheduled, yet we would still process messages (e.g. retire sync points). That's invalid.
> Similarly, if we get here but are preempted, we'll still process messages, even though we shouldn't.
I misunderstood your original comment, thinking it was about the TODO above. You're right that it's possible that we're descheduled or preempted and still process messages. I'll try to fix that in the next patch.
https://codereview.chromium.org/1336623004/diff/1/content/common/gpu/gpu_chan...
File content/common/gpu/gpu_channel_manager.cc (right):

https://codereview.chromium.org/1336623004/diff/1/content/common/gpu/gpu_chan...
content/common/gpu/gpu_channel_manager.cc:255: base::AutoLock lock(order_num_lock_);
On 2015/09/10 23:38:36, sunnyps wrote:
> On second thought I'm keeping this as is for now - it's likely that the scheduler will be in charge of idle work anyway.
Per your comment yesterday, I think you are right. We can detect idle by simply using the highest current order number. We could even have a global one stored in the GpuChannelManager that is updated as messages are handled, and then we wouldn't even need to loop through the gpu channels.
PTAL. Fixed the issues with preemption. I think this is better than what we had before, because we now stop processing messages when preempted, even if the front message is not a command buffer message.
ping PTAL
lgtm

https://codereview.chromium.org/1336623004/diff/80001/content/common/gpu/gpu_...
File content/common/gpu/gpu_channel.cc (right):

https://codereview.chromium.org/1336623004/diff/80001/content/common/gpu/gpu_...
content/common/gpu/gpu_channel.cc:859: << " with type " << message.type();
Can you add a DCHECK(!stub || stub->IsScheduled())? It may be true today that we only unschedule as the side effect of commands, but that wasn't always the case.

https://codereview.chromium.org/1336623004/diff/80001/content/common/gpu/gpu_...
File content/common/gpu/gpu_command_buffer_stub.cc (right):

https://codereview.chromium.org/1336623004/diff/80001/content/common/gpu/gpu_...
content/common/gpu/gpu_command_buffer_stub.cc:788: DVLOG(0) << "Received a Flush message out-of-order";
nit: use DVLOG_IF
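
A sketch of the DVLOG_IF shape being asked for, assuming Chromium's base/logging.h macros; the wrapper function is illustrative. Because flush_count is untrusted client data, the failure is only logged, never DCHECKed:

#include <cstdint>

#include "base/logging.h"

void CheckFlushOrder(uint32_t flush_count, uint32_t last_flush_count) {
  // Unsigned wraparound math: in-order flushes keep the delta small, so a
  // huge delta means an out-of-order flush. Log in debug builds, don't crash.
  DVLOG_IF(0, flush_count - last_flush_count >= 0x8000000U)
      << "Received a Flush message out-of-order";
}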
https://codereview.chromium.org/1336623004/diff/80001/content/common/gpu/gpu_...
File content/common/gpu/gpu_channel.cc (right):

https://codereview.chromium.org/1336623004/diff/80001/content/common/gpu/gpu_...
content/common/gpu/gpu_channel.cc:859: << " with type " << message.type();
On 2015/09/15 23:05:11, piman (slow to review) wrote:
> Can you add a DCHECK(!stub || stub->IsScheduled())?
> It may be true today that we only unschedule as the side effect of commands, but that wasn't always the case.
I don't think this DCHECK holds when the last command in a command buffer deschedules the command buffer. However, we should handle that case correctly - AsyncFlush will just early out.

Would you prefer adding an early-out condition in HandleMessage instead? I have an upcoming CL in which the stream keeps track of a "CanRun" state, which is a function of the preemption flag, the front message, and the IsScheduled state of all the stubs.
On 2015/09/15 23:18:51, sunnyps wrote:
> I don't think this DCHECK holds when the last command in a command buffer deschedules the command buffer. However, we should handle that case correctly - AsyncFlush will just early out.
If the last command returns kDeferCommandUntilLater, it needs to be re-run, so I don't think it's possible to get to a state where we're unscheduled and there's no command remaining (unless there's a bug).

> Would you prefer adding an early-out condition in HandleMessage instead? I have an upcoming CL in which the stream keeps track of a "CanRun" state, which is a function of the preemption flag, the front message, and the IsScheduled state of all the stubs.
Looking forward to that :)
dcheng: PTAL

https://codereview.chromium.org/1336623004/diff/80001/content/common/gpu/gpu_...
File content/common/gpu/gpu_channel.cc (right):

https://codereview.chromium.org/1336623004/diff/80001/content/common/gpu/gpu_...
content/common/gpu/gpu_channel.cc:859: << " with type " << message.type();
On 2015/09/15 23:18:51, sunnyps wrote:
> Would you prefer adding an early-out condition in HandleMessage instead? [...]
Done.

https://codereview.chromium.org/1336623004/diff/80001/content/common/gpu/gpu_...
File content/common/gpu/gpu_command_buffer_stub.cc (right):

https://codereview.chromium.org/1336623004/diff/80001/content/common/gpu/gpu_...
content/common/gpu/gpu_command_buffer_stub.cc:788: DVLOG(0) << "Received a Flush message out-of-order";
On 2015/09/15 23:05:11, piman (slow to review) wrote:
> nit: use DVLOG_IF
Done.
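
Putting the thread's conclusion together, a sketch of the guard in HandleMessage - piman's DCHECK plus an early-out on preemption. Stub and the dispatch step are illustrative stand-ins, not the CL's actual classes:

#include <cassert>

struct Stub {
  bool scheduled = true;
  bool preempted = false;
  bool IsScheduled() const { return scheduled; }
  bool IsPreempted() const { return preempted; }
};

// Returns true if the front message was handled and may now be popped.
bool HandleFrontMessage(Stub* stub) {
  assert(!stub || stub->IsScheduled());  // piman's suggested DCHECK.
  if (stub && stub->IsPreempted()) {
    return false;  // Leave the message at the front; retry when unpreempted.
  }
  // ... dispatch the message to the stub here ...
  return true;  // Pop only after the message has been handled.
}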
lgtm
lgtm
https://codereview.chromium.org/1336623004/diff/100001/content/common/gpu/gpu...
File content/common/gpu/gpu_channel.cc (right):

https://codereview.chromium.org/1336623004/diff/100001/content/common/gpu/gpu...
content/common/gpu/gpu_channel.cc:115: DCHECK_EQ(message.type(), GpuCommandBufferMsg_InsertSyncPoint::ID);
For consistency with gtest, it's nice to put the expected value first.

https://codereview.chromium.org/1336623004/diff/100001/content/common/gpu/gpu...
content/common/gpu/gpu_channel.cc:121: GpuChannelMessage* msg = new GpuChannelMessage(order_number, message);
Just make this a scoped_ptr.

https://codereview.chromium.org/1336623004/diff/100001/content/common/gpu/gpu...
content/common/gpu/gpu_channel.cc:180: GpuChannelMessage* msg = out_of_order_messages_.front();
Just make |msg| here and above a scoped_ptr<> and get rid of the explicit delete.

https://codereview.chromium.org/1336623004/diff/100001/content/common/gpu/gpu...
content/common/gpu/gpu_channel.cc:200: GpuChannelMessage* msg = channel_messages_.front();
Ditto.

https://codereview.chromium.org/1336623004/diff/100001/content/common/gpu/gpu...
content/common/gpu/gpu_channel.cc:854: IPC::Message& message = m->message;
Why does this need to be a mutable ref?

https://codereview.chromium.org/1336623004/diff/100001/content/common/gpu/gpu...
File content/common/gpu/gpu_channel.h (right):

https://codereview.chromium.org/1336623004/diff/100001/content/common/gpu/gpu...
content/common/gpu/gpu_channel.h:407: };
DISALLOW_COPY_AND_ASSIGN? Since it looks like the code goes to some effort to avoid copying these around, may as well make it explicit.
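
A sketch of the pop pattern being suggested here: wrap the raw front pointer immediately so the explicit delete disappears. std::unique_ptr stands in for scoped_ptr, and the deque-of-raw-pointers layout mirrors the channel_messages_ member discussed earlier; the free function is illustrative:

#include <deque>
#include <memory>

struct GpuChannelMessage {};  // Illustrative stand-in.

std::deque<GpuChannelMessage*> channel_messages_;

std::unique_ptr<GpuChannelMessage> PopFrontMessage() {
  // Take ownership of the raw front pointer before popping the deque.
  std::unique_ptr<GpuChannelMessage> msg(channel_messages_.front());
  channel_messages_.pop_front();
  return msg;  // Deleted automatically when the caller drops it.
}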
PTAL

https://codereview.chromium.org/1336623004/diff/100001/content/common/gpu/gpu...
File content/common/gpu/gpu_channel.cc (right):

https://codereview.chromium.org/1336623004/diff/100001/content/common/gpu/gpu...
content/common/gpu/gpu_channel.cc:115: DCHECK_EQ(message.type(), GpuCommandBufferMsg_InsertSyncPoint::ID);
On 2015/09/17 00:15:18, dcheng wrote:
> For consistency with gtest, it's nice to put the expected value first.
Done.

https://codereview.chromium.org/1336623004/diff/100001/content/common/gpu/gpu...
content/common/gpu/gpu_channel.cc:121: GpuChannelMessage* msg = new GpuChannelMessage(order_number, message);
On 2015/09/17 00:15:18, dcheng wrote:
> Just make this a scoped_ptr.
Done.

https://codereview.chromium.org/1336623004/diff/100001/content/common/gpu/gpu...
content/common/gpu/gpu_channel.cc:180: GpuChannelMessage* msg = out_of_order_messages_.front();
On 2015/09/17 00:15:18, dcheng wrote:
> Just make |msg| here and above a scoped_ptr<> and get rid of the explicit delete.
Done.

https://codereview.chromium.org/1336623004/diff/100001/content/common/gpu/gpu...
content/common/gpu/gpu_channel.cc:200: GpuChannelMessage* msg = channel_messages_.front();
On 2015/09/17 00:15:18, dcheng wrote:
> Ditto.
Done.

https://codereview.chromium.org/1336623004/diff/100001/content/common/gpu/gpu...
File content/common/gpu/gpu_channel.h (right):

https://codereview.chromium.org/1336623004/diff/100001/content/common/gpu/gpu...
content/common/gpu/gpu_channel.h:407: };
On 2015/09/17 00:15:18, dcheng wrote:
> DISALLOW_COPY_AND_ASSIGN? Since it looks like the code goes to some effort to avoid copying these around, may as well make it explicit.
Done.
lgtm
https://codereview.chromium.org/1336623004/diff/120001/content/common/gpu/gpu...
File content/common/gpu/gpu_channel.cc (right):

https://codereview.chromium.org/1336623004/diff/120001/content/common/gpu/gpu...
content/common/gpu/gpu_channel.cc:121: scoped_ptr<GpuChannelMessage> msg =
Forgot to add: here and elsewhere, just do

scoped_ptr<GpuChannelMessage> msg(new GpuChannelMessage(...));

make_scoped_ptr isn't really needed here.
https://codereview.chromium.org/1336623004/diff/120001/content/common/gpu/gpu...
File content/common/gpu/gpu_channel.cc (right):

https://codereview.chromium.org/1336623004/diff/120001/content/common/gpu/gpu...
content/common/gpu/gpu_channel.cc:174: scoped_ptr<GpuChannelMessage> msg =
Since all the scoped_ptrs here are only used when popping the front, perhaps it would be easier to make the container a ScopedVector; that way every pop deletes the element automatically, and we don't have to worry about calling the right release() and Pass() calls.

https://codereview.chromium.org/1336623004/diff/120001/content/common/gpu/gpu...
content/common/gpu/gpu_channel.cc:221: scoped_ptr<GpuChannelMessage> msg) {
This scoped_ptr doesn't seem necessary: all the callers use make_scoped_ptr before, and this function simply releases it in all paths.
https://codereview.chromium.org/1336623004/diff/120001/content/common/gpu/gpu...
File content/common/gpu/gpu_channel.cc (right):

https://codereview.chromium.org/1336623004/diff/120001/content/common/gpu/gpu...
content/common/gpu/gpu_channel.cc:174: scoped_ptr<GpuChannelMessage> msg =
On 2015/09/17 17:04:27, David Yen wrote:
> Since all the scoped_ptrs here are only used when popping the front, perhaps it would be easier to make the container a ScopedVector; that way every pop deletes the element automatically, and we don't have to worry about calling the right release() and Pass() calls.
I considered that, but ScopedVector is a part of cc so I can't add it here. Personally I don't think it's a big deal if we copy the GpuChannelMessage struct, but I wanted to keep the code as it is and address dcheng's comments about making the lifetimes explicit.

https://codereview.chromium.org/1336623004/diff/120001/content/common/gpu/gpu...
content/common/gpu/gpu_channel.cc:221: scoped_ptr<GpuChannelMessage> msg) {
On 2015/09/17 17:04:27, David Yen wrote:
> This scoped_ptr doesn't seem necessary: all the callers use make_scoped_ptr before, and this function simply releases it in all paths.
Conceptually this function takes ownership of the message. We only use release() because we don't have a ScopedVector and have to make do with a deque of raw pointers. If we had a ScopedVector we would be passing ownership into it.
dyen@ just pointed out that there's a ScopedVector implementation in base. I'll update the CL to use that.
After discussing offline with dyen@, we decided not to use ScopedVector because what we really need is a ScopedPtrDeque, which isn't in base. A std::deque of pointers is OK for now.

https://codereview.chromium.org/1336623004/diff/120001/content/common/gpu/gpu...
File content/common/gpu/gpu_channel.cc (right):

https://codereview.chromium.org/1336623004/diff/120001/content/common/gpu/gpu...
content/common/gpu/gpu_channel.cc:121: scoped_ptr<GpuChannelMessage> msg =
On 2015/09/17 00:41:31, dcheng wrote:
> Forgot to add: here and elsewhere, just do
> scoped_ptr<GpuChannelMessage> msg(new GpuChannelMessage(...));
> make_scoped_ptr isn't really needed here.
Done.
The CQ bit was checked by sunnyps@chromium.org
The patchset sent to the CQ was uploaded after l-g-t-m from jbauman@chromium.org, piman@chromium.org, dcheng@chromium.org.
Link to the patchset: https://codereview.chromium.org/1336623004/#ps160001 (title: "dcheng's final comments")
CQ is trying da patch.
Follow status at https://chromium-cq-status.appspot.com/patch-status/1336623004/160001
View timeline at https://chromium-cq-status.appspot.com/patch-timeline/1336623004/160001
Message was sent while issue was closed.
Committed patchset #9 (id:160001)
Message was sent while issue was closed.
Patchset 9 (id:??) landed as https://crrev.com/4a6a3d8c391412023e86a755fd35840baa22e578
Cr-Commit-Position: refs/heads/master@{#349584}