Created: 5 years, 1 month ago by Julien Isorce (Samsung)
Modified: 5 years, 1 month ago
CC: chromium-reviews, darin-cc_chromium.org, jam, piman+watch_chromium.org
Base URL: https://chromium.googlesource.com/chromium/src.git@master
Target Ref: refs/pending/heads/master
Project: chromium
Visibility: Public
Description

On Linux desktop, when "use_virtualized_gl_context" is true
and --use-gl=egl is passed, the browser shows garbage.
Indeed, in GpuCommandBufferStub::OnInitialize the call to
context->Initialize(surface_.get(), gpu_preference_) fails.
The error actually comes from eglMakeCurrent, which returns
EGL_BAD_MATCH because the surface config is not compatible with
the context config.
The problem is that EGL_BUFFER_SIZE is always 32 for off-screen
surfaces (see gl_surface_egl.cc::InitializeOneOff), whereas
for on-screen surfaces EGL_BUFFER_SIZE is the window depth.
This depth is 24 by default, unless --enable-transparent-visuals
is passed on the command line.
When using Mesa the error is raised here:

   /* If the context has a config then it must match that of the two
    * surfaces */
   if (ctx->Config) {
      if ((draw && draw->Config != ctx->Config) ||
          (read && read->Config != ctx->Config))
         return _eglError(EGL_BAD_MATCH, "eglMakeCurrent");
   }

from:
cgit.freedesktop.org/mesa/mesa/tree/src/egl/main/eglcontext.c#n630
This patch passes the main window depth from the browser to the
GPU process through a new switch, kWindowDepth.
This allows GLSurfaceEGL::InitializeOneOff to select an EGLConfig
that matches the EGLConfig of future on-screen surfaces.
Also note that the long-term plan is to always enable transparent
visuals, see http://crbug.com/369209.
This patch also moves the kEnableTransparentVisuals switch from views
to X11 switches, as it is defined for X11 only.
BUG=557389
R=jbauman@chromium.org, hendrikw@chromium.org, piman@chromium.org, sadrul@chromium.org, cwallez1@chromium.org, kbr@chromium.org
Review URL: https://codereview.chromium.org/1429083002
TEST= chrome --use-gl=egl --use_virtualized_gl_contexts
Committed: https://crrev.com/5ade848cf56941fa2a61cd1c0e2cb9d7349a9194
Cr-Commit-Position: refs/heads/master@{#361114}
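To make the mechanism concrete, here is a minimal sketch of the fix the description outlines: the GPU process picks EGLConfig attributes from the window depth forwarded by the browser. The function name and surrounding plumbing are illustrative assumptions, not the actual patch; only the 32/8 versus 24/0 buffer-size/alpha pairing comes from this review.

#include <EGL/egl.h>

// Hypothetical sketch (not Chromium code): choose an EGLConfig whose
// EGL_BUFFER_SIZE matches the window depth forwarded by the browser, so
// that on-screen and off-screen surfaces get compatible configs.
EGLConfig ChooseConfigForWindowDepth(EGLDisplay display, int window_depth) {
  // Depth 32 implies an 8-bit alpha channel (transparent visuals);
  // depth 24 implies no alpha. Other depths are not handled here.
  const EGLint buffer_size = (window_depth == 32) ? 32 : 24;
  const EGLint alpha_size = (window_depth == 32) ? 8 : 0;
  const EGLint config_attribs[] = {
      EGL_BUFFER_SIZE,     buffer_size,
      EGL_ALPHA_SIZE,      alpha_size,
      EGL_RED_SIZE,        8,
      EGL_GREEN_SIZE,      8,
      EGL_BLUE_SIZE,       8,
      EGL_RENDERABLE_TYPE, EGL_OPENGL_ES2_BIT,
      EGL_SURFACE_TYPE,    EGL_WINDOW_BIT | EGL_PBUFFER_BIT,
      EGL_NONE};
  EGLConfig config = nullptr;
  EGLint num_configs = 0;
  if (!eglChooseConfig(display, config_attribs, &config, 1, &num_configs) ||
      num_configs == 0)
    return nullptr;  // No compatible config; the caller must handle this.
  return config;
}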
Patch Set 1
Patch Set 2
Patch Set 3 (total comments: 2)
Patch Set 4 (total comments: 1)
Patch Set 5 (total comments: 8)
Patch Set 6 (total comments: 4)
Patch Set 7
Patch Set 8 (total comments: 4)
Patch Set 9
Patch Set 10
Messages
Total messages: 72 (21 generated)
On 2015/10/30 14:59:42, jbauman wrote:
> This won't work in the case (can happen on startup with tab restore, or otherwise) that the first GpuCommandBufferStub that's created is offscreen. It could also cause issues if all the onscreen windows go away and it's forced to use the default offscreen surface instead.
>
> Instead we should try to make the offscreen surface use the same config as the onscreen context, similar to UnmappedNativeViewGLSurfaceGLX for GLX.

All right, that makes sense. In order to provide a patch that follows your suggestion, I have some questions:

The problem I see with UnmappedNativeViewGLSurfaceGLX::Initialize() is that the dummy window may not have the same depth as the mapped window. The root window is always 24-bit on X11 (if the X11 driver follows the spec), but the mapped window (the browser window) is 32-bit when "--enable-transparent-visuals" is passed on the command line (see ui/views/widget/desktop_aura/desktop_window_tree_host_x11.cc).

So UnmappedNativeViewGLSurfaceGLX::Initialize() should not rely on the root window, which is always 24-bit. That is why I think that for EGL it should not be done the same way as UnmappedNativeViewGLSurfaceGLX.

Instead, I think the browser window depth should be sent at GPU process creation time, so that GLSurfaceEGL::InitializeOneOff (which initializes ui/gl/gl_surface_egl.cc::g_config) can use the same EGL_BUFFER_SIZE as the window depth. This one-off is called from gpu_main.

Alternatively, instead of sending the depth, maybe checking switches::kEnableTransparentVisuals in GLSurfaceEGL::InitializeOneOff is enough: if true, EGL_BUFFER_SIZE = 32, else EGL_BUFFER_SIZE = 24.

What do you think?
ping ?
On 2015/11/02 15:17:41, j.isorce wrote:
> [full message quoted above: send the browser window depth at GPU process creation time, or key EGL_BUFFER_SIZE off switches::kEnableTransparentVisuals]

That seems reasonable.
On 2015/11/04 21:12:51, jbauman wrote:
> [earlier thread trimmed]
> That seems reasonable.

Before trying this, you might want to check whether we can run X11 in 16-bit. jbauman also mentioned there is 10-bit per channel. Either of these would mess with your plan.
On 2015/11/05 01:26:16, hendrikw wrote:
> [earlier thread trimmed]
> Before trying this, you might want to check whether we can run X11 in 16-bit. jbauman also mentioned there is 10-bit per channel. Either of these would mess with your plan.

I'm not sure how well the current code works with 16bpp, as it's hardcoded to 8-bit red, green, and blue channels. It looks like what happens when the buffer size doesn't match the window depth depends on the driver. I wouldn't worry about it too much, in any case.
On 2015/11/05 01:47:14, jbauman wrote:
> [earlier thread trimmed]
> I'm not sure how well the current code works with 16bpp, as it's hardcoded to 8-bit red, green, and blue channels. It looks like what happens when the buffer size doesn't match the window depth depends on the driver. I wouldn't worry about it too much, in any case.

Hi, thanks for the remarks. Yes, X11 can run in 16-bit (I was a bit too categorical when I said "the root window is always 24-bit on X11"); this is the case by default on the Raspberry Pi board (though its GLX driver is still under development, but that is another story). I was actually involved in improving the Epiphany browser on the RPi (WebKit1-based and without GL, see http://blog.barisione.org/2014-09/rpi-browser/ ), and X11 was running in 16-bit (RGB565, to be precise).

So you are right: relying on switches::kEnableTransparentVisuals is not a good idea, since an RGBA visual is not guaranteed (i.e. ui/views/widget/desktop_aura/desktop_window_tree_host_x11.cc::GetARGBVisual() can return nullptr).

Also, I am not keen to modify UnmappedNativeViewGLSurfaceGLX::Initialize, since --use-gl=desktop (i.e. GLX in my case), with or without --enable-transparent-visuals, works even when --use_virtualized_gl_contexts is enabled. So I guess having different depths for the off-screen surface (if it is UnmappedNativeViewGLSurfaceGLX) and the on-screen surface does not matter much with GLX + virtualized GL contexts.

In the first place, what is not working on an X11 desktop with a 24-bit root window is: ./out/Release/chrome --use-gl=egl --use_virtualized_gl_contexts (it works if you add --enable-transparent-visuals).

So would it be OK if, for now, I just add a new command-line switch "kMainWinDepth" (ref "the browser win depth should be sent at GPU Process creation") and set it in ui/views/widget/desktop_aura/desktop_window_tree_host_x11.cc::InitX11Window? Then in gl_surface_egl.cc::InitializeOneOff I would retrieve the value of kMainWinDepth and run the same logic as gl_surface_egl.cc::NativeViewGLSurfaceEGL::GetConfig() instead of the current hard-coded logic in InitializeOneOff (doing a bit of refactoring at the same time).

It won't solve the 16bpp problem, but at least the initial problem would be fixed, and kMainWinDepth would allow one to support 16bpp if interested in such a quest. What do you think?
On 2015/11/05 09:11:53, j.isorce wrote:
> [full message quoted above: proposal to add a kMainWinDepth switch set in InitX11Window]

I'd just go with the 24 vs 32 approach and log an issue on it capturing what you've discovered.
On 2015/11/05 09:11:53, j.isorce wrote:
> [earlier thread trimmed]
> So would it be OK if, for now, I just add a new command-line switch "kMainWinDepth" ... and set it in ui/views/widget/desktop_aura/desktop_window_tree_host_x11.cc::InitX11Window?

I'm not sure that would work, as InitX11Window is generally run after the GPU process is created, so it can't modify its command-line flags. I'm still fine with keying off of switches::kEnableTransparentVisuals, though.
On 2015/11/06 00:41:11, jbauman wrote:
> I'm still fine with keying off of switches::kEnableTransparentVisuals, though.

Hi, I uploaded Patch Set 2, which relies on kEnableTransparentVisuals. (How do I re-upload the commit message, since I changed it?)

Also note that I had to add views_switches.cc to one of the gyp files instead of the target name views.gyp:views as in the other gyp, because gyp detects a cycle otherwise. (The initial problem was that kEnableTransparentVisuals was an undefined reference.)

Also note that in views_switches.cc we can read:
// TODO(erg): Remove this switch once we've stabilized the code
// path. http://crbug.com/369209
const char kEnableTransparentVisuals[] = "enable-transparent-visuals";

So I am a bit confused. Any idea?
On 2015/11/06 14:13:35, j.isorce wrote:
> Hi, I uploaded Patch Set 2, which relies on kEnableTransparentVisuals. (How do I re-upload the commit message, since I changed it?)

Hi! In this web interface, click "Edit Issue"; it will let you modify the commit message.

Can you also explain why you need this change? Are you working on something that requires transparent visuals?

> Also note that I had to add views_switches.cc to one of the gyp files instead of the target name views.gyp:views as in the other gyp, because gyp detects a cycle otherwise.

Hmm, adding these dependencies doesn't seem right. Maybe the switch declaration should be moved?

> Also note that in views_switches.cc we can read:
> // TODO(erg): Remove this switch once we've stabilized the code
> // path. http://crbug.com/369209
> const char kEnableTransparentVisuals[] = "enable-transparent-visuals";
>
> So I am a bit confused. Any idea?

From what I can tell, the plan is to always enable transparent visuals, but since there are bugs, it's disabled by default. You can reach out to erg for more info.
On 2015/11/06 16:24:47, hendrikw wrote:
> In this web interface, click "Edit Issue"; it will let you modify the commit message.

Ah, thx!

> Can you also explain why you need this change? Are you working on something that requires transparent visuals?

I have no requirement on transparency, but I am working on a use case that requires real EGL/GLES2 (on the GPU process side, I mean). Basically, I need --use-gl=egl. The problem is that "chrome --use-gl=egl --use_virtualized_gl_context" fails on Linux with Mesa; see the commit message. (And I suppose it fails on any driver, since requiring the same EGLConfig is in the spec.) If you additionally pass --enable-transparent-visuals, then it works (because both configs are 32-bit).

> Hmm, adding these dependencies doesn't seem right. Maybe the switch declaration should be moved?

Good idea; should I move it to ui/base/ui_base_switches.h?

> From what I can tell, the plan is to always enable transparent visuals, but since there are bugs, it's disabled by default. You can reach out to erg for more info.

OK, makes sense. I am actually interested in helping with this. In the meantime, what should we do about the current CL? Thx, Julien
Description was changed from:

==========
GpuCommandBufferStub: ensure to use compatible EGLConfigs

When "use_virtualized_gl_context" is true, GLContext::CreateGLContext is always called with DefaultOffscreenSurface as parameter. Instead it should use the surface that is going to be used to make the context current; otherwise eglMakeCurrent fails when using --use-gl=egl on Linux. The patch applies the same logic as when "use_virtualized_gl_context" is false.

The call to "context->Initialize(surface_.get(), gpu_preference_)" fails if the surface is of type on-screen. The error actually comes from eglMakeCurrent, which returns EGL_BAD_MATCH because the surface config is not compatible with the context config. Indeed, the EGLConfig for the off-screen context is 32-bit because channel_->gpu_channel_manager()->GetDefaultOffscreenSurface()->GetConfig() always returns an EGLConfig of depth 32 (see "EGL_BUFFER_SIZE, 32" in gl_surface_egl.cc::GLSurfaceEGL::InitializeOneOff()), whereas the EGLConfig for the on-screen context will likely select depth 24, the default depth for a window on Linux. In any case, this patch ensures that the same EGLConfig is used when creating the context and when making it current.

[Mesa eglMakeCurrent snippet, as in the final description above.]

Note that this patch might help with the issue reported with commit "gpu: Enable virtual contexts with nvidia drivers only" 9090371ec632dfa86daf43cc8b5309938d238e17

BUG=
R=jbauman@chromium.org, hendrikw@chromium.org
TEST=On Linux desktop run ./out/Debug/chrome --use-gl=egl
==========

to an updated description explaining the EGL_BUFFER_SIZE mismatch and keying the fix off --enable-transparent-visuals (an earlier revision of the final description shown at the top of this review).
Description was changed to add sadrul@chromium.org to the R= line; it is otherwise identical to the previous description.
j.isorce@samsung.com changed reviewers: + sadrul@chromium.org
I updated the CL to Patch Set 3. It fixes "chrome --use-gl=egl --use_virtualized_gl_context" on Linux desktop, which otherwise shows garbage since it fails to make a GL context current.
https://codereview.chromium.org/1429083002/diff/40001/ui/gl/gl_surface_egl.cc
File ui/gl/gl_surface_egl.cc (right):
https://codereview.chromium.org/1429083002/diff/40001/ui/gl/gl_surface_egl.cc...
ui/gl/gl_surface_egl.cc:302: EGLint* config_non_const = const_cast<EGLint*>(kConfigAttribs);
Not a big fan of this const stuff.

How about above the declaration of kConfigAttribs you do something like:

EGLint buffer_size = 32;
EGLint alpha_size = 8;

#if defined(USE_X11) && !defined(OS_CHROMEOS)
if (!base::CommandLine::ForCurrentProcess()->HasSwitch(...)) {
  buffer_size = 24;
  alpha_size = 0;
}
#endif

And then use those vars in the declaration of kConfigAttribs?
https://codereview.chromium.org/1429083002/diff/40001/ui/gl/gl_surface_egl.cc
File ui/gl/gl_surface_egl.cc (right):
ui/gl/gl_surface_egl.cc:302: EGLint* config_non_const = const_cast<EGLint*>(kConfigAttribs);
On 2015/11/11 23:55:46, hendrikw wrote:
> Not a big fan of this const stuff. [suggestion quoted above]

Much easier to read indeed, thx. I addressed the remarks and uploaded Patch Set 4.
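For reference, a self-contained version of the pattern hendrikw sketched above. This is a hedged reconstruction of the Patch Set 4 logic, not the landed code; the helper name is invented for illustration (the real change inlines this in GLSurfaceEGL::InitializeOneOff):

#include <EGL/egl.h>
#include "base/command_line.h"
#include "ui/views/views_switches.h"  // home of the switch at this point in the review

// Compute the two attributes that vary, then build kConfigAttribs from
// them, instead of const_cast-ing a const array after the fact.
void GetOffscreenConfigSizes(EGLint* buffer_size, EGLint* alpha_size) {
  *buffer_size = 32;
  *alpha_size = 8;
#if defined(USE_X11) && !defined(OS_CHROMEOS)
  if (!base::CommandLine::ForCurrentProcess()->HasSwitch(
          switches::kEnableTransparentVisuals)) {
    *buffer_size = 24;  // Match the default 24-bit depth of X11 windows.
    *alpha_size = 0;
  }
#endif
}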
On 2015/11/12 11:06:49, j.isorce wrote:
> [diff exchange quoted above]
> Much easier to read indeed, thx. I addressed the remarks and uploaded Patch Set 4.

LGTM (but still requires owner review)
The CQ bit was checked by j.isorce@samsung.com
The CQ bit was unchecked by j.isorce@samsung.com
The CQ bit was checked by j.isorce@samsung.com
CQ is trying da patch. Follow status at https://chromium-cq-status.appspot.com/patch-status/1429083002/60001 View timeline at https://chromium-cq-status.appspot.com/patch-timeline/1429083002/60001
The CQ bit was unchecked by commit-bot@chromium.org
Try jobs failed on following builders: chromium_presubmit on tryserver.chromium.linux (JOB_FAILED, http://build.chromium.org/p/tryserver.chromium.linux/builders/chromium_presub...)
On 2015/11/12 16:22:20, hendrikw wrote:
> LGTM (but still requires owner review)

Hi, could an owner please review this CL? Thx
content/browser/gpu lgtm
Can you get a review from a ui/gl/ owner first? I can then stamp the rest in ui
j.isorce@samsung.com changed reviewers: + kbr@chromium.org
kbr@chromium.org changed reviewers: + cwallez@chromium.org, piman@chromium.org
Please file a bug about this and link it to the others. This looks OK to me since it doesn't affect Chromium's default code path, but I'd like cwallez@ to review and approve this since he is working on the EGL code path on Linux.
On 2015/11/17 19:55:18, Ken Russell wrote:
> Please file a bug about this and link it to the others.
>
> This looks OK to me since it doesn't affect Chromium's default code path, but I'd like cwallez@ to review and approve this since he is working on the EGL code path on Linux.

LGTM as it isn't in the default code path for now.

I'm a bit worried that it could break down if the root window's depth is 32 bits, as might be the case on Unity, given that Ken's Chrome behaved correctly without enable-transparent-visuals when running on ANGLE.

If you feel like it, I have a WIP change that makes ChooseARGBVisual shared between gl_surface_egl and desktop_window_tree_host_x11; using it, you could query the depth from the visual directly. See https://codereview.chromium.org/1451943003 Otherwise I'll just do it when I get to landing this change.
https://codereview.chromium.org/1429083002/diff/60001/ui/gl/gl_surface_egl.cc File ui/gl/gl_surface_egl.cc (right): https://codereview.chromium.org/1429083002/diff/60001/ui/gl/gl_surface_egl.cc... ui/gl/gl_surface_egl.cc:286: switches::kEnableTransparentVisuals)) { I would prefer if we didn't rely on behavior of the browser side relative to this flag ("if that flag is set, then the browser must have created windows with 32 bit visuals otherwise it must be 24 bits"). I think we should solve the problem the same way we solved it for GLX, i.e. by using an unmapped window rather than a pbuffer, and getting a config that's compatible with it. In the long run, I think what we really need to do is pick a visual in the browser (early enough), pass that to the GPU process, and pick a config out of that.
On 2015/11/17 19:55:18, Ken Russell wrote:
> Please file a bug about this and link it to the others.

Done here: https://code.google.com/p/chromium/issues/detail?id=557389

On 2015/11/17 20:24:04, cwallez1 wrote:
> I'm a bit worried that it could break down if the root window's depth is 32 bits, as might be the case on Unity ...

Indeed. We dropped that case during the review since we considered it uncommon, and taking into account that the long-term solution is http://crbug.com/369209. But you are right; I realize now that it is too risky an assumption. Thx. Did Ken manually set DefaultDepth to 32 or change the default visual id in his xorg.conf?

> If you feel like it, I have a WIP change that makes ChooseARGBVisual shared between gl_surface_egl and desktop_window_tree_host_x11 ... See https://codereview.chromium.org/1451943003

Ah, great, I can use it in gl_surface_egl.cc instead of checking for switches::kEnableTransparentVisuals. But I think ChooseARGBVisual should return the depth instead of just a bool (return 32 / return windowAttribs.depth). Also, I think it should accept a NULL param, since I will not use the returned visual.

On 2015/11/17 20:53:52, piman (slow to review) wrote:
> In the long run, I think what we really need to do is pick a visual in the browser (early enough), pass that to the GPU process, and pick a config out of that.

I think you are perfectly right. That was actually my first attempt, but I could not make it work. Now with the future function "ChooseARGBVisual" from cwallez I think it is possible :), by calling it in BrowserMainLoop::InitializeToolkit just after gfx::GetXDisplay(). (BrowserMainRunnerImpl::Initialize calls InitializeToolkit before CreateStartupTasks, which starts the GPU process.)

What about making "ChooseARGBVisual" cache the result (in a local static, like gfx::GetXDisplay, and maybe using std::call_once)? The first time it is called, it would also append the resulting depth to the command line (switches::kWinDepth).

cwallez, maybe you can submit your patch, and then on top of it I will update my CL (1429083002) to contain:
- change ChooseARGBVisual to return the depth and accept a NULL visual param;
- cache the depth (+ std::call_once or DCHECK(always_called_from_main_thread)?);
- when caching, append switches::kWinDepth to the command line with the depth as value;
- an early call to ChooseARGBVisual in BrowserMainLoop::InitializeToolkit;
- rely on switches::kWinDepth in GLSurfaceEGL::InitializeOneOff to select EGL_BUFFER_SIZE (browser-side sketch below);
- move switches::kEnableTransparentVisuals from views_switches.cc to x11_switches — not needed anymore from a build point of view (since I won't use it in gl_surface_egl.cc), but I think it still makes sense since it is X11-only.

What do you think?
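The browser-side half of that plan, as a hedged sketch: the names match the CL discussion (ChooseVisualForWindow is the eventual name of ChooseARGBVisual, kWindowDepth the eventual name of kWinDepth), but the call site and plumbing shown here are assumptions for illustration, not the actual Chromium code:

#include "base/command_line.h"
#include "base/strings/string_number_conversions.h"
#include "ui/base/x/x11_util_internal.h"  // ui::ChooseVisualForWindow (this CL)
#include "ui/gfx/x/x11_switches.h"        // switches::kWindowDepth (this CL)

// Hypothetical call site: runs on the UI thread before the GPU process is
// launched, then forwards the chosen depth on the GPU command line so
// GLSurfaceEGL::InitializeOneOff can pick a matching EGLConfig.
void AppendWindowDepthSwitch(base::CommandLine* gpu_command_line) {
  int depth = 0;
  ui::ChooseVisualForWindow(nullptr, &depth);  // NULL visual: depth only.
  gpu_command_line->AppendSwitchASCII(switches::kWindowDepth,
                                      base::IntToString(depth));
}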
On 2015/11/17 20:53:52, piman (slow to review) wrote:
> In the long run, I think what we really need to do is pick a visual in the browser (early enough), pass that to the GPU process, and pick a config out of that.

With Ken we looked at how to do that, and there didn't seem to be any obvious solution; plus it adds a dependency that might slow down startup. Using the same function to choose the visual in both the browser process and the GPU process removes that dependency while keeping things in sync.

On 2015/11/17 23:36:58, j.isorce wrote:
> Did Ken manually set DefaultDepth to 32 or change the default visual id in his xorg.conf?

I don't think Ken changed his xorg.conf; he might correct me on this.

> But I think ChooseARGBVisual should return the depth instead of just a bool ... Also, I think it should accept a NULL param ...

Both changes sound good (and will actually simplify the implementation on my side a bit).

> What about making "ChooseARGBVisual" cache the result ... The first time it is called, it would also append the resulting depth to the command line (switches::kWinDepth).

I don't know Chrome well enough to comment on this; Antoine or Ken would know better.

> cwallez, maybe you can submit your patch, and then on top of it I will update my CL (1429083002) to contain: [...] What do you think?

It looks good to me, but it would be good to have the opinion of Chromium people too. I'd rather not submit my patch just yet, as it is a prototype to use an ANGLE extension that isn't even landed in ANGLE yet. Either I can change it to be a pure refactoring to enable this CL (and then my extension CL), or you can pick the parts you need from the WIP CL and integrate them into this one. Whichever you want.
I agree with Antoine's point that this CL adds a deep and subtle dependency between the X11 visual selection code and EGL config selection. j.isorce, please work with cwallez@ to figure out a more principled way of doing what you want.
On 2015/11/19 02:35:41, Ken Russell wrote:
> I agree with Antoine's point that this CL adds a deep and subtle dependency between the X11 visual selection code and EGL config selection. j.isorce, please work with cwallez@ to figure out a more principled way of doing what you want.

All right. So, unless I am mistaken, I will apply the plan I laid out in my previous comment, as it follows Antoine (piman)'s "long run" recommendations, and just pick the parts of cwallez1's refactoring I need first (as he suggested; I also think this will simplify the workflow). Thx for your help.
On 2015/11/19 18:35:24, j.isorce wrote:
> [plan confirmation quoted above]

Sounds good, thanks for taking the time to rework this!
Description was changed from the --enable-transparent-visuals based description to the final description shown at the top of this review: the fix now passes the main window depth from the browser to the GPU process via the new kWindowDepth switch, and BUG=557389 plus the full reviewer list were filled in.
On 2015/11/19 18:44:32, cwallez1 wrote: > Sounds good, thanks for taking the time to rework this! No pb, done. Please review "Patch Set 5". Thx.
https://codereview.chromium.org/1429083002/diff/80001/ui/base/x/x11_util.cc
File ui/base/x/x11_util.cc (right):
https://codereview.chromium.org/1429083002/diff/80001/ui/base/x/x11_util.cc#n...
ui/base/x/x11_util.cc:1463: std::call_once(init_flag, ChooseARGBVisualForWindowOnce, &s_visual, &s_depth);
std::call_once is a C++11 library feature that is not yet allowed by the style guide; see https://chromium-cpp.appspot.com/#library-whitelist

https://codereview.chromium.org/1429083002/diff/80001/ui/base/x/x11_util_inte...
File ui/base/x/x11_util_internal.h (right):
https://codereview.chromium.org/1429083002/diff/80001/ui/base/x/x11_util_inte...
ui/base/x/x11_util_internal.h:48: // NOTE: This function cache the results and can be called from any thread.
This is not true; it can only be called from the UI thread, because that's the only place where it's valid to call into the Display.

https://codereview.chromium.org/1429083002/diff/80001/ui/base/x/x11_util_inte...
ui/base/x/x11_util_internal.h:51: UI_BASE_EXPORT void ChooseARGBVisualForWindow(Visual** visual, int* depth);
How about ChooseVisualForWindow? It's not guaranteed to be ARGB.

https://codereview.chromium.org/1429083002/diff/80001/ui/gl/gl_surface_egl.cc
File ui/gl/gl_surface_egl.cc (right):
https://codereview.chromium.org/1429083002/diff/80001/ui/gl/gl_surface_egl.cc...
ui/gl/gl_surface_egl.cc:288: DCHECK(succeed);
This code may be called outside of full Chrome, e.g. in tests, so kWindowDepth may not be there.
Thx for the review. I addressed the remarks in Patch Set 6.

https://codereview.chromium.org/1429083002/diff/80001/ui/base/x/x11_util.cc
File ui/base/x/x11_util.cc (right):
ui/base/x/x11_util.cc:1463: std::call_once(init_flag, ChooseARGBVisualForWindowOnce, &s_visual, &s_depth);
On 2015/11/20 01:44:41, piman (slow to review) wrote:
> std::call_once is a C++11 library feature that is not yet allowed by the style guide ...
Acknowledged.

https://codereview.chromium.org/1429083002/diff/80001/ui/base/x/x11_util_inte...
File ui/base/x/x11_util_internal.h (right):
ui/base/x/x11_util_internal.h:48: // NOTE: This function cache the results and can be called from any thread.
On 2015/11/20 01:44:41, piman (slow to review) wrote:
> This is not true; it can only be called from the UI thread ...
Acknowledged. This will also make the function more consistent with the note at the top of the file: "NOTE: these functions ... UI thread".

ui/base/x/x11_util_internal.h:51: UI_BASE_EXPORT void ChooseARGBVisualForWindow(Visual** visual, int* depth);
On 2015/11/20 01:44:41, piman (slow to review) wrote:
> How about ChooseVisualForWindow? It's not guaranteed to be ARGB.
Makes sense.

https://codereview.chromium.org/1429083002/diff/80001/ui/gl/gl_surface_egl.cc
File ui/gl/gl_surface_egl.cc (right):
ui/gl/gl_surface_egl.cc:288: DCHECK(succeed);
On 2015/11/20 01:44:41, piman (slow to review) wrote:
> This code may be called outside of full Chrome, e.g. in tests, so kWindowDepth may not be there.
Ah, OK, thx. At minimum I'll add an if.
LGTM + 2 nits https://codereview.chromium.org/1429083002/diff/100001/ui/base/x/x11_util.cc File ui/base/x/x11_util.cc (right): https://codereview.chromium.org/1429083002/diff/100001/ui/base/x/x11_util.cc#... ui/base/x/x11_util.cc:1448: s_depth = info.depth; nit: break; https://codereview.chromium.org/1429083002/diff/100001/ui/base/x/x11_util_int... File ui/base/x/x11_util_internal.h (right): https://codereview.chromium.org/1429083002/diff/100001/ui/base/x/x11_util_int... ui/base/x/x11_util_internal.h:48: // Select a Visual with a preference for alpha support. The caller must compare nit: Selects
Patch Set 7 addresses the last remarks. Cheers.

https://codereview.chromium.org/1429083002/diff/100001/ui/base/x/x11_util.cc
File ui/base/x/x11_util.cc (right):
https://codereview.chromium.org/1429083002/diff/100001/ui/base/x/x11_util.cc#...
ui/base/x/x11_util.cc:1448: s_depth = info.depth;
On 2015/11/20 21:56:58, piman (slow to review) wrote:
> nit: break;
Ah right, I completely missed that one, thx.

https://codereview.chromium.org/1429083002/diff/100001/ui/base/x/x11_util_int...
File ui/base/x/x11_util_internal.h (right):
https://codereview.chromium.org/1429083002/diff/100001/ui/base/x/x11_util_int...
ui/base/x/x11_util_internal.h:48: // Select a Visual with a preference for alpha support. The caller must compare
On 2015/11/20 21:56:58, piman (slow to review) wrote:
> nit: Selects
Acknowledged.
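Pulling these review threads together, here is a plain-Xlib sketch of what the helper plausibly looks like at this point: cached in local statics instead of std::call_once (per piman), breaking on the first match (the "nit: break;"), with optional out-params. The want_transparent parameter stands in for the --enable-transparent-visuals check; the real implementation lives in ui/base/x/x11_util.cc and differs in details.

#include <X11/Xlib.h>
#include <X11/Xutil.h>

// Hedged sketch of ChooseVisualForWindow: UI thread only (the statics are
// not thread-safe), prefers a 32-bit TrueColor visual when transparency is
// requested, and falls back to the screen's default visual otherwise.
void ChooseVisualForWindow(Display* display, bool want_transparent,
                           Visual** visual, int* depth) {
  static Visual* s_visual = nullptr;
  static int s_depth = 0;
  if (!s_visual) {
    if (want_transparent) {
      XVisualInfo visual_template;
      visual_template.screen = DefaultScreen(display);
      int visuals_len = 0;
      XVisualInfo* visual_list = XGetVisualInfo(
          display, VisualScreenMask, &visual_template, &visuals_len);
      for (int i = 0; i < visuals_len; ++i) {
        const XVisualInfo& info = visual_list[i];
        if (info.depth == 32 && info.c_class == TrueColor) {
          s_visual = info.visual;
          s_depth = info.depth;
          break;  // The "nit: break;" from the review above.
        }
      }
      XFree(visual_list);
    }
    if (!s_visual) {
      s_visual = DefaultVisual(display, DefaultScreen(display));
      s_depth = DefaultDepth(display, DefaultScreen(display));
    }
  }
  // Both out-params are optional, matching the review discussion.
  if (visual) *visual = s_visual;
  if (depth) *depth = s_depth;
}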
The CQ bit was checked by j.isorce@samsung.com to run a CQ dry run
Dry run: CQ is trying da patch. Follow status at https://chromium-cq-status.appspot.com/patch-status/1429083002/120001 View timeline at https://chromium-cq-status.appspot.com/patch-timeline/1429083002/120001
The CQ bit was unchecked by commit-bot@chromium.org
Dry run: Try jobs failed on following builders: linux_chromium_chromeos_compile_dbg_ng on tryserver.chromium.linux (JOB_FAILED, http://build.chromium.org/p/tryserver.chromium.linux/builders/linux_chromium_...) linux_chromium_chromeos_rel_ng on tryserver.chromium.linux (JOB_FAILED, http://build.chromium.org/p/tryserver.chromium.linux/builders/linux_chromium_...)
On 2015/11/20 23:54:46, commit-bot: I haz the power wrote:
> Dry run: Try jobs failed on following builders:
> linux_chromium_chromeos_compile_dbg_ng on tryserver.chromium.linux (JOB_FAILED, http://build.chromium.org/p/tryserver.chromium.linux/builders/linux_chromium_...)
> linux_chromium_chromeos_rel_ng on tryserver.chromium.linux (JOB_FAILED, http://build.chromium.org/p/tryserver.chromium.linux/builders/linux_chromium_...)

I added !defined(OS_CHROMEOS) to content/gpu/gpu_main.cc in Patch Set 8.
The CQ bit was checked by j.isorce@samsung.com to run a CQ dry run
Dry run: CQ is trying da patch. Follow status at https://chromium-cq-status.appspot.com/patch-status/1429083002/140001 View timeline at https://chromium-cq-status.appspot.com/patch-timeline/1429083002/140001
LGTM Taking advantage of the CQ failure to add in a couple more nits. Thank you for the large rework of this patch! https://codereview.chromium.org/1429083002/diff/140001/ui/base/x/x11_util.cc File ui/base/x/x11_util.cc (right): https://codereview.chromium.org/1429083002/diff/140001/ui/base/x/x11_util.cc#... ui/base/x/x11_util.cc:1422: // TODO(cwallez) make the switch be a constant again Since you moved the flag in x11_switches, this TODO can be easily resolved. https://codereview.chromium.org/1429083002/diff/140001/ui/base/x/x11_util_int... File ui/base/x/x11_util_internal.h (right): https://codereview.chromium.org/1429083002/diff/140001/ui/base/x/x11_util_int... ui/base/x/x11_util_internal.h:48: // Selects a Visual with a preference for alpha support. The caller must compare nit: The preference for alpha support is only on compositing window managers.
The CQ bit was unchecked by commit-bot@chromium.org
Dry run: This issue passed the CQ dry run.
https://codereview.chromium.org/1429083002/diff/140001/ui/base/x/x11_util.cc File ui/base/x/x11_util.cc (right): https://codereview.chromium.org/1429083002/diff/140001/ui/base/x/x11_util.cc#... ui/base/x/x11_util.cc:1422: // TODO(cwallez) make the switch be a constant again On 2015/11/22 04:09:01, cwallez1 wrote: > Since you moved the flag in x11_switches, this TODO can be easily resolved. Ah right, thx. https://codereview.chromium.org/1429083002/diff/140001/ui/base/x/x11_util_int... File ui/base/x/x11_util_internal.h (right): https://codereview.chromium.org/1429083002/diff/140001/ui/base/x/x11_util_int... ui/base/x/x11_util_internal.h:48: // Selects a Visual with a preference for alpha support. The caller must compare On 2015/11/22 04:09:01, cwallez1 wrote: > nit: The preference for alpha support is only on compositing window managers. Acknowledged.
The CQ bit was checked by j.isorce@samsung.com
The patchset sent to the CQ was uploaded after l-g-t-m from jbauman@chromium.org, hendrikw@chromium.org, piman@chromium.org, cwallez@chromium.org Link to the patchset: https://codereview.chromium.org/1429083002/#ps160001 (title: " ")
CQ is trying da patch. Follow status at https://chromium-cq-status.appspot.com/patch-status/1429083002/160001 View timeline at https://chromium-cq-status.appspot.com/patch-timeline/1429083002/160001
The CQ bit was unchecked by commit-bot@chromium.org
Try jobs failed on following builders: chromium_presubmit on tryserver.chromium.linux (JOB_FAILED, http://build.chromium.org/p/tryserver.chromium.linux/builders/chromium_presub...)
On 2015/11/23 08:35:56, commit-bot: I haz the power wrote:
> Try jobs failed on following builders:
> chromium_presubmit on tryserver.chromium.linux (JOB_FAILED, http://build.chromium.org/p/tryserver.chromium.linux/builders/chromium_presub...)

I added !defined(OS_CHROMEOS) around ChooseVisualForWindow in Patch Set 10.
The CQ bit was checked by j.isorce@samsung.com
The patchset sent to the CQ was uploaded after l-g-t-m from jbauman@chromium.org, piman@chromium.org, cwallez@chromium.org, hendrikw@chromium.org Link to the patchset: https://codereview.chromium.org/1429083002/#ps180001 (title: " ")
CQ is trying da patch. Follow status at https://chromium-cq-status.appspot.com/patch-status/1429083002/180001 View timeline at https://chromium-cq-status.appspot.com/patch-timeline/1429083002/180001
The CQ bit was unchecked by commit-bot@chromium.org
Try jobs failed on following builders: chromium_presubmit on tryserver.chromium.linux (JOB_FAILED, http://build.chromium.org/p/tryserver.chromium.linux/builders/chromium_presub...)
** Presubmit ERRORS **
Missing LGTM from an OWNER for these files:
  ui/base/x/x11_util.cc
  ui/base/x/x11_util_internal.h
  ui/gfx/x/x11_switches.cc
  ui/gfx/x/x11_switches.h
  ui/views/views_switches.cc
  ui/views/views_switches.h
  ui/views/widget/desktop_aura/desktop_window_tree_host_x11.cc
  ui/views/widget/desktop_aura/desktop_window_tree_host_x11.h

On 2015/11/17 04:47:16, sadrul wrote:
> Can you get a review from a ui/gl/ owner first? I can then stamp the rest in ui

Hi Sadrul, it's your turn :) Thx
lgtm
The CQ bit was checked by j.isorce@samsung.com
CQ is trying da patch. Follow status at https://chromium-cq-status.appspot.com/patch-status/1429083002/180001 View timeline at https://chromium-cq-status.appspot.com/patch-timeline/1429083002/180001
Message was sent while issue was closed.
Committed patchset #10 (id:180001)
Message was sent while issue was closed.
Patchset 10 (id:??) landed as https://crrev.com/5ade848cf56941fa2a61cd1c0e2cb9d7349a9194 Cr-Commit-Position: refs/heads/master@{#361114} |