Created: 11 years, 2 months ago by eroman
Modified: 9 years, 6 months ago
Reviewers: Timur Iskhodzhanov
CC: chromium-reviews_googlegroups.com, not_the_right_glider, Nirnimesh, dank, stuartmorgan, pam+watch_chromium.org
Base URL: svn://chrome-svn/chrome/trunk/src/
Visibility: Public.
Description
Remove ThreadSanitizer suppression for 22272.
The race was specific to unit-test code (see r29826) and should be fixed now.
BUG=22272
Committed r30301.
Patch Set 1
Messages
Total messages: 8 (0 generated)
LGTM
Cancelling my LGTM, some race on LoadLog is still present!

Here is one of the reports (I use a slightly more aggressive ThreadSanitizer mode than the one on the buildbots):

INFO: T1 has been created by T0 at this point: {{{
   #0  clone /lib/libc-2.7.so
   #1  pthread_create@@GLIBC_2.2.5 /lib/libpthread-2.7.so
   #2  pthread_create@* /home/timurrrr/valgrind-patches/valgrind-10880/tsan/ts_valgrind_intercepts.c:556
   #3  (anonymous namespace)::CreateThread(unsigned long, bool, PlatformThread::Delegate*, unsigned long*) base/platform_thread_posix.cc:93
   #4  PlatformThread::CreateNonJoinable(unsigned long, PlatformThread::Delegate*) base/platform_thread_posix.cc:113
   #5  base::LinuxDynamicThreadPool::PostTask(Task*) base/worker_pool_linux.cc:142
   #6  (anonymous namespace)::WorkerPoolImpl::PostTask(tracked_objects::Location const&, Task*, bool) base/worker_pool_linux.cc:46
   #7  WorkerPool::PostTask(tracked_objects::Location const&, Task*, bool) base/worker_pool_linux.cc:91
   #8  net::HostResolverImpl::Job::Start() net/base/host_resolver_impl.cc:168
   #9  net::HostResolverImpl::Resolve(net::HostResolver::RequestInfo const&, net::AddressList*, CallbackRunner<Tuple1<int> >*, void**, net::LoadLog*) net/base/host_resolver_impl.cc:378
   #10 net::MockHostResolverBase::Resolve(net::HostResolver::RequestInfo const&, net::AddressList*, CallbackRunner<Tuple1<int> >*, void**, net::LoadLog*) net/base/mock_host_resolver.cc:54
   #11 net::SingleRequestHostResolver::Resolve(net::HostResolver::RequestInfo const&, net::AddressList*, CallbackRunner<Tuple1<int> >*, net::LoadLog*) net/base/host_resolver.cc:41
   #12 net::SocketStream::DoResolveHost() net/socket_stream/socket_stream.cc:372
   #13 net::SocketStream::DoLoop(int) net/socket_stream/socket_stream.cc:271
   #14 void DispatchToMethod<net::SocketStream, int (net::SocketStream::*)(int), net::Error>(net::SocketStream*, int (net::SocketStream::*)(int), Tuple1<net::Error> const&) base/tuple.h:422
   #15 RunnableMethod<net::SocketStream, int (net::SocketStream::*)(int), Tuple1<net::Error> >::Run() base/task.h:277
   #16 MessageLoop::RunTask(Task*) base/message_loop.cc:316
   #17 MessageLoop::DeferOrRunPendingTask(MessageLoop::PendingTask const&) base/message_loop.cc:324
   #18 MessageLoop::DoWork() base/message_loop.cc:443
   #19 base::MessagePumpLibevent::Run(base::MessagePump::Delegate*) base/message_pump_libevent.cc:228
}}}
INFO: T0 is program's main thread
WARNING: Possible data race during read of size 4 at 0x100D7DE0: {{{
  T1 (locks held: {}):
   #0  base::subtle::RefCountedBase::Release() base/ref_counted.cc:43
   #1  base::RefCounted<net::LoadLog>::Release() base/ref_counted.h:87
   #2  scoped_refptr<net::LoadLog>::~scoped_refptr() base/ref_counted.h:204
   #3  net::HostResolverImpl::Request::~Request() net/base/host_resolver_impl.cc:56
   #4  void STLDeleteContainerPointers<__gnu_cxx::__normal_iterator<net::HostResolverImpl::Request**, std::vector<net::HostResolverImpl::Request*, std::allocator<net::HostResolverImpl::Request*> > > >(__gnu_cxx::__normal_iterator<net::HostResolverImpl::Reques»
   #5  void STLDeleteElements<std::vector<net::HostResolverImpl::Request*, std::allocator<net::HostResolverImpl::Request*> > >(std::vector<net::HostResolverImpl::Request*, std::allocator<net::HostResolverImpl::Request*> >*) base/stl_util-inl.h:236
   #6  net::HostResolverImpl::Job::~Job() net/base/host_resolver_impl.cc:155
   #7  base::RefCountedThreadSafe<net::HostResolverImpl::Job>::Release() base/ref_counted.h:115
   #8  RunnableMethodTraits<net::HostResolverImpl::Job>::ReleaseCallee(net::HostResolverImpl::Job*) base/task.h:228
   #9  RunnableMethod<net::HostResolverImpl::Job, void (net::HostResolverImpl::Job::*)(), Tuple0>::ReleaseCallee() base/task.h:287
   #10 RunnableMethod<net::HostResolverImpl::Job, void (net::HostResolverImpl::Job::*)(), Tuple0>::~RunnableMethod() base/task.h:272
   #11 (anonymous namespace)::WorkerThread::ThreadMain() base/worker_pool_linux.cc:80
   #12 ThreadFunc(void*) base/platform_thread_posix.cc:26
   #13 ThreadSanitizerStartThread /home/timurrrr/valgrind-patches/valgrind-10880/tsan/ts_valgrind_intercepts.c:504
  Concurrent write(s) happened at (OR AFTER) these points:
  T0 (locks held: {}):
   #0  base::subtle::RefCountedBase::Release() base/ref_counted.cc:43
   #1  base::RefCounted<net::LoadLog>::Release() base/ref_counted.h:87
   #2  scoped_refptr<net::LoadLog>::~scoped_refptr() base/ref_counted.h:204
   #3  URLRequest::~URLRequest() net/url_request/url_request.cc:156
   #4  scoped_ptr<URLRequest>::reset(URLRequest*) base/scoped_ptr.h:81
   #5  net::ProxyScriptFetcherImpl::ResetCurRequestState() net/proxy/proxy_script_fetcher.cc:321
   #6  net::ProxyScriptFetcherImpl::FetchCompleted() net/proxy/proxy_script_fetcher.cc:315
   #7  net::ProxyScriptFetcherImpl::OnResponseCompleted(URLRequest*) net/proxy/proxy_script_fetcher.cc:288
   #8  net::ProxyScriptFetcherImpl::OnResponseStarted(URLRequest*) net/proxy/proxy_script_fetcher.cc:232
   #9  URLRequest::ResponseStarted() net/url_request/url_request.cc:470
  Location 0x100D7DE0 is 0 bytes inside a block starting at 0x100D7DE0 of size 48 allocated by T0 from heap:
   #0  malloc /home/timurrrr/valgrind-patches/valgrind-10880/tsan/ts_valgrind_intercepts.c:318
   #1  operator new(unsigned long) /usr/lib/libstdc++.so.6.0.9
   #2  URLRequest::URLRequest(GURL const&, URLRequest::Delegate*) net/url_request/url_request.cc:141
   #3  net::ProxyScriptFetcherImpl::Fetch(GURL const&, std::string*, CallbackRunner<Tuple1<int> >*) net/proxy/proxy_script_fetcher.cc:175
   #4  net::ProxyScriptFetcherTest_NoCache_Test::TestBody() net/proxy/proxy_script_fetcher_unittest.cc:206
   #5  testing::Test::Run() testing/gtest/src/gtest.cc:2069
}}}
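To make the warning above easier to read: T0 and T1 each drop a reference to the same net::LoadLog through a scoped_refptr, and the trace shows the counter is base::RefCounted<net::LoadLog>, the non-thread-safe flavor with a plain, non-atomic integer. Below is a minimal standalone sketch of that failure mode; the names and structure are illustrative, not Chromium's actual base/ref_counted.h:

{{{
#include <pthread.h>

// Illustrative only -- not Chromium's base/ref_counted.h. A non-atomic
// refcount like this is what the non-thread-safe base::RefCounted amounts
// to; per the trace, net::LoadLog used that flavor at the time.
struct RefCountedSketch {
  int ref_count_ = 1;
  void AddRef() { ++ref_count_; }
  // Non-atomic read-modify-write: two threads releasing concurrently can
  // lose a decrement (leak) or both observe zero (double delete).
  bool Release() { return --ref_count_ == 0; }
};

RefCountedSketch* g_log;

void* worker(void*) {
  if (g_log->Release())   // T1's release...
    delete g_log;
  return nullptr;
}

int main() {
  g_log = new RefCountedSketch;  // refcount == 1 (main's reference)
  g_log->AddRef();               // refcount == 2 (worker's reference)
  pthread_t t;
  pthread_create(&t, nullptr, worker, nullptr);
  if (g_log->Release())          // ...races with T0's release here
    delete g_log;
  pthread_join(t, nullptr);
  return 0;
}
}}}

Under a race detector, the two unsynchronized decrements of ref_count_ get flagged exactly like the Release()/Release() pair at the top of the report.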
Hmm, that looks like a bug in the socket stream unittest -- from that callstack they are not deleting the HostResolver from the right thread.

I will take a look at those new tests.

On Wed, Oct 21, 2009 at 3:58 AM, <timurrrr@chromium.org> wrote:
> Cancelling my LGTM, some race on LoadLog is still present!
>
> Here is one of the reports (I use a slightly more aggressive
> ThreadSanitizer mode than the one on the buildbots)
> [...]
>
> http://codereview.chromium.org/293042
Actually, scratch my comment -- dunno what this is. Looking...

On Wed, Oct 21, 2009 at 10:03 AM, Eric Roman <eroman@chromium.org> wrote:
> Hmm, that looks like a bug in the socket stream unittest -- from that
> callstack they are not deleting the HostResolver from the right
> thread.
>
> I will take a look at those new tests.
> [...]
>
> http://codereview.chromium.org/293042
I don't see this report anymore. Do you know whether it was fixed?
On Tue, Oct 27, 2009 at 8:30 AM, <timurrrr@chromium.org> wrote:
> I don't see this report anymore.
> Do you know whether it was fixed?

Yeah, so I believe I stopped the bug from reproducing when I committed:

http://codereview.chromium.org/320003

I still don't understand the exact circumstances of the race, other than that it had to do with the specific setup in this unit test. The unit test was using TestServer::SendQuit(), which I found does weird stuff (it spins up another IO thread, so we end up with multiple IO threads). I changed the test not to use that anymore.

Anyway, I can probably go ahead and commit the suppression removal.

As follow-up I would still like to better understand how this setup caused the race, since it doesn't look like the two threads were sharing state. The particular race is specific to the unit-test setup, which ends up using two IO threads (the network objects are all non-thread-safe).

Thanks for your patience.

> http://codereview.chromium.org/293042
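Keeping everything on one IO thread, as the fixed test now does, matches how these network objects are meant to be used, since they are deliberately single-threaded. For objects that genuinely must cross threads, the standard remedy is an atomic refcount in the spirit of base::RefCountedThreadSafe. A rough sketch of the distinction (illustrative only, not Chromium's actual implementation):

{{{
#include <atomic>

// Illustrative only. The thread-safe flavor replaces the plain int
// decrement with an atomic one, so the last two references can be
// dropped from different threads without racing.
struct RefCountedThreadSafeSketch {
  std::atomic<int> ref_count_{1};

  void AddRef() { ref_count_.fetch_add(1, std::memory_order_relaxed); }

  // Returns true when the caller dropped the final reference and should
  // delete the object. The acquire-release decrement makes every write
  // performed by other releasing threads visible to the deleter.
  bool Release() {
    return ref_count_.fetch_sub(1, std::memory_order_acq_rel) == 1;
  }
};
}}}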
I've taken a look at http://codereview.chromium.org/320003/diff/1/2

It looks like ThreadSanitizer didn't catch the synchronization using base::WaitForSingleProcess, which was called by WaitForFinish. I'll take a closer look; it may be a bug in ThreadSanitizer!

Go ahead and commit this changelist.

On 2009/10/27 19:14:33, eroman wrote:
> Yeah, so I believe I stopped the bug from reproducing when I committed:
>
> http://codereview.chromium.org/320003
> [...]
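If that hypothesis is right, the shape of the problem is a classic detector false positive: a blocking wait creates a real happens-before edge that the tool does not intercept. Here is a minimal sketch of that general pattern, using a pipe purely as a stand-in for the process wait (an analogy only; this is neither Chromium nor ThreadSanitizer code):

{{{
#include <pthread.h>
#include <unistd.h>

// The pipe read/write pair really does order (1) before (4), but a
// detector that only models pthread primitives cannot see the edge and
// may report a race on `shared` that can never actually happen.
int shared = 0;
int fds[2];

void* writer(void*) {
  shared = 42;              // (1) plain write
  char c = 'x';
  write(fds[1], &c, 1);     // (2) signal: the "release" side of the edge
  return nullptr;
}

int main() {
  pipe(fds);
  pthread_t t;
  pthread_create(&t, nullptr, writer, nullptr);
  char c;
  read(fds[0], &c, 1);      // (3) blocks until (2): the "acquire" side
  int v = shared;           // (4) safe: ordered after (1) via the pipe
  pthread_join(t, nullptr);
  return v == 42 ? 0 : 1;
}
}}}

A detector that treats (1) and (4) as unordered flags them, even though the blocking read guarantees the order.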