Chromium Code Reviews

Issue 2751263002: [Bindings] Cache handles for dictionary keys on V8PerIsolateData.

Created:
1 week, 1 day ago by jbroman
Modified:
3 hours, 52 minutes ago
Reviewers:
haraken, jochen
CC:
blink-reviews, blink-reviews-bindings_chromium.org, chromium-reviews, esprehn, Toon Verwaest
Target Ref:
refs/heads/master
Project:
chromium
Visibility:
Public.

Description

[Bindings] Cache handles for dictionary keys on V8PerIsolateData.

This reduces APK size by about 20 KiB (because the work to make the dictionary keys exists in one place, out-of-line) and saves work looking up keys in the internalized string table. On the DOMMatrix multiply benchmark, this reduces the overhead compared to the polyfill from 2.4x to 1.6x (on my workstation).

BUG=685748
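A minimal sketch of the caching pattern the description outlines, using stand-in types (std::string in place of v8::Eternal<v8::Name>, and a hypothetical FindOrCreateKeySet helper; the CL's actual names and signatures may differ):

```cpp
#include <cstddef>
#include <string>
#include <utility>
#include <vector>

// Simplified model: each generated dictionary registers one block of key
// names under a per-dictionary static tag; the per-isolate data keeps the
// created handles alive so repeated conversions reuse them instead of
// re-interning the string literals every time.
class PerIsolateData {
 public:
  // Looks up the handle block registered under `key`; creates
  // ("internalizes") it on first use only.
  const std::vector<std::string>* FindOrCreateKeySet(const void* key,
                                                     const char* const* names,
                                                     size_t count) {
    for (const auto& entry : cache_) {
      if (entry.first == key)
        return &entry.second;
    }
    cache_.emplace_back(key, std::vector<std::string>(names, names + count));
    return &cache_.back().second;
  }

 private:
  std::vector<std::pair<const void*, std::vector<std::string>>> cache_;
};
```

Generated dictionary conversion code would then fetch the block once and index into it by position, rather than looking each key up in the internalized string table.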

Patch Set 1 #

Patch Set 2 : win: give each eternal keys helper a unique name #

Patch Set 3 : Merge branch 'master' into dict-eternal-keys #

Patch Set 4 : const #

Stats (+247 lines, -109 lines)
M third_party/WebKit/Source/bindings/core/v8/V8PerIsolateData.h View 1 2 3 2 chunks +10 lines, -0 lines 0 comments Download
M third_party/WebKit/Source/bindings/core/v8/V8PerIsolateData.cpp View 1 2 3 2 chunks +22 lines, -0 lines 0 comments Download
M third_party/WebKit/Source/bindings/templates/dictionary_v8.cpp.tmpl View 1 2 3 3 chunks +22 lines, -4 lines 0 comments Download
M third_party/WebKit/Source/bindings/tests/results/core/V8TestDictionary.cpp View 1 2 3 40 chunks +141 lines, -92 lines 0 comments Download
M third_party/WebKit/Source/bindings/tests/results/core/V8TestDictionaryDerived.cpp View 1 2 3 6 chunks +24 lines, -9 lines 0 comments Download
M third_party/WebKit/Source/bindings/tests/results/core/V8TestInterfaceEventInit.cpp View 1 2 3 3 chunks +14 lines, -2 lines 0 comments Download
M third_party/WebKit/Source/bindings/tests/results/core/V8TestPermissiveDictionary.cpp View 1 2 3 3 chunks +14 lines, -2 lines 0 comments Download

Messages

Total messages: 32 (17 generated)
jbroman
5 days, 22 hours ago (2017-03-18 23:22:43 UTC) #15
haraken
A couple of questions: - In terms of binary size, maybe we should consider generalizing ...
5 days, 15 hours ago (2017-03-19 06:09:55 UTC) #18
jbroman
[cc verwaest, who also suggested caching the internalized strings on the bug] On 2017/03/19 at ...
4 days, 4 hours ago (2017-03-20 17:45:08 UTC) #19
haraken
On 2017/03/20 17:45:08, jbroman wrote: > [cc verwaest, who also suggested caching the internalized strings ...
3 days, 19 hours ago (2017-03-21 02:51:13 UTC) #20
jbroman
On 2017/03/21 at 02:51:13, haraken wrote: > On 2017/03/20 17:45:08, jbroman wrote: > > [cc ...
2 days, 7 hours ago (2017-03-22 14:53:21 UTC) #21
haraken
+jochen The question is if we should cache typical V8 strings in the binding side ...
2 days, 6 hours ago (2017-03-22 15:05:26 UTC) #23
jochen
Maybe v8 and Blink can use the same hash for strings, then we can easily ...
2 days, 6 hours ago (2017-03-22 15:09:08 UTC) #24
jbroman
On 2017/03/22 at 15:09:08, jochen wrote: > Maybe v8 and Blink can use the same ...
2 days, 6 hours ago (2017-03-22 15:21:15 UTC) #25
jochen
The idea of sharing the hash is that if you already have an AtomicString or ...
1 day, 6 hours ago (2017-03-23 15:50:37 UTC) #26
jbroman
On 2017/03/23 at 15:50:37, jochen wrote: > The idea of sharing the hash is that ...
1 day, 5 hours ago (2017-03-23 16:00:47 UTC) #27
jochen
On 2017/03/23 at 16:00:47, jbroman wrote: > On 2017/03/23 at 15:50:37, jochen wrote: > > ...
12 hours, 18 minutes ago (2017-03-24 09:36:13 UTC) #28
haraken
On 2017/03/24 09:36:13, jochen wrote: > On 2017/03/23 at 16:00:47, jbroman wrote: > > On ...
9 hours, 50 minutes ago (2017-03-24 12:03:46 UTC) #29
jbroman
On 2017/03/24 at 12:03:46, haraken wrote: > On 2017/03/24 09:36:13, jochen wrote: > > On ...
8 hours, 5 minutes ago (2017-03-24 13:48:34 UTC) #30
haraken
On 2017/03/24 13:48:34, jbroman wrote: > On 2017/03/24 at 12:03:46, haraken wrote: > > On ...
5 hours, 5 minutes ago (2017-03-24 16:48:31 UTC) #31
jbroman
3 hours, 52 minutes ago (2017-03-24 18:01:44 UTC) #32
On 2017/03/24 at 16:48:31, haraken wrote:
> On 2017/03/24 13:48:34, jbroman wrote:
> > On 2017/03/24 at 12:03:46, haraken wrote:
> > > On 2017/03/24 09:36:13, jochen wrote:
> > > > On 2017/03/23 at 16:00:47, jbroman wrote:
> > > > > On 2017/03/23 at 15:50:37, jochen wrote:
> > > > > > The idea of sharing the hash is that if you already have an AtomicString or an internalized v8::String, at least you don't need to recompute the hash before finding the other one
> > > > > > 
> > > > > > but we should probably just measure what is faster
> > > > > 
> > > > > Understood, but in this case we start with a string literal. We could maybe get static WTF::Strings here if we wanted to, but otherwise that seems like an orthogonal question. Am I misunderstanding?
> > > > 
> > > > yeah
> > > > 
> > > > Overall, I'm deferring to Kentaro here
> > > 
> > > Hmm.
> > > 
> > > v8AtomicString is intended to be a cache to speed up V8 string look-ups. If we have to introduce another cache for V8 strings returned by v8AtomicString, I guess we're doing something wrong.
> > > 
> > > What's the slow part of v8AtomicString?
> > 
> > In my profiles, it was about (as I recall -- I don't have the numbers in front of me) 40% hash computation, 40% "is this entry a match", 20% misc overhead. I was able to trim some off that in micro-optimization, but the proportions remained similar. If we're willing to push devirtualization further into StringTable than I did, I might be able to make it somewhat faster, but probably not free.
> > 
> > > If the slow part is the hash calculation, can we add a use_this_hash parameter to v8AtomicString and pass in the address of the string literal?
> > 
> > Won't that make us do the hash lookup twice? We need the string to actually be an internalized V8 string, because otherwise we'll internalize it anyway when we try to do property lookup (LookupIterator calls Factory::InternalizeName to do so). That implies it has to be in the string table keyed by the correct V8 string hash.
> 
> Can we solve the problem by sharing the hashing function between v8 and Blink?

Partially. As I indicated above, hashing is only part of the cost. And it's complicated by V8's hashing function being specialized in a few key ways:
1. It's seeded (which prevents us from computing it at compile time unless we know the seed -- but avoiding that is usually the point of dynamically seeded hash functions).
2. It treats "array index-like" and "other" strings differently. I think this is an optimization due to the way that object properties work in V8 (where both integers and strings can be used as object keys, so array-index-like keys are a special kind of string that we'd like to be able to construct and hash quickly).
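To make those two properties concrete, here is an illustrative seeded hash with an array-index fast path. This is an invented example, not V8's actual algorithm:

```cpp
#include <cstdint>
#include <string>

// Illustrative only -- NOT V8's real hash. It demonstrates the two points
// above: (1) a runtime seed blocks compile-time precomputation by an
// embedder, and (2) short all-digit ("array index-like") keys take a cheap
// fast path so integer and string property lookups can agree.
uint32_t HashName(const std::string& s, uint32_t seed) {
  // Fast path: an array index-like key hashes to its numeric value.
  if (!s.empty() && s.size() <= 9 && (s.size() == 1 || s[0] != '0')) {
    uint32_t index = 0;
    bool all_digits = true;
    for (char c : s) {
      if (c < '0' || c > '9') {
        all_digits = false;
        break;
      }
      index = index * 10 + static_cast<uint32_t>(c - '0');
    }
    if (all_digits)
      return index;
  }
  // Seeded FNV-1a-style path: the seed is chosen at runtime, so this value
  // cannot be baked into the binary at compile time.
  uint32_t h = 2166136261u ^ seed;
  for (char c : s) {
    h ^= static_cast<uint8_t>(c);
    h *= 16777619u;
  }
  return h;
}
```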

And in this particular case, we don't have either a V8-side or Blink-side hash
value handy (because we're starting with a static string, not a
WTF::AtomicString).

Ideally (at least from a performance standpoint) we would just index into an array. This CL goes part-way, in that it does one hash lookup on the Blink side per dictionary, and then array-indexes from there.

> Basically -- I just want to understand what should happen in the ideal state.
If the ideal state is too far from the current state, I'm okay with creating a
cache on the binding side.
> 
> I'm sorry that I'm a bit stubborn here.

No worries; I understand that you want to get to the right solution here.

> I want to be careful about this change because:
> 
> - We have made similar mistakes in the past. We introduced a cache for small integers and then removed it by speeding up v8::Integer::New. We introduced WrapperBoilerplateMap but now want to remove it by speeding up wrapper creation on the V8 side.

Fair enough. I don't know the history behind the v8::Integer cache, but that one
does seem somewhat simpler than this (because of V8's SMI representation being
particularly simple).

> - In general, V8 bindings are slower than JSC bindings. One of the reasons is that V8 and Blink are implemented as separate modules with a clear API boundary. JSC and WebKit can interact with each other in a more intrusive manner. I think we should consider a way to get V8 and Blink closer to each other and make the interaction faster (without losing the good API boundary).

From my perspective, this is rather similar to how V8 does the same thing
internally. The analogous thing in V8 is the isolate's internalized string
roots:
https://cs.chromium.org/chromium/src/v8/src/factory.h?type=cs&l=700

That is, each isolate contains an array of "strong roots": everlasting
strongly-owned handles to objects that live as long as the isolate. Among these
are internalized strings that V8 uses often, like "arguments", "length", and
"byteOffset". These could be created lazily, but as it happens they are eagerly initialized in v8::internal::Heap::CreateInitialObjects.

So inside V8 code, I can write things like:

// compiles down to read at a constant offset from the isolate pointer
Handle<String> key = isolate->factory()->length_string();

Blink doesn't have strong roots directly in the v8::internal::Heap object, but
eternal handles are the same thing (except that they are allocated dynamically),
and kept alive in the same way (Heap::IterateStrongRoots).

From a "making them more similar" perspective, the most similar thing we could do is have a fixed array of eternal dictionary keys held directly by V8PerIsolateData, and initialize them when the isolate boots (which avoids having to check for initialization on use, at the cost of doing more allocations at startup).
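That eager alternative might look roughly like this (hypothetical names; the enum and key strings are made up for illustration, and std::string stands in for an eternal V8 handle):

```cpp
#include <array>
#include <cstddef>
#include <string>

// Hypothetical sketch of the "initialize at isolate boot" alternative:
// every key is created up front, so the hot path is a constant-offset
// array read with no "is it initialized yet?" check -- analogous to V8's
// own strong-root accessors like isolate->factory()->length_string().
enum class DictKey : size_t { kMember1, kMember2, kCount };

class EternalDictionaryKeys {
 public:
  // Run once at isolate startup (the cost is paid as allocations up front).
  EternalDictionaryKeys()
      : keys_{{std::string("member1"), std::string("member2")}} {}

  // Hot path: a plain array index.
  const std::string& Get(DictKey key) const {
    return keys_[static_cast<size_t>(key)];
  }

 private:
  std::array<std::string, static_cast<size_t>(DictKey::kCount)> keys_;
};
```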

> > We could have a separate V8-side string table keyed by Blink-provided (or other embedder-provided) keys/hash values, but that seems functionally equivalent to a Blink-side cache, and I'm not sure I understand why it would be preferable to a Blink-side cache.
