Chromium Code Reviews

Side by Side Diff: src/core/SkScan_AAAPath.cpp

Issue 2393643002: Resubmit issue 2221103002 to fix the iOS build by declaring the flag in (Closed)
Patch Set: Created 4 years, 2 months ago
1 /*
2 * Copyright 2016 The Android Open Source Project
3 *
4 * Use of this source code is governed by a BSD-style license that can be
5 * found in the LICENSE file.
6 */
7
8 #include "SkAntiRun.h"
9 #include "SkBlitter.h"
10 #include "SkEdge.h"
11 #include "SkAnalyticEdge.h"
12 #include "SkEdgeBuilder.h"
13 #include "SkGeometry.h"
14 #include "SkPath.h"
15 #include "SkQuadClipper.h"
16 #include "SkRasterClip.h"
17 #include "SkRegion.h"
18 #include "SkScan.h"
19 #include "SkScanPriv.h"
20 #include "SkTemplates.h"
21 #include "SkTSort.h"
22 #include "SkUtils.h"
23
24 ///////////////////////////////////////////////////////////////////////////////
25
26 /*
27
28 The following is a high-level overview of our analytic anti-aliasing
29 algorithm. We consider a path as a collection of line segments, as
30 quadratic/cubic curves are converted to small line segments. Without loss of
31 generality, let's assume that the draw region is [0, W] x [0, H].
32
33 Our algorithm is based on horizontal scan lines (y = c_i) as the previous
34 sampling-based algorithm did. However, our algorithm uses non-equal-spaced
35 scan lines, while the previous method always uses equal-spaced scan lines,
36 such as (y = 1/2 + 0, 1/2 + 1, 1/2 + 2, ...) in the previous non-AA algorithm,
37 and (y = 1/8 + 1/4, 1/8 + 2/4, 1/8 + 3/4, ...) in the previous
38 16-supersampling AA algorithm.
39
40 Our algorithm contains scan lines y = c_i for c_i that is either:
41
42 1. an integer in [0, H]
43
44 2. the y value of a line segment endpoint
45
46 3. the y value of an intersection of two line segments
47
48 For two consecutive scan lines y = c_i, y = c_{i+1}, we analytically compute
49 the coverage of this horizontal strip of our path on each pixel. This can be
50 done very efficiently because the strip of our path now only consists of
51 trapezoids whose top and bottom edges are y = c_i, y = c_{i+1} (this includes
52 rectangles and triangles as special cases).
53
54 We now describe how the coverage of a single pixel is computed against such a
55 trapezoid. That coverage is essentially the intersection area of a rectangle
56 (e.g., [0, 1] x [c_i, c_{i+1}]) and our trapezoid. However, that intersection
57 could be complicated, as shown in the example region A below:
58
59 +-----------\----+
60 | \ C|
61 | \ |
62 \ \ |
63 |\ A \|
64 | \ \
65 | \ |
66 | B \ |
67 +----\-----------+
68
69 However, we don't have to compute the area of A directly. Instead, we can
70 compute the excluded areas, B and C, quite easily, because they're
71 just triangles. In fact, we can prove that an excluded region (take B as an
72 example) is either itself a simple trapezoid (including rectangles, triangles,
73 and empty regions), or its opposite (the opposite of B is A + C) is a simple
74 trapezoid. In any case, we can compute its area efficiently.
75
76 In summary, our algorithm has a higher quality because it generates ground-
77 truth coverages analytically. It is also faster because it has far fewer
78 unnecessary horizontal scan lines. For example, given a triangle path, the
79 number of scan lines in our algorithm is only about 3 + H while the
80 16-supersampling algorithm has about 4H scan lines.
81
82 */
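// A quick sketch of the excluded-area idea above, in plain doubles. This is a
// hypothetical helper for illustration only (not part of this CL): it assumes
// a strip of height 1 and edges that cross the pixel's top and bottom rows, so
// each excluded region is a plain trapezoid. The fixed-point code below also
// handles the triangle and clamped corner cases.
static double pixel_coverage_sketch(double px,              // pixel's left x
                                    double l0, double l1,   // left edge x at strip top/bottom
                                    double r0, double r1) { // right edge x at strip top/bottom
    auto clamp01 = [](double v) { return v < 0 ? 0 : v > 1 ? 1 : v; };
    // Trapezoid excluded left of the left edge: parallel sides (l0 - px), (l1 - px), height 1.
    double leftExcluded = (clamp01(l0 - px) + clamp01(l1 - px)) / 2;
    // Trapezoid excluded right of the right edge, measured from the pixel's right side.
    double riteExcluded = (clamp01(px + 1 - r0) + clamp01(px + 1 - r1)) / 2;
    double coverage = 1.0 - leftExcluded - riteExcluded;
    return coverage < 0 ? 0 : coverage;
}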
83
84 ///////////////////////////////////////////////////////////////////////////////
85
86 inline void addAlpha(SkAlpha& alpha, SkAlpha delta) {
87 SkASSERT(alpha + (int)delta <= 0xFF);
88 alpha += delta;
89 }
90
91 class AdditiveBlitter : public SkBlitter {
92 public:
93 virtual ~AdditiveBlitter() {}
94
95 virtual SkBlitter* getRealBlitter(bool forceRealBlitter = false) = 0;
96
97     virtual void blitAntiH(int x, int y, const SkAlpha antialias[], int len) = 0;
98 virtual void blitAntiH(int x, int y, const SkAlpha alpha) = 0;
99 virtual void blitAntiH(int x, int y, int width, const SkAlpha alpha) = 0;
100
101     void blitAntiH(int x, int y, const SkAlpha antialias[], const int16_t runs[]) override {
102 SkDEBUGFAIL("Please call real blitter's blitAntiH instead.");
103 }
104
105 void blitV(int x, int y, int height, SkAlpha alpha) override {
106 SkDEBUGFAIL("Please call real blitter's blitV instead.");
107 }
108
109 void blitH(int x, int y, int width) override {
110 SkDEBUGFAIL("Please call real blitter's blitH instead.");
111 }
112
113 void blitRect(int x, int y, int width, int height) override {
114 SkDEBUGFAIL("Please call real blitter's blitRect instead.");
115 }
116
117 void blitAntiRect(int x, int y, int width, int height,
118 SkAlpha leftAlpha, SkAlpha rightAlpha) override {
119 SkDEBUGFAIL("Please call real blitter's blitAntiRect instead.");
120 }
121
122 virtual int getWidth() = 0;
123 };
124
125 // We need this mask blitter because it significantly accelerates small path filling.
126 class MaskAdditiveBlitter : public AdditiveBlitter {
127 public:
128     MaskAdditiveBlitter(SkBlitter* realBlitter, const SkIRect& ir, const SkRegion& clip,
129 bool isInverse);
130 ~MaskAdditiveBlitter() {
131 fRealBlitter->blitMask(fMask, fClipRect);
132 }
133
134 // Most of the time, we still consider this mask blitter as the real blitter
135 // so we can accelerate blitRect and others. But sometimes we want to return
136 // the absolute real blitter (e.g., when we fall back to the old code path).
137 SkBlitter* getRealBlitter(bool forceRealBlitter) override {
138 return forceRealBlitter ? fRealBlitter : this;
139 }
140
141     // Virtual function is slow. So don't use this. Directly add alpha to the mask instead.
142 void blitAntiH(int x, int y, const SkAlpha antialias[], int len) override;
143
144     // The following methods are used to blit rectangles during aaa_walk_convex_edges.
145     // Since there aren't many rectangles, we can tolerate the slow speed of virtual functions.
146 void blitAntiH(int x, int y, const SkAlpha alpha) override;
147 void blitAntiH(int x, int y, int width, const SkAlpha alpha) override;
148 void blitV(int x, int y, int height, SkAlpha alpha) override;
149 void blitRect(int x, int y, int width, int height) override;
150 void blitAntiRect(int x, int y, int width, int height,
151 SkAlpha leftAlpha, SkAlpha rightAlpha) override;
152
153 int getWidth() override { return fClipRect.width(); }
154
155 static bool canHandleRect(const SkIRect& bounds) {
156 int width = bounds.width();
157 int64_t rb = SkAlign4(width);
158 // use 64bits to detect overflow
159 int64_t storage = rb * bounds.height();
160
161 return (width <= MaskAdditiveBlitter::kMAX_WIDTH) &&
162 (storage <= MaskAdditiveBlitter::kMAX_STORAGE);
163 }
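    // A worked check of the limits above (illustrative): a 32x32 bound just fits,
    // since 32 <= kMAX_WIDTH and SkAlign4(32) * 32 == 1024 <= kMAX_STORAGE, while
    // a 33-pixel-wide or a 32x33 bound falls back to the run-based blitter.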
164
165     // Return a pointer where pointer[x] corresponds to the alpha of (x, y)
166 inline uint8_t* getRow(int y) {
167 if (y != fY) {
168 fY = y;
169             fRow = fMask.fImage + (y - fMask.fBounds.fTop) * fMask.fRowBytes - fMask.fBounds.fLeft;
170 }
171 return fRow;
172 }
173
174 private:
175     // so we don't try to do very wide things, where the RLE blitter would be faster
176 static const int kMAX_WIDTH = 32;
177 static const int kMAX_STORAGE = 1024;
178
179 SkBlitter* fRealBlitter;
180 SkMask fMask;
181 SkIRect fClipRect;
182 // we add 2 because we can write 1 extra byte at either end due to precision error
183 uint32_t fStorage[(kMAX_STORAGE >> 2) + 2];
184
185 uint8_t* fRow;
186 int fY;
187 };
188
189 MaskAdditiveBlitter::MaskAdditiveBlitter(SkBlitter* realBlitter, const SkIRect& ir, const SkRegion& clip,
190 bool isInverse) {
191 SkASSERT(canHandleRect(ir));
192 SkASSERT(!isInverse);
193
194 fRealBlitter = realBlitter;
195
196 fMask.fImage = (uint8_t*)fStorage + 1; // There's 1 extra byte at either end of fStorage
197 fMask.fBounds = ir;
198 fMask.fRowBytes = ir.width();
199 fMask.fFormat = SkMask::kA8_Format;
200
201 fY = ir.fTop - 1;
202 fRow = nullptr;
203
204 fClipRect = ir;
205 if (!fClipRect.intersect(clip.getBounds())) {
206 SkASSERT(0);
207 fClipRect.setEmpty();
208 }
209
210 memset(fStorage, 0, fMask.fBounds.height() * fMask.fRowBytes + 2);
211 }
212
213 void MaskAdditiveBlitter::blitAntiH(int x, int y, const SkAlpha antialias[], int len) {
214 SkFAIL("Don't use this; directly add alphas to the mask.");
215 }
216
217 void MaskAdditiveBlitter::blitAntiH(int x, int y, const SkAlpha alpha) {
218 SkASSERT(x >= fMask.fBounds.fLeft -1);
219 addAlpha(this->getRow(y)[x], alpha);
220 }
221
222 void MaskAdditiveBlitter::blitAntiH(int x, int y, int width, const SkAlpha alpha) {
223 SkASSERT(x >= fMask.fBounds.fLeft -1);
224 uint8_t* row = this->getRow(y);
225 for (int i=0; i<width; i++) {
226 addAlpha(row[x + i], alpha);
227 }
228 }
229
230 void MaskAdditiveBlitter::blitV(int x, int y, int height, SkAlpha alpha) {
231 if (alpha == 0) {
232 return;
233 }
234 SkASSERT(x >= fMask.fBounds.fLeft -1);
235 // This must be called as if this is a real blitter.
236 // So we directly set alpha rather than adding it.
237 uint8_t* row = this->getRow(y);
238 for (int i=0; i<height; i++) {
239 row[x] = alpha;
240 row += fMask.fRowBytes;
241 }
242 }
243
244 void MaskAdditiveBlitter::blitRect(int x, int y, int width, int height) {
245 SkASSERT(x >= fMask.fBounds.fLeft -1);
246 // This must be called as if this is a real blitter.
247 // So we directly set alpha rather than adding it.
248 uint8_t* row = this->getRow(y);
249 for (int i=0; i<height; i++) {
250 memset(row + x, 0xFF, width);
251 row += fMask.fRowBytes;
252 }
253 }
254
255 void MaskAdditiveBlitter::blitAntiRect(int x, int y, int width, int height,
256 SkAlpha leftAlpha, SkAlpha rightAlpha) {
257 blitV(x, y, height, leftAlpha);
258 blitV(x + 1 + width, y, height, rightAlpha);
259 blitRect(x + 1, y, width, height);
260 }
261
262 class RunBasedAdditiveBlitter : public AdditiveBlitter {
263 public:
264     RunBasedAdditiveBlitter(SkBlitter* realBlitter, const SkIRect& ir, const SkRegion& clip,
265 bool isInverse);
266 ~RunBasedAdditiveBlitter();
267
268 SkBlitter* getRealBlitter(bool forceRealBlitter) override;
269
270 void blitAntiH(int x, int y, const SkAlpha antialias[], int len) override;
271 void blitAntiH(int x, int y, const SkAlpha alpha) override;
272 void blitAntiH(int x, int y, int width, const SkAlpha alpha) override;
273
274 int getWidth() override;
275
276 private:
277 SkBlitter* fRealBlitter;
278
279 /// Current y coordinate
280 int fCurrY;
281 /// Widest row of region to be blitted
282 int fWidth;
283 /// Leftmost x coordinate in any row
284 int fLeft;
285 /// Initial y coordinate (top of bounds).
286 int fTop;
287
288 // The next three variables are used to track a circular buffer that
289 // contains the values used in SkAlphaRuns. These variables should only
290 // ever be updated in advanceRuns(), and fRuns should always point to
291 // a valid SkAlphaRuns...
292 int fRunsToBuffer;
293 void* fRunsBuffer;
294 int fCurrentRun;
295 SkAlphaRuns fRuns;
296
297 int fOffsetX;
298
299 inline bool check(int x, int width) {
300 #ifdef SK_DEBUG
301 if (x < 0 || x + width > fWidth) {
302 SkDebugf("Ignore x = %d, width = %d\n", x, width);
303 }
304 #endif
305 return (x >= 0 && x + width <= fWidth);
306 }
307
308 // extra one to store the zero at the end
309     inline int getRunsSz() const { return (fWidth + 1 + (fWidth + 2)/2) * sizeof(int16_t); }
310
311     // This function updates the fRuns variable to point to the next buffer space
312     // with adequate storage for a SkAlphaRuns. It mostly just advances fCurrentRun
313 // and resets fRuns to point to an empty scanline.
314 inline void advanceRuns() {
315 const size_t kRunsSz = this->getRunsSz();
316 fCurrentRun = (fCurrentRun + 1) % fRunsToBuffer;
317 fRuns.fRuns = reinterpret_cast<int16_t*>(
318 reinterpret_cast<uint8_t*>(fRunsBuffer) + fCurrentRun * kRunsSz);
319 fRuns.fAlpha = reinterpret_cast<SkAlpha*>(fRuns.fRuns + fWidth + 1);
320 fRuns.reset(fWidth);
321 }
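    // Illustrative layout of one slot of fRunsBuffer, as advanceRuns() sets it
    // up (fRunsBuffer holds fRunsToBuffer such slots of getRunsSz() bytes each):
    //
    //   fRuns.fRuns: int16_t[fWidth + 1] | fRuns.fAlpha: SkAlpha[fWidth + 1]
    //
    // The (fWidth + 2) / 2 term in getRunsSz() is the alpha array rounded up to
    // int16_t granularity.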
322
323 // Blitting 0xFF and 0 is much faster so we snap alphas close to them
324 inline SkAlpha snapAlpha(SkAlpha alpha) {
325 return alpha > 247 ? 0xFF : alpha < 8 ? 0 : alpha;
326 }
327
328 inline void flush() {
329 if (fCurrY >= fTop) {
330 SkASSERT(fCurrentRun < fRunsToBuffer);
331 for (int x = 0; fRuns.fRuns[x]; x += fRuns.fRuns[x]) {
332 // It seems that blitting 255 or 0 is much faster than blitting 254 or 1
333 fRuns.fAlpha[x] = snapAlpha(fRuns.fAlpha[x]);
334 }
335 if (!fRuns.empty()) {
336 // SkDEBUGCODE(fRuns.dump();)
337                 fRealBlitter->blitAntiH(fLeft, fCurrY, fRuns.fAlpha, fRuns.fRuns);
338 this->advanceRuns();
339 fOffsetX = 0;
340 }
341 fCurrY = fTop - 1;
342 }
343 }
344
345 inline void checkY(int y) {
346 if (y != fCurrY) {
347 this->flush();
348 fCurrY = y;
349 }
350 }
351 };
352
353 RunBasedAdditiveBlitter::RunBasedAdditiveBlitter(SkBlitter* realBlitter, const SkIRect& ir, const SkRegion& clip,
354 bool isInverse) {
355 fRealBlitter = realBlitter;
356
357 SkIRect sectBounds;
358 if (isInverse) {
359 // We use the clip bounds instead of the ir, since we may be asked to
360         // draw outside of the rect when we're an inverse filltype
361 sectBounds = clip.getBounds();
362 } else {
363 if (!sectBounds.intersect(ir, clip.getBounds())) {
364 sectBounds.setEmpty();
365 }
366 }
367
368 const int left = sectBounds.left();
369 const int right = sectBounds.right();
370
371 fLeft = left;
372 fWidth = right - left;
373 fTop = sectBounds.top();
374 fCurrY = fTop - 1;
375
376 fRunsToBuffer = realBlitter->requestRowsPreserved();
377     fRunsBuffer = realBlitter->allocBlitMemory(fRunsToBuffer * this->getRunsSz());
378 fCurrentRun = -1;
379
380 this->advanceRuns();
381
382 fOffsetX = 0;
383 }
384
385 RunBasedAdditiveBlitter::~RunBasedAdditiveBlitter() {
386 this->flush();
387 }
388
389 SkBlitter* RunBasedAdditiveBlitter::getRealBlitter(bool forceRealBlitter) {
390 return fRealBlitter;
391 }
392
393 void RunBasedAdditiveBlitter::blitAntiH(int x, int y, const SkAlpha antialias[], int len) {
394 checkY(y);
395 x -= fLeft;
396
397 if (x < 0) {
398 len += x;
399 antialias -= x;
400 x = 0;
401 }
402 len = SkTMin(len, fWidth - x);
403 SkASSERT(check(x, len));
404
405 if (x < fOffsetX) {
406 fOffsetX = 0;
407 }
408
409 fOffsetX = fRuns.add(x, 0, len, 0, 0, fOffsetX); // Break the run
410 for (int i = 0; i < len; i += fRuns.fRuns[x + i]) {
411 for (int j = 1; j < fRuns.fRuns[x + i]; j++) {
412 fRuns.fRuns[x + i + j] = 1;
413 fRuns.fAlpha[x + i + j] = fRuns.fAlpha[x + i];
414 }
415 fRuns.fRuns[x + i] = 1;
416 }
417 for (int i=0; i<len; i++) {
418 addAlpha(fRuns.fAlpha[x + i], antialias[i]);
419 }
420 }
421 void RunBasedAdditiveBlitter::blitAntiH(int x, int y, const SkAlpha alpha) {
422 checkY(y);
423 x -= fLeft;
424
425 if (x < fOffsetX) {
426 fOffsetX = 0;
427 }
428
429 if (this->check(x, 1)) {
430 fOffsetX = fRuns.add(x, 0, 1, 0, alpha, fOffsetX);
431 }
432 }
433
434 void RunBasedAdditiveBlitter::blitAntiH(int x, int y, int width, const SkAlpha alpha) {
435 checkY(y);
436 x -= fLeft;
437
438 if (x < fOffsetX) {
439 fOffsetX = 0;
440 }
441
442 if (this->check(x, width)) {
443 fOffsetX = fRuns.add(x, 0, width, 0, alpha, fOffsetX);
444 }
445 }
446
447 int RunBasedAdditiveBlitter::getWidth() { return fWidth; }
448
449 ///////////////////////////////////////////////////////////////////////////////
450
451 // Return the alpha of a trapezoid whose height is 1
452 static inline SkAlpha trapezoidToAlpha(SkFixed l1, SkFixed l2) {
453 SkASSERT(l1 >= 0 && l2 >= 0);
454 return ((l1 + l2) >> 9);
455 }
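// Worked example of the shift above: with l1 = 0.25 (0x4000) and l2 = 0.75
// (0xC000) in 16.16 fixed point, the covered area is (0.25 + 0.75) / 2 = 0.5,
// and (0x4000 + 0xC000) >> 9 == 128, i.e. half coverage: >> 17 averages the
// two lengths and << 8 rescales the result to an 8-bit alpha.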
456
457 // The alpha of right-triangle (a, a*b), in 16 bits
458 static inline SkFixed partialTriangleToAlpha16(SkFixed a, SkFixed b) {
459 SkASSERT(a <= SK_Fixed1);
460 // SkFixedMul_lowprec(SkFixedMul_lowprec(a, a), b) >> 1
461 // return ((((a >> 8) * (a >> 8)) >> 8) * (b >> 8)) >> 1;
462 return (a >> 11) * (a >> 11) * (b >> 11);
463 }
464
465 // The alpha of right-triangle (a, a*b)
466 static inline SkAlpha partialTriangleToAlpha(SkFixed a, SkFixed b) {
467 return partialTriangleToAlpha16(a, b) >> 8;
468 }
469
470 static inline SkAlpha getPartialAlpha(SkAlpha alpha, SkFixed partialHeight) {
471 return (alpha * partialHeight) >> 16;
472 }
473
474 static inline SkAlpha getPartialAlpha(SkAlpha alpha, SkAlpha fullAlpha) {
475 return ((uint16_t)alpha * fullAlpha) >> 8;
476 }
477
478 // For SkFixed that's close to SK_Fixed1, we can't convert it to alpha by just shifting right.
479 // For example, when f = SK_Fixed1, right shifting 8 will get 256, but we need 255.
480 // This is rarely the problem so we'll only use this for blitting rectangles.
481 static inline SkAlpha f2a(SkFixed f) {
482 SkASSERT(f <= SK_Fixed1);
483 return getPartialAlpha(0xFF, f);
484 }
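// For example, f2a(SK_Fixed1) == (0xFF * 0x10000) >> 16 == 255, whereas the
// naive SK_Fixed1 >> 8 == 256 would overflow an 8-bit alpha.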
485
486 // Suppose that line (l1, y)-(r1, y+1) intersects with (l2, y)-(r2, y+1),
487 // approximate (very coarsely) the x coordinate of the intersection.
488 static inline SkFixed approximateIntersection(SkFixed l1, SkFixed r1, SkFixed l2, SkFixed r2) {
489 if (l1 > r1) { SkTSwap(l1, r1); }
490 if (l2 > r2) { SkTSwap(l2, r2); }
491 return (SkTMax(l1, l2) + SkTMin(r1, r2)) >> 1;
492 }
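// For example (illustrative numbers), segments spanning x in [1.0, 2.0] and
// [1.5, 1.8] give (SkTMax(1.0, 1.5) + SkTMin(2.0, 1.8)) / 2 = 1.65 as the
// approximate crossing point.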
493
494 // Here we always send in l < SK_Fixed1, and the first alpha we want to compute is alphas[0]
495 static inline void computeAlphaAboveLine(SkAlpha* alphas, SkFixed l, SkFixed r,
496 SkFixed dY, SkAlpha fullAlpha) {
497 SkASSERT(l <= r);
498 SkASSERT(l >> 16 == 0);
499 int R = SkFixedCeilToInt(r);
500 if (R == 0) {
501 return;
502 } else if (R == 1) {
503 alphas[0] = getPartialAlpha(((R << 17) - l - r) >> 9, fullAlpha);
504 } else {
505         SkFixed first = SK_Fixed1 - l; // horizontal edge length of the left-most triangle
506         SkFixed last = r - ((R - 1) << 16); // horizontal edge length of the right-most triangle
507 SkFixed firstH = SkFixedMul_lowprec(first, dY); // vertical edge of the left-most triangle
508 alphas[0] = SkFixedMul_lowprec(first, firstH) >> 9; // triangle alpha
509 SkFixed alpha16 = firstH + (dY >> 1); // rectangle plus triangle
510 for (int i = 1; i < R - 1; i++) {
511 alphas[i] = alpha16 >> 8;
512 alpha16 += dY;
513 }
514 alphas[R - 1] = fullAlpha - partialTriangleToAlpha(last, dY);
515 }
516 }
517
518 // Here we always send in l < SK_Fixed1, and the first alpha we want to compute is alphas[0]
519 static inline void computeAlphaBelowLine(SkAlpha* alphas, SkFixed l, SkFixed r, SkFixed dY, SkAlpha fullAlpha) {
520 SkASSERT(l <= r);
521 SkASSERT(l >> 16 == 0);
522 int R = SkFixedCeilToInt(r);
523 if (R == 0) {
524 return;
525 } else if (R == 1) {
526 alphas[0] = getPartialAlpha(trapezoidToAlpha(l, r), fullAlpha);
527 } else {
528         SkFixed first = SK_Fixed1 - l; // horizontal edge length of the left-most triangle
529         SkFixed last = r - ((R - 1) << 16); // horizontal edge length of the right-most triangle
530         SkFixed lastH = SkFixedMul_lowprec(last, dY); // vertical edge of the right-most triangle
531 alphas[R-1] = SkFixedMul_lowprec(last, lastH) >> 9; // triangle alpha
532 SkFixed alpha16 = lastH + (dY >> 1); // rectangle plus triangle
533 for (int i = R - 2; i > 0; i--) {
534 alphas[i] = alpha16 >> 8;
535 alpha16 += dY;
536 }
537 alphas[0] = fullAlpha - partialTriangleToAlpha(first, dY);
538 }
539 }
540
541 // Note that if fullAlpha != 0xFF, we'll multiply alpha by fullAlpha
542 static inline void blit_single_alpha(AdditiveBlitter* blitter, int y, int x,
543         SkAlpha alpha, SkAlpha fullAlpha, SkAlpha* maskRow,
544 bool isUsingMask) {
545 if (isUsingMask) {
546 if (fullAlpha == 0xFF) {
547 maskRow[x] = alpha;
548 } else {
549 addAlpha(maskRow[x], getPartialAlpha(alpha, fullAlpha));
550 }
551 } else {
552 if (fullAlpha == 0xFF) {
553 blitter->getRealBlitter()->blitV(x, y, 1, alpha);
554 } else {
555 blitter->blitAntiH(x, y, getPartialAlpha(alpha, fullAlpha));
556 }
557 }
558 }
559
560 static inline void blit_two_alphas(AdditiveBlitter* blitter, int y, int x,
561 SkAlpha a1, SkAlpha a2, SkAlpha fullAlpha, SkAlpha* maskRow,
562 bool isUsingMask) {
563 if (isUsingMask) {
564 addAlpha(maskRow[x], a1);
565 addAlpha(maskRow[x + 1], a2);
566 } else {
567 if (fullAlpha == 0xFF) {
568 blitter->getRealBlitter()->blitV(x, y, 1, a1);
569 blitter->getRealBlitter()->blitV(x + 1, y, 1, a2);
570 } else {
571 blitter->blitAntiH(x, y, a1);
572 blitter->blitAntiH(x + 1, y, a2);
573 }
574 }
575 }
576
577 // It's important that this is inline. Otherwise it'll be much slower.
578 static SK_ALWAYS_INLINE void blit_full_alpha(AdditiveBlitter* blitter, int y, int x, int len,
579         SkAlpha fullAlpha, SkAlpha* maskRow, bool isUsingMask) {
580 if (isUsingMask) {
581 for (int i=0; i<len; i++) {
582 addAlpha(maskRow[x + i], fullAlpha);
583 }
584 } else {
585 if (fullAlpha == 0xFF) {
586 blitter->getRealBlitter()->blitH(x, y, len);
587 } else {
588 blitter->blitAntiH(x, y, len, fullAlpha);
589 }
590 }
591 }
592
593 static void blit_aaa_trapezoid_row(AdditiveBlitter* blitter, int y,
594         SkFixed ul, SkFixed ur, SkFixed ll, SkFixed lr,
595 SkFixed lDY, SkFixed rDY, SkAlpha fullAlpha, SkAlpha* maskRow,
596 bool isUsingMask) {
597 int L = SkFixedFloorToInt(ul), R = SkFixedCeilToInt(lr);
598 int len = R - L;
599
600 if (len == 1) {
601 SkAlpha alpha = trapezoidToAlpha(ur - ul, lr - ll);
602         blit_single_alpha(blitter, y, L, alpha, fullAlpha, maskRow, isUsingMask);
603 return;
604 }
605
606     // SkDebugf("y = %d, len = %d, ul = %f, ur = %f, ll = %f, lr = %f\n", y, len,
607     //          SkFixedToFloat(ul), SkFixedToFloat(ur), SkFixedToFloat(ll), SkFixedToFloat(lr));
608
609 const int kQuickLen = 31;
610 // This is faster than SkAutoSMalloc<1024>
611 char quickMemory[(sizeof(SkAlpha) * 2 + sizeof(int16_t)) * (kQuickLen + 1)];
612 SkAlpha* alphas;
613
614 if (len <= kQuickLen) {
615 alphas = (SkAlpha*)quickMemory;
616 } else {
617         alphas = new SkAlpha[(len + 1) * (sizeof(SkAlpha) * 2 + sizeof(int16_t))];
618 }
619
620 SkAlpha* tempAlphas = alphas + len + 1;
621 int16_t* runs = (int16_t*)(alphas + (len + 1) * 2);
622
623 for (int i = 0; i < len; i++) {
624 runs[i] = 1;
625 alphas[i] = fullAlpha;
626 }
627 runs[len] = 0;
628
629 int uL = SkFixedFloorToInt(ul);
630 int lL = SkFixedCeilToInt(ll);
631     if (uL + 2 == lL) { // We only need to compute two triangles, accelerate this special case
632 SkFixed first = (uL << 16) + SK_Fixed1 - ul;
633 SkFixed second = ll - ul - first;
634 SkAlpha a1 = fullAlpha - partialTriangleToAlpha(first, lDY);
635 SkAlpha a2 = partialTriangleToAlpha(second, lDY);
636 alphas[0] = alphas[0] > a1 ? alphas[0] - a1 : 0;
637 alphas[1] = alphas[1] > a2 ? alphas[1] - a2 : 0;
638 } else {
639 computeAlphaBelowLine(tempAlphas + uL - L, ul - (uL << 16), ll - (uL << 16),
640 lDY, fullAlpha);
641 for (int i = uL; i < lL; i++) {
642 if (alphas[i - L] > tempAlphas[i - L]) {
643 alphas[i - L] -= tempAlphas[i - L];
644 } else {
645 alphas[i - L] = 0;
646 }
647 }
648 }
649
650 int uR = SkFixedFloorToInt(ur);
651 int lR = SkFixedCeilToInt(lr);
652     if (uR + 2 == lR) { // We only need to compute two triangles, accelerate this special case
653 SkFixed first = (uR << 16) + SK_Fixed1 - ur;
654 SkFixed second = lr - ur - first;
655 SkAlpha a1 = partialTriangleToAlpha(first, rDY);
656 SkAlpha a2 = fullAlpha - partialTriangleToAlpha(second, rDY);
657 alphas[len-2] = alphas[len-2] > a1 ? alphas[len-2] - a1 : 0;
658 alphas[len-1] = alphas[len-1] > a2 ? alphas[len-1] - a2 : 0;
659 } else {
660 computeAlphaAboveLine(tempAlphas + uR - L, ur - (uR << 16), lr - (uR << 16),
661 rDY, fullAlpha);
662 for (int i = uR; i < lR; i++) {
663 if (alphas[i - L] > tempAlphas[i - L]) {
664 alphas[i - L] -= tempAlphas[i - L];
665 } else {
666 alphas[i - L] = 0;
667 }
668 }
669 }
670
671 if (isUsingMask) {
672 for (int i=0; i<len; i++) {
673 addAlpha(maskRow[L + i], alphas[i]);
674 }
675 } else {
676         if (fullAlpha == 0xFF) { // Real blitter is faster than RunBasedAdditiveBlitter
677 blitter->getRealBlitter()->blitAntiH(L, y, alphas, runs);
678 } else {
679 blitter->blitAntiH(L, y, alphas, len);
680 }
681 }
682
683 if (len > kQuickLen) {
684 delete [] alphas;
685 }
686 }
687
688 static inline void blit_trapezoid_row(AdditiveBlitter* blitter, int y,
689 SkFixed ul, SkFixed ur, SkFixed ll, SkFixed lr,
690 SkFixed lDY, SkFixed rDY, SkAlpha fullAlpha,
691 SkAlpha* maskRow, bool isUsingMask) {
692     SkASSERT(lDY >= 0 && rDY >= 0); // We should only send in the absolute value
693
694 if (ul > ur) {
695 #ifdef SK_DEBUG
696 SkDebugf("ul = %f > ur = %f!\n", SkFixedToFloat(ul), SkFixedToFloat(ur)) ;
697 #endif
698 return;
699 }
700
701     // Edge crosses. Approximate it. This should only happen due to precision limit,
702 // so the approximation could be very coarse.
703 if (ll > lr) {
704 #ifdef SK_DEBUG
705 SkDebugf("approximate intersection: %d %f %f\n", y,
706 SkFixedToFloat(ll), SkFixedToFloat(lr));
707 #endif
708 ll = lr = approximateIntersection(ul, ll, ur, lr);
709 }
710
711 if (ul == ur && ll == lr) {
712         return; // empty trapezoid
713 }
714
715 // We're going to use the left line ul-ll and the rite line ur-lr
716 // to exclude the area that's not covered by the path.
717 // Swapping (ul, ll) or (ur, lr) won't affect that exclusion
718 // so we'll do that for simplicity.
719 if (ul > ll) { SkTSwap(ul, ll); }
720 if (ur > lr) { SkTSwap(ur, lr); }
721
722 SkFixed joinLeft = SkFixedCeilToFixed(ll);
723 SkFixed joinRite = SkFixedFloorToFixed(ur);
724 if (joinLeft <= joinRite) { // There's a rect from joinLeft to joinRite that we can blit
725 if (joinLeft < joinRite) {
726 blit_full_alpha(blitter, y, joinLeft >> 16, (joinRite - joinLeft) >> 16, fullAlpha,
727 maskRow, isUsingMask);
728 }
729 if (ul < joinLeft) {
730 int len = SkFixedCeilToInt(joinLeft - ul);
731 if (len == 1) {
732 SkAlpha alpha = trapezoidToAlpha(joinLeft - ul, joinLeft - ll);
733                 blit_single_alpha(blitter, y, ul >> 16, alpha, fullAlpha, maskRow, isUsingMask);
734 } else if (len == 2) {
735 SkFixed first = joinLeft - SK_Fixed1 - ul;
736 SkFixed second = ll - ul - first;
737 SkAlpha a1 = partialTriangleToAlpha(first, lDY);
738 SkAlpha a2 = fullAlpha - partialTriangleToAlpha(second, lDY);
739                 blit_two_alphas(blitter, y, ul >> 16, a1, a2, fullAlpha, maskRow, isUsingMask);
740 } else {
741                 blit_aaa_trapezoid_row(blitter, y, ul, joinLeft, ll, joinLeft, lDY, SK_MaxS32,
742 fullAlpha, maskRow, isUsingMask);
743 }
744 }
745 if (lr > joinRite) {
746 int len = SkFixedCeilToInt(lr - joinRite);
747 if (len == 1) {
748 SkAlpha alpha = trapezoidToAlpha(ur - joinRite, lr - joinRite);
749 blit_single_alpha(blitter, y, joinRite >> 16, alpha, fullAlpha, maskRow,
750 isUsingMask);
751 } else if (len == 2) {
752 SkFixed first = joinRite + SK_Fixed1 - ur;
753 SkFixed second = lr - ur - first;
754 SkAlpha a1 = fullAlpha - partialTriangleToAlpha(first, rDY);
755 SkAlpha a2 = partialTriangleToAlpha(second, rDY);
756                 blit_two_alphas(blitter, y, joinRite >> 16, a1, a2, fullAlpha, maskRow,
757 isUsingMask);
758 } else {
759                 blit_aaa_trapezoid_row(blitter, y, joinRite, ur, joinRite, lr, SK_MaxS32, rDY,
760 fullAlpha, maskRow, isUsingMask);
761 }
762 }
763 } else {
764 blit_aaa_trapezoid_row(blitter, y, ul, ur, ll, lr, lDY, rDY, fullAlpha, maskRow,
765 isUsingMask);
766 }
767 }
768
769 ///////////////////////////////////////////////////////////////////////////////
770
771 static bool operator<(const SkAnalyticEdge& a, const SkAnalyticEdge& b) {
772 int valuea = a.fUpperY;
773 int valueb = b.fUpperY;
774
775 if (valuea == valueb) {
776 valuea = a.fX;
777 valueb = b.fX;
778 }
779
780 if (valuea == valueb) {
781 valuea = a.fDX;
782 valueb = b.fDX;
783 }
784
785 return valuea < valueb;
786 }
787
788 static SkAnalyticEdge* sort_edges(SkAnalyticEdge* list[], int count, SkAnalyticEdge** last) {
789 SkTQSort(list, list + count - 1);
790
791 // now make the edges linked in sorted order
792 for (int i = 1; i < count; i++) {
793 list[i - 1]->fNext = list[i];
794 list[i]->fPrev = list[i - 1];
795 }
796
797 *last = list[count - 1];
798 return list[0];
799 }
800
801 #ifdef SK_DEBUG
802 static void validate_sort(const SkAnalyticEdge* edge) {
803 SkFixed y = SkIntToFixed(-32768);
804
805 while (edge->fUpperY != SK_MaxS32) {
806 edge->validate();
807 SkASSERT(y <= edge->fUpperY);
808
809 y = edge->fUpperY;
810 edge = (SkAnalyticEdge*)edge->fNext;
811 }
812 }
813 #else
814 #define validate_sort(edge)
815 #endif
816
817 // return true if we're done with this edge
818 static bool update_edge(SkAnalyticEdge* edge, SkFixed last_y) {
819 if (last_y >= edge->fLowerY) {
820 if (edge->fCurveCount < 0) {
821 if (static_cast<SkAnalyticCubicEdge*>(edge)->updateCubic()) {
822 return false;
823 }
824 } else if (edge->fCurveCount > 0) {
825 if (static_cast<SkAnalyticQuadraticEdge*>(edge)->updateQuadratic()) {
826 return false;
827 }
828 }
829 return true;
830 }
831 SkASSERT(false);
832 return false;
833 }
834
835 // For an edge, we consider it smooth if the Dx doesn't change much, and Dy is large enough
836 // For curves that are updating, the Dx is not changing much if fQDx/fCDx and fQDy/fCDy are
837 // relatively large compared to fQDDx/fCDDx and fQDDy/fCDDy
838 static inline bool isSmoothEnough(SkAnalyticEdge* thisEdge, SkAnalyticEdge* nextEdge, int stop_y) {
839 if (thisEdge->fCurveCount < 0) {
840         const SkCubicEdge& cEdge = static_cast<SkAnalyticCubicEdge*>(thisEdge)->fCEdge;
841 int ddshift = cEdge.fCurveShift;
842 return SkAbs32(cEdge.fCDx) >> 1 >= SkAbs32(cEdge.fCDDx) >> ddshift &&
843 SkAbs32(cEdge.fCDy) >> 1 >= SkAbs32(cEdge.fCDDy) >> ddshift &&
844 // current Dy is (fCDy - (fCDDy >> ddshift)) >> dshift
845 (cEdge.fCDy - (cEdge.fCDDy >> ddshift)) >> cEdge.fCubicDShift >= SK_Fixed1;
846 } else if (thisEdge->fCurveCount > 0) {
847         const SkQuadraticEdge& qEdge = static_cast<SkAnalyticQuadraticEdge*>(thisEdge)->fQEdge;
848 return SkAbs32(qEdge.fQDx) >> 1 >= SkAbs32(qEdge.fQDDx) &&
849 SkAbs32(qEdge.fQDy) >> 1 >= SkAbs32(qEdge.fQDDy) &&
850 // current Dy is (fQDy - fQDDy) >> shift
851 (qEdge.fQDy - qEdge.fQDDy) >> qEdge.fCurveShift
852 >= SK_Fixed1;
853 }
854 return SkAbs32(nextEdge->fDX - thisEdge->fDX) <= SK_Fixed1 && // DDx should be small
855 nextEdge->fLowerY - nextEdge->fUpperY >= SK_Fixed1; // Dy should be large
856 }
857
858 // Check if the leftE and riteE are changing smoothly in terms of fDX.
859 // If yes, we can later skip the fractional y and directly jump to integer y.
860 static inline bool isSmoothEnough(SkAnalyticEdge* leftE, SkAnalyticEdge* riteE,
861 SkAnalyticEdge* currE, int stop_y) {
862 if (currE->fUpperY >= stop_y << 16) {
863 return false; // We're at the end so we won't skip anything
864 }
865 if (leftE->fLowerY + SK_Fixed1 < riteE->fLowerY) {
866 return isSmoothEnough(leftE, currE, stop_y); // Only leftE is changing
867 } else if (leftE->fLowerY > riteE->fLowerY + SK_Fixed1) {
868 return isSmoothEnough(riteE, currE, stop_y); // Only riteE is changing
869 }
870
871 // Now both edges are changing, find the second next edge
872 SkAnalyticEdge* nextCurrE = currE->fNext;
873 if (nextCurrE->fUpperY >= stop_y << 16) { // Check if we're at the end
874 return false;
875 }
876 if (*nextCurrE < *currE) {
877 SkTSwap(currE, nextCurrE);
878 }
879     return isSmoothEnough(leftE, currE, stop_y) && isSmoothEnough(riteE, nextCurrE, stop_y);
880 }
881
882 static inline void aaa_walk_convex_edges(SkAnalyticEdge* prevHead, AdditiveBlitter* blitter,
883         int start_y, int stop_y, SkFixed leftBound, SkFixed riteBound,
884 bool isUsingMask) {
885 validate_sort((SkAnalyticEdge*)prevHead->fNext);
886
887 SkAnalyticEdge* leftE = (SkAnalyticEdge*) prevHead->fNext;
888 SkAnalyticEdge* riteE = (SkAnalyticEdge*) leftE->fNext;
889 SkAnalyticEdge* currE = (SkAnalyticEdge*) riteE->fNext;
890
891 SkFixed y = SkTMax(leftE->fUpperY, riteE->fUpperY);
892
893 #ifdef SK_DEBUG
894 int frac_y_cnt = 0;
895 int total_y_cnt = 0;
896 #endif
897
898 for (;;) {
899         // We have to check fLowerY first because some edges might be alone (e.g., there's only
900         // a left edge but no right edge in a given y scan line) due to precision limit.
901         while (leftE->fLowerY <= y) { // Due to smooth jump, we may pass multiple short edges
902 if (update_edge(leftE, y)) {
903 if (SkFixedFloorToInt(currE->fUpperY) >= stop_y) {
904 goto END_WALK;
905 }
906 leftE = currE;
907 currE = (SkAnalyticEdge*)currE->fNext;
908 }
909 }
910         while (riteE->fLowerY <= y) { // Due to smooth jump, we may pass multiple short edges
911 if (update_edge(riteE, y)) {
912 if (SkFixedFloorToInt(currE->fUpperY) >= stop_y) {
913 goto END_WALK;
914 }
915 riteE = currE;
916 currE = (SkAnalyticEdge*)currE->fNext;
917 }
918 }
919
920 SkASSERT(leftE);
921 SkASSERT(riteE);
922
923 // check our bottom clip
924 if (SkFixedFloorToInt(y) >= stop_y) {
925 break;
926 }
927
928 SkASSERT(SkFixedFloorToInt(leftE->fUpperY) <= stop_y);
929 SkASSERT(SkFixedFloorToInt(riteE->fUpperY) <= stop_y);
930
931 leftE->goY(y);
932 riteE->goY(y);
933
934 if (leftE->fX > riteE->fX || (leftE->fX == riteE->fX &&
935 leftE->fDX > riteE->fDX)) {
936 SkTSwap(leftE, riteE);
937 }
938
939 SkFixed local_bot_fixed = SkMin32(leftE->fLowerY, riteE->fLowerY);
940 // Skip the fractional y if edges are changing smoothly
941 if (isSmoothEnough(leftE, riteE, currE, stop_y)) {
942 local_bot_fixed = SkFixedCeilToFixed(local_bot_fixed);
943 }
944 local_bot_fixed = SkMin32(local_bot_fixed, SkIntToFixed(stop_y + 1));
945
946 SkFixed left = leftE->fX;
947 SkFixed dLeft = leftE->fDX;
948 SkFixed rite = riteE->fX;
949 SkFixed dRite = riteE->fDX;
950 if (0 == (dLeft | dRite)) {
951 int fullLeft = SkFixedCeilToInt(left);
952 int fullRite = SkFixedFloorToInt(rite);
953 SkFixed partialLeft = SkIntToFixed(fullLeft) - left;
954 SkFixed partialRite = rite - SkIntToFixed(fullRite);
955 int fullTop = SkFixedCeilToInt(y);
956 int fullBot = SkFixedFloorToInt(local_bot_fixed);
957 SkFixed partialTop = SkIntToFixed(fullTop) - y;
958 SkFixed partialBot = local_bot_fixed - SkIntToFixed(fullBot);
959             if (fullTop > fullBot) { // The rectangle is within one pixel's height...
960 partialTop -= (SK_Fixed1 - partialBot);
961 partialBot = 0;
962 }
963
964 if (fullRite >= fullLeft) {
965 // Blit all full-height rows from fullTop to fullBot
966 if (fullBot > fullTop) {
967                     blitter->getRealBlitter()->blitAntiRect(fullLeft - 1, fullTop,
968                             fullRite - fullLeft, fullBot - fullTop,
969                             f2a(partialLeft), f2a(partialRite));
970 }
971
972 if (partialTop > 0) { // blit first partial row
973 if (partialLeft > 0) {
974 blitter->blitAntiH(fullLeft - 1, fullTop - 1,
975                                 f2a(SkFixedMul_lowprec(partialTop, partialLeft)));
976 }
977 if (partialRite > 0) {
978 blitter->blitAntiH(fullRite, fullTop - 1,
979                                 f2a(SkFixedMul_lowprec(partialTop, partialRite)));
980 }
981                     blitter->blitAntiH(fullLeft, fullTop - 1, fullRite - fullLeft,
982 f2a(partialTop));
983 }
984
985 if (partialBot > 0) { // blit last partial row
986 if (partialLeft > 0) {
987 blitter->blitAntiH(fullLeft - 1, fullBot,
988                                 f2a(SkFixedMul_lowprec(partialBot, partialLeft)));
989 }
990 if (partialRite > 0) {
991 blitter->blitAntiH(fullRite, fullBot,
992                                 f2a(SkFixedMul_lowprec(partialBot, partialRite)));
993 }
994                     blitter->blitAntiH(fullLeft, fullBot, fullRite - fullLeft, f2a(partialBot));
995 }
996 } else { // left and rite are within the same pixel
997 if (partialTop > 0) {
998 blitter->getRealBlitter()->blitV(fullLeft - 1, fullTop - 1, 1,
999 f2a(SkFixedMul_lowprec(partialTop, rite - left)));
1000 }
1001 if (partialBot > 0) {
1002 blitter->getRealBlitter()->blitV(fullLeft - 1, fullBot, 1,
1003 f2a(SkFixedMul_lowprec(partialBot, rite - left)));
1004 }
1005 if (fullBot >= fullTop) {
1006                     blitter->getRealBlitter()->blitV(fullLeft - 1, fullTop, fullBot - fullTop,
1007 f2a(rite - left));
1008 }
1009 }
1010
1011 y = local_bot_fixed;
1012 } else {
1013             // The following constants are used to snap X
1014 // We snap X mainly for speedup (no tiny triangle) and
1015 // avoiding edge cases caused by precision errors
1016 const SkFixed kSnapDigit = SK_Fixed1 >> 4;
1017 const SkFixed kSnapHalf = kSnapDigit >> 1;
1018 const SkFixed kSnapMask = (-1 ^ (kSnapDigit - 1));
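            // Worked example of the snapping: with left ~= 1.3 (0x14CCD),
            // 0x14CCD + kSnapHalf == 0x154CD, and 0x154CD & kSnapMask ==
            // 0x15000 == 1.3125 == 21/16, the nearest multiple of 1/16.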
1019 left += kSnapHalf; rite += kSnapHalf; // For fast rounding
1020
1021 // Number of blit_trapezoid_row calls we'll have
1022             int count = SkFixedCeilToInt(local_bot_fixed) - SkFixedFloorToInt(y);
1023 #ifdef SK_DEBUG
1024 total_y_cnt += count;
1025 frac_y_cnt += ((int)(y & 0xFFFF0000) != y);
1026 if ((int)(y & 0xFFFF0000) != y) {
1027 SkDebugf("frac_y = %f\n", SkFixedToFloat(y));
1028 }
1029 #endif
1030
1031             // If we're using mask blitter, we advance the mask row in this function
1032 // to save some "if" condition checks.
1033 SkAlpha* maskRow = nullptr;
1034 if (isUsingMask) {
1035                 maskRow = static_cast<MaskAdditiveBlitter*>(blitter)->getRow(y >> 16);
1036 }
1037
1038             // Instead of writing one loop that handles both partial-row blit_trapezoid_row
1039             // and full-row trapezoid_row together, we use the following 3-stage flow to
1040             // handle partial-row blit and full-row blit separately. It will save us much time
1041 // on changing y, left, and rite.
1042 if (count > 1) {
1043                 if ((int)(y & 0xFFFF0000) != y) { // There's a partial-row on the top
1044 count--;
1045 SkFixed nextY = SkFixedCeilToFixed(y + 1);
1046 SkFixed dY = nextY - y;
1047 SkFixed nextLeft = left + SkFixedMul_lowprec(dLeft, dY);
1048 SkFixed nextRite = rite + SkFixedMul_lowprec(dRite, dY);
1049 blit_trapezoid_row(blitter, y >> 16, left & kSnapMask, rite & kSnapMask,
1050                             nextLeft & kSnapMask, nextRite & kSnapMask, leftE->fDY, riteE->fDY,
1051 getPartialAlpha(0xFF, dY), maskRow, isUsingMask);
1052 left = nextLeft; rite = nextRite; y = nextY;
1053 }
1054
1055 while (count > 1) { // Full rows in the middle
1056 count--;
1057 if (isUsingMask) {
1058                         maskRow = static_cast<MaskAdditiveBlitter*>(blitter)->getRow(y >> 16);
1059 }
1060                     SkFixed nextY = y + SK_Fixed1, nextLeft = left + dLeft, nextRite = rite + dRite;
1061 blit_trapezoid_row(blitter, y >> 16, left & kSnapMask, rite & kSnapMask,
1062 nextLeft & kSnapMask, nextRite & kSnapMask,
1063 leftE->fDY, riteE->fDY, 0xFF, maskRow, isUsingMask);
1064 left = nextLeft; rite = nextRite; y = nextY;
1065 }
1066 }
1067
1068 if (isUsingMask) {
1069                 maskRow = static_cast<MaskAdditiveBlitter*>(blitter)->getRow(y >> 16);
1070 }
1071
1072 SkFixed dY = local_bot_fixed - y; // partial-row on the bottom
1073 SkASSERT(dY <= SK_Fixed1);
1074             // Smooth jumping to integer y may make the last nextLeft/nextRite out of bound.
1075 // Take them back into the bound here.
1076             SkFixed nextLeft = SkTMax(left + SkFixedMul_lowprec(dLeft, dY), leftBound);
1077             SkFixed nextRite = SkTMin(rite + SkFixedMul_lowprec(dRite, dY), riteBound);
1078             blit_trapezoid_row(blitter, y >> 16, left & kSnapMask, rite & kSnapMask,
1079                     nextLeft & kSnapMask, nextRite & kSnapMask, leftE->fDY, riteE->fDY,
1080 getPartialAlpha(0xFF, dY), maskRow, isUsingMask);
1081 left = nextLeft; rite = nextRite; y = local_bot_fixed;
1082 left -= kSnapHalf; rite -= kSnapHalf;
1083 }
1084
1085 leftE->fX = left;
1086 riteE->fX = rite;
1087 leftE->fY = riteE->fY = y;
1088 }
1089
1090 END_WALK:
1091 ;
1092 #ifdef SK_DEBUG
1093 SkDebugf("frac_y_cnt = %d, total_y_cnt = %d\n", frac_y_cnt, total_y_cnt);
1094 #endif
1095 }
1096
1097 void SkScan::aaa_fill_path(const SkPath& path, const SkIRect* clipRect, AdditiveBlitter* blitter,
1098         int start_y, int stop_y, const SkRegion& clipRgn, bool isUsingMask) {
1099 SkASSERT(blitter);
1100
1101 if (path.isInverseFillType() || !path.isConvex()) {
1102 // fall back to supersampling AA
1103         SkScan::AntiFillPath(path, clipRgn, blitter->getRealBlitter(true), false);
1104 return;
1105 }
1106
1107 SkEdgeBuilder builder;
1108
1109     // If we're convex, then we need both edges, even if the right edge is past the clip
1110 const bool canCullToTheRight = !path.isConvex();
1111
1112 SkASSERT(GlobalAAConfig::getInstance().fUseAnalyticAA);
1113 int count = builder.build(path, clipRect, 0, canCullToTheRight, true);
1114 SkASSERT(count >= 0);
1115
1116 SkAnalyticEdge** list = (SkAnalyticEdge**)builder.analyticEdgeList();
1117
1118 SkIRect rect = clipRgn.getBounds();
1119 if (0 == count) {
1120 if (path.isInverseFillType()) {
1121 /*
1122 * Since we are in inverse-fill, our caller has already drawn above
1123 * our top (start_y) and will draw below our bottom (stop_y). Thus
1124 * we need to restrict our drawing to the intersection of the clip
1125 * and those two limits.
1126 */
1127 if (rect.fTop < start_y) {
1128 rect.fTop = start_y;
1129 }
1130 if (rect.fBottom > stop_y) {
1131 rect.fBottom = stop_y;
1132 }
1133 if (!rect.isEmpty()) {
1134             blitter->blitRect(rect.fLeft, rect.fTop, rect.width(), rect.height());
1135 }
1136 }
1137 return;
1138 }
1139
1140 SkAnalyticEdge headEdge, tailEdge, *last;
1141     // this returns the first and last edge after they're sorted into a doubly linked list
1142 SkAnalyticEdge* edge = sort_edges(list, count, &last);
1143
1144 headEdge.fPrev = nullptr;
1145 headEdge.fNext = edge;
1146 headEdge.fUpperY = headEdge.fLowerY = SK_MinS32;
1147 headEdge.fX = SK_MinS32;
1148 headEdge.fDX = 0;
1149 headEdge.fDY = SK_MaxS32;
1150 headEdge.fUpperX = SK_MinS32;
1151 edge->fPrev = &headEdge;
1152
1153 tailEdge.fPrev = last;
1154 tailEdge.fNext = nullptr;
1155 tailEdge.fUpperY = tailEdge.fLowerY = SK_MaxS32;
1156     tailEdge.fX = SK_MaxS32;
1157     tailEdge.fDX = 0;
1158     tailEdge.fDY = SK_MaxS32;
1159     tailEdge.fUpperX = SK_MaxS32;
1160 last->fNext = &tailEdge;
1161
1162     // now edge is the head of the sorted linked list
1163
1164 if (clipRect && start_y < clipRect->fTop) {
1165 start_y = clipRect->fTop;
1166 }
1167 if (clipRect && stop_y > clipRect->fBottom) {
1168 stop_y = clipRect->fBottom;
1169 }
1170
1171 if (!path.isInverseFillType() && path.isConvex()) {
1172         SkASSERT(count >= 2); // convex walker does not handle missing right edges
1173 aaa_walk_convex_edges(&headEdge, blitter, start_y, stop_y,
1174 rect.fLeft << 16, rect.fRight << 16, isUsingMask);
1175 } else {
1176 SkFAIL("Concave AAA is not yet implemented!");
1177 }
1178 }
1179
1180 ///////////////////////////////////////////////////////////////////////////////
1181
1182 void SkScan::AAAFillPath(const SkPath& path, const SkRegion& origClip, SkBlitter* blitter) {
1183 if (origClip.isEmpty()) {
1184 return;
1185 }
1186
1187 const bool isInverse = path.isInverseFillType();
1188 SkIRect ir;
1189 path.getBounds().roundOut(&ir);
1190 if (ir.isEmpty()) {
1191 if (isInverse) {
1192 blitter->blitRegion(origClip);
1193 }
1194 return;
1195 }
1196
1197 SkIRect clippedIR;
1198 if (isInverse) {
1199 // If the path is an inverse fill, it's going to fill the entire
1200 // clip, and we care whether the entire clip exceeds our limits.
1201 clippedIR = origClip.getBounds();
1202 } else {
1203 if (!clippedIR.intersect(ir, origClip.getBounds())) {
1204 return;
1205 }
1206 }
1207
1208 // Our antialiasing can't handle a clip larger than 32767, so we restrict
1209 // the clip to that limit here. (the runs[] uses int16_t for its index).
1210 //
1211 // A more general solution (one that could also eliminate the need to
1212     // disable aa based on ir bounds (see overflows_short_shift)) would be
1213 // to tile the clip/target...
1214 SkRegion tmpClipStorage;
1215 const SkRegion* clipRgn = &origClip;
1216 {
1217 static const int32_t kMaxClipCoord = 32767;
1218 const SkIRect& bounds = origClip.getBounds();
1219 if (bounds.fRight > kMaxClipCoord || bounds.fBottom > kMaxClipCoord) {
1220 SkIRect limit = { 0, 0, kMaxClipCoord, kMaxClipCoord };
1221 tmpClipStorage.op(origClip, limit, SkRegion::kIntersect_Op);
1222 clipRgn = &tmpClipStorage;
1223 }
1224 }
1225 // for here down, use clipRgn, not origClip
1226
1227 SkScanClipper clipper(blitter, clipRgn, ir);
1228 const SkIRect* clipRect = clipper.getClipRect();
1229
1230 if (clipper.getBlitter() == nullptr) { // clipped out
1231 if (isInverse) {
1232 blitter->blitRegion(*clipRgn);
1233 }
1234 return;
1235 }
1236
1237 // now use the (possibly wrapped) blitter
1238 blitter = clipper.getBlitter();
1239
1240 if (isInverse) {
1241 // Currently, we use the old path to render the inverse path,
1242 // so we don't need this.
1243 // sk_blit_above(blitter, ir, *clipRgn);
1244 }
1245
1246 SkASSERT(SkIntToScalar(ir.fTop) <= path.getBounds().fTop);
1247
1248 if (MaskAdditiveBlitter::canHandleRect(ir) && !isInverse) {
1249 MaskAdditiveBlitter additiveBlitter(blitter, ir, *clipRgn, isInverse);
1250         aaa_fill_path(path, clipRect, &additiveBlitter, ir.fTop, ir.fBottom, *clipRgn, true);
1251 } else {
1252         RunBasedAdditiveBlitter additiveBlitter(blitter, ir, *clipRgn, isInverse);
1253         aaa_fill_path(path, clipRect, &additiveBlitter, ir.fTop, ir.fBottom, *clipRgn, false);
1254 }
1255
1256 if (isInverse) {
1257 // Currently, we use the old path to render the inverse path,
1258 // so we don't need this.
1259 // sk_blit_below(blitter, ir, *clipRgn);
1260 }
1261 }
1262
1263 // This almost copies SkScan::AntiFillPath
1264 void SkScan::AAAFillPath(const SkPath& path, const SkRasterClip& clip, SkBlitter* blitter) {
1265 if (clip.isEmpty()) {
1266 return;
1267 }
1268
1269 if (clip.isBW()) {
1270 AAAFillPath(path, clip.bwRgn(), blitter);
1271 } else {
1272 SkRegion tmp;
1273 SkAAClipBlitter aaBlitter;
1274
1275 tmp.setRect(clip.getBounds());
1276 aaBlitter.init(blitter, &clip.aaRgn());
1277 AAAFillPath(path, tmp, &aaBlitter);
1278 }
1279 }