Chromium Code Reviews

Side by Side Diff: src/core/SkScan_AAAPath.cpp

Issue 2221103002: Analytic AntiAlias for Convex Shapes (Closed) Base URL: https://skia.googlesource.com/skia.git@master
Patch Set: Fix alpha computation Created 4 years, 4 months ago
1 /*
2 * Copyright 2016 The Android Open Source Project
3 *
4 * Use of this source code is governed by a BSD-style license that can be
5 * found in the LICENSE file.
6 */
7
8 #include "SkAntiRun.h"
9 #include "SkBlitter.h"
10 #include "SkEdge.h"
11 #include "SkEdgeBuilder.h"
12 #include "SkGeometry.h"
13 #include "SkPath.h"
14 #include "SkQuadClipper.h"
15 #include "SkRasterClip.h"
16 #include "SkRegion.h"
17 #include "SkScan.h"
18 #include "SkScanPriv.h"
19 #include "SkTemplates.h"
20 #include "SkTSort.h"
21 #include "SkUtils.h"
22
23 ///////////////////////////////////////////////////////////////////////////////
24
25 /*
26
27 The following is a high-level overview of our analytic anti-aliasing
28 algorithm. We consider a path as a collection of line segments, as
29 quadratic/cubic curves are converted to small line segments. Without loss of
30 generality, let's assume that the draw region is [0, W] x [0, H].
31
32 Our algorithm is based on horizontal scan lines (y = c_i) as the previous
33 sampling-based algorithm did. However, our algorithm uses non-equal-spaced
34 scan lines, while the previous method always uses equal-spaced scan lines,
35 such as (y = 1/2 + 0, 1/2 + 1, 1/2 + 2, ...) in the previous non-AA algorithm,
36 and (y = 1/8 + 1/4, 1/8 + 2/4, 1/8 + 3/4, ...) in the previous
37 16-supersampling AA algorithm.
38
39 Our algorithm contains scan lines y = c_i for c_i that is either:
40
41 1. an integer between [0, H]
42
43 2. the y value of a line segment endpoint
44
45 3. the y value of an intersection of two line segments
46
47 For two consecutive scan lines y = c_i, y = c_{i+1}, we analytically compute
48 the coverage of this horizontal strip of our path on each pixel. This can be
49 done very efficiently because the strip of our path now only consists of
50 trapezoids whose top and bottom edges are y = c_i, y = c_{i+1} (this includes
51 rectangles and triangles as special cases).
52
53 We now describe how the coverage of a single pixel is computed against such a
54 trapezoid. That coverage is essentially the intersection area of a rectangle
55 (e.g., [0, 1] x [c_i, c_{i+1}]) and our trapezoid. However, that intersection
56 could be complicated, as shown in the example region A below:
57
58 +-----------\----+
59 | \ C|
60 | \ |
61 \ \ |
62 |\ A \|
63 | \ \
64 | \ |
65 | B \ |
66 +----\-----------+
67
68 However, we don't have to compute the area of A directly. Instead, we can
69 compute the excluded areas, B and C, quite easily, because they're
70 just triangles. In fact, we can prove that an excluded region (take B as an
71 example) is either itself a simple trapezoid (including rectangles, triangles,
72 and empty regions), or its opposite (the opposite of B is A + C) is a simple
73 trapezoid. In any case, we can compute its area efficiently.
74
75 In summary, our algorithm has a higher quality because it generates ground-
76 truth coverages analytically. It is also faster because it needs far fewer
77 unnecessary horizontal scan lines. For example, given a triangle path, the
78 number of scan lines in our algorithm is only about 3 + H while the
79 16-supersampling algorithm has about 4H scan lines.
80
81 */
82
83 ///////////////////////////////////////////////////////////////////////////////
84
85 inline void addAlpha(SkAlpha& alpha, SkAlpha delta) {
86 SkASSERT(alpha + (int)delta <= 0xFF);
87 alpha += delta;
88 }
89
90 class AdditiveBlitter : public SkBlitter {
91 public:
92 virtual ~AdditiveBlitter() {}
93
94 virtual SkBlitter* getRealBlitter(bool forceRealBlitter = false) = 0;
95
96 virtual void blitAntiH(int x, int y, const SkAlpha antialias[], int len) = 0;
97 virtual void blitAntiH(int x, int y, const SkAlpha alpha) = 0;
98 virtual void blitAntiH(int x, int y, int width, const SkAlpha alpha) = 0;
99
100 void blitAntiH(int x, int y, const SkAlpha antialias[], const int16_t runs[]) {
101 SkDEBUGFAIL("Please call real blitter's blitAntiH instead.");
102 }
103
104 void blitV(int x, int y, int height, SkAlpha alpha) override {
105 SkDEBUGFAIL("Please call real blitter's blitV instead.");
106 }
107
108 void blitH(int x, int y, int width) override {
109 SkDEBUGFAIL("Please call real blitter's blitH instead.");
110 }
111
112 void blitRect(int x, int y, int width, int height) override {
113 SkDEBUGFAIL("Please call real blitter's blitRect instead.");
114 }
115
116 void blitAntiRect(int x, int y, int width, int height,
117 SkAlpha leftAlpha, SkAlpha rightAlpha) override {
118 SkDEBUGFAIL("Please call real blitter's blitAntiRect instead.");
119 }
120
121 virtual int getWidth() = 0;
122 };
123
124 // We need this mask blitter because it significantly accelerates small path filling.
125 class MaskAdditiveBlitter : public AdditiveBlitter {
126 public:
127 MaskAdditiveBlitter(SkBlitter* realBlitter, const SkIRect& ir, const SkRegion& clip,
128 bool isInverse);
129 ~MaskAdditiveBlitter() {
130 fRealBlitter->blitMask(fMask, fClipRect);
131 }
132
133 // Most of the time, we still consider this mask blitter as the real blitter
134 // so we can accelerate blitRect and others. But sometimes we want to return
135 // the absolute real blitter (e.g., when we fall back to the old code path).
136 SkBlitter* getRealBlitter(bool forceRealBlitter) override {
137 return forceRealBlitter ? fRealBlitter : this;
138 }
139
140 // Virtual calls are slow, so don't use this. Directly add alpha to the mask instead.
141 void blitAntiH(int x, int y, const SkAlpha antialias[], int len) override;
142
143 // The following methods are used to blit rectangles during aaa_walk_convex_edges.
144 // Since there aren't many rectangles, we can still bear the slow speed of virtual functions.
145 void blitAntiH(int x, int y, const SkAlpha alpha) override;
146 void blitAntiH(int x, int y, int width, const SkAlpha alpha) override;
147 void blitV(int x, int y, int height, SkAlpha alpha) override;
148 void blitRect(int x, int y, int width, int height) override;
149 void blitAntiRect(int x, int y, int width, int height,
150 SkAlpha leftAlpha, SkAlpha rightAlpha) override;
151
152 int getWidth() override { return fClipRect.width(); }
153
154 static bool canHandleRect(const SkIRect& bounds) {
155 int width = bounds.width();
156 int64_t rb = SkAlign4(width);
157 // use 64 bits to detect overflow
158 int64_t storage = rb * bounds.height();
159
160 return (width <= MaskAdditiveBlitter::kMAX_WIDTH) &&
161 (storage <= MaskAdditiveBlitter::kMAX_STORAGE);
162 }
163
164 // Return a pointer where pointer[x] corresponds to the alpha of (x, y)
165 inline uint8_t* getRow(int y) {
166 if (y != fY) {
167 fY = y;
168 fRow = fMask.fImage + (y - fMask.fBounds.fTop) * fMask.fRowBytes - fMask.fBounds.fLeft;
169 }
170 return fRow;
171 }
172
173 private:
174 // so we don't try to do very wide things, where the RLE blitter would be faster
175 static const int kMAX_WIDTH = 32;
176 static const int kMAX_STORAGE = 1024;
177
178 SkBlitter* fRealBlitter;
179 SkMask fMask;
180 SkIRect fClipRect;
181 // we add 2 because we can write 1 extra byte at either end due to precision error
182 uint32_t fStorage[(kMAX_STORAGE >> 2) + 2];
183
184 uint8_t* fRow;
185 int fY;
186 };
187
188 MaskAdditiveBlitter::MaskAdditiveBlitter(SkBlitter* realBlitter, const SkIRect& ir, const SkRegion& clip,
189 bool isInverse) {
190 SkASSERT(canHandleRect(ir));
191 SkASSERT(!isInverse);
192
193 fRealBlitter = realBlitter;
194
195 fMask.fImage = (uint8_t*)fStorage + 1; // There's 1 extra byte at either end of fStorage
196 fMask.fBounds = ir;
197 fMask.fRowBytes = ir.width();
198 fMask.fFormat = SkMask::kA8_Format;
199
200 fY = ir.fTop - 1;
201 fRow = nullptr;
202
203 fClipRect = ir;
204 if (!fClipRect.intersect(clip.getBounds())) {
205 SkASSERT(0);
206 fClipRect.setEmpty();
207 }
208
209 memset(fStorage, 0, fMask.fBounds.height() * fMask.fRowBytes + 2);
210 }
211
212 void MaskAdditiveBlitter::blitAntiH(int x, int y, const SkAlpha antialias[], int len) {
213 SkFAIL("Don't use this; directly add alphas to the mask.");
214 }
215
216 void MaskAdditiveBlitter::blitAntiH(int x, int y, const SkAlpha alpha) {
217 SkASSERT(x >= fMask.fBounds.fLeft - 1);
218 addAlpha(this->getRow(y)[x], alpha);
219 }
220
221 void MaskAdditiveBlitter::blitAntiH(int x, int y, int width, const SkAlpha alpha ) {
222 SkASSERT(x >= fMask.fBounds.fLeft - 1);
223 uint8_t* row = this->getRow(y);
224 for (int i=0; i<width; i++) {
225 addAlpha(row[x + i], alpha);
226 }
227 }
228
229 void MaskAdditiveBlitter::blitV(int x, int y, int height, SkAlpha alpha) {
230 if (alpha == 0) {
231 return;
232 }
233 SkASSERT(x >= fMask.fBounds.fLeft - 1);
234 // This must be called as if this is a real blitter.
235 // So we directly set alpha rather than adding it.
236 uint8_t* row = this->getRow(y);
237 for (int i=0; i<height; i++) {
238 row[x] = alpha;
239 row += fMask.fRowBytes;
240 }
241 }
242
243 void MaskAdditiveBlitter::blitRect(int x, int y, int width, int height) {
244 SkASSERT(x >= fMask.fBounds.fLeft - 1);
245 // This must be called as if this is a real blitter.
246 // So we directly set alpha rather than adding it.
247 uint8_t* row = this->getRow(y);
248 for (int i=0; i<height; i++) {
249 memset(row + x, 0xFF, width);
250 row += fMask.fRowBytes;
251 }
252 }
253
254 void MaskAdditiveBlitter::blitAntiRect(int x, int y, int width, int height,
255 SkAlpha leftAlpha, SkAlpha rightAlpha) {
256 blitV(x, y, height, leftAlpha);
257 blitV(x + 1 + width, y, height, rightAlpha);
258 blitRect(x + 1, y, width, height);
259 }
260
261 class RunBasedAdditiveBlitter : public AdditiveBlitter {
262 public:
263 RunBasedAdditiveBlitter(SkBlitter* realBlitter, const SkIRect& ir, const SkRegion& clip,
264 bool isInverse);
265 ~RunBasedAdditiveBlitter();
266
267 SkBlitter* getRealBlitter(bool forceRealBlitter) override;
268
269 void blitAntiH(int x, int y, const SkAlpha antialias[], int len) override;
270 void blitAntiH(int x, int y, const SkAlpha alpha) override;
271 void blitAntiH(int x, int y, int width, const SkAlpha alpha) override;
272
273 int getWidth() override;
274
275 private:
276 SkBlitter* fRealBlitter;
277
278 /// Current y coordinate
279 int fCurrY;
280 /// Widest row of region to be blitted
281 int fWidth;
282 /// Leftmost x coordinate in any row
283 int fLeft;
284 /// Initial y coordinate (top of bounds).
285 int fTop;
286
287 // The next three variables are used to track a circular buffer that
288 // contains the values used in SkAlphaRuns. These variables should only
289 // ever be updated in advanceRuns(), and fRuns should always point to
290 // a valid SkAlphaRuns...
291 int fRunsToBuffer;
292 void* fRunsBuffer;
293 int fCurrentRun;
294 SkAlphaRuns fRuns;
295
296 int fOffsetX;
297
298 inline bool check(int x, int width) {
299 #ifdef SK_DEBUG
300 if (x < 0 || x + width > fWidth) {
301 SkDebugf("Ignore x = %d, width = %d\n", x, width);
302 }
303 #endif
304 return (x >= 0 && x + width <= fWidth);
305 }
306
307 // extra one to store the zero at the end
308 inline int getRunsSz() const { return (fWidth + 1 + (fWidth + 2)/2) * sizeof(int16_t); }
309
310 // This function updates the fRuns variable to point to the next buffer space
311 // with adequate storage for a SkAlphaRuns. It mostly just advances fCurrentRun
312 // and resets fRuns to point to an empty scanline.
313 inline void advanceRuns() {
314 const size_t kRunsSz = this->getRunsSz();
315 fCurrentRun = (fCurrentRun + 1) % fRunsToBuffer;
316 fRuns.fRuns = reinterpret_cast<int16_t*>(
317 reinterpret_cast<uint8_t*>(fRunsBuffer) + fCurrentRun * kRunsSz);
318 fRuns.fAlpha = reinterpret_cast<SkAlpha*>(fRuns.fRuns + fWidth + 1);
319 fRuns.reset(fWidth);
320 }
321
322 // Blitting 0xFF and 0 is much faster so we snap alphas close to them
323 inline SkAlpha snapAlpha(SkAlpha alpha) {
324 return alpha > 247 ? 0xFF : alpha < 8 ? 0 : alpha;
325 }
326
327 inline void flush() {
328 if (fCurrY >= fTop) {
329 SkASSERT(fCurrentRun < fRunsToBuffer);
330 for (int x = 0; fRuns.fRuns[x]; x += fRuns.fRuns[x]) {
331 // It seems that blitting 255 or 0 is much faster than blitting 254 or 1
332 fRuns.fAlpha[x] = snapAlpha(fRuns.fAlpha[x]);
333 }
334 if (!fRuns.empty()) {
335 // SkDEBUGCODE(fRuns.dump();)
336 fRealBlitter->blitAntiH(fLeft, fCurrY, fRuns.fAlpha, fRuns.fRuns);
337 this->advanceRuns();
338 fOffsetX = 0;
339 }
340 fCurrY = fTop - 1;
341 }
342 }
343
344 inline void checkY(int y) {
345 if (y != fCurrY) {
346 this->flush();
347 fCurrY = y;
348 }
349 }
350 };
351
352 RunBasedAdditiveBlitter::RunBasedAdditiveBlitter(SkBlitter* realBlitter, const SkIRect& ir, const SkRegion& clip,
353 bool isInverse) {
354 fRealBlitter = realBlitter;
355
356 SkIRect sectBounds;
357 if (isInverse) {
358 // We use the clip bounds instead of the ir, since we may be asked to
359 // draw outside of the rect when we have an inverse filltype
360 sectBounds = clip.getBounds();
361 } else {
362 if (!sectBounds.intersect(ir, clip.getBounds())) {
363 sectBounds.setEmpty();
364 }
365 }
366
367 const int left = sectBounds.left();
368 const int right = sectBounds.right();
369
370 fLeft = left;
371 fWidth = right - left;
372 fTop = sectBounds.top();
373 fCurrY = fTop - 1;
374
375 fRunsToBuffer = realBlitter->requestRowsPreserved();
376 fRunsBuffer = realBlitter->allocBlitMemory(fRunsToBuffer * this->getRunsSz());
377 fCurrentRun = -1;
378
379 this->advanceRuns();
380
381 fOffsetX = 0;
382 }
383
384 RunBasedAdditiveBlitter::~RunBasedAdditiveBlitter() {
385 this->flush();
386 }
387
388 SkBlitter* RunBasedAdditiveBlitter::getRealBlitter(bool forceRealBlitter) {
389 return fRealBlitter;
390 }
391
392 void RunBasedAdditiveBlitter::blitAntiH(int x, int y, const SkAlpha antialias[], int len) {
393 checkY(y);
394 x -= fLeft;
395
396 if (x < 0) {
397 antialias -= x;
398 x = 0;
399 }
400 len = SkTMin(len, fWidth - x);
401 SkASSERT(check(x, len));
402
403 if (x < fOffsetX) {
404 fOffsetX = 0;
405 }
406
407 fOffsetX = fRuns.add(x, 0, len, 0, 0, fOffsetX); // Break the run
408 for (int i = 0; i < len; i += fRuns.fRuns[x + i]) {
409 for (int j = 1; j < fRuns.fRuns[x + i]; j++) {
410 fRuns.fRuns[x + i + j] = 1;
411 fRuns.fAlpha[x + i + j] = fRuns.fAlpha[x + i];
412 }
413 fRuns.fRuns[x + i] = 1;
414 }
415 for (int i=0; i<len; i++) {
416 addAlpha(fRuns.fAlpha[x + i], antialias[i]);
417 }
418 }
419 void RunBasedAdditiveBlitter::blitAntiH(int x, int y, const SkAlpha alpha) {
420 checkY(y);
421 x -= fLeft;
422
423 if (x < fOffsetX) {
424 fOffsetX = 0;
425 }
426
427 if (this->check(x, 1)) {
428 fOffsetX = fRuns.add(x, 0, 1, 0, alpha, fOffsetX);
429 }
430 }
431
432 void RunBasedAdditiveBlitter::blitAntiH(int x, int y, int width, const SkAlpha alpha) {
433 checkY(y);
434 x -= fLeft;
435
436 if (x < fOffsetX) {
437 fOffsetX = 0;
438 }
439
440 if (this->check(x, width)) {
441 fOffsetX = fRuns.add(x, 0, width, 0, alpha, fOffsetX);
442 }
443 }
444
445 int RunBasedAdditiveBlitter::getWidth() { return fWidth; }
446
447 ///////////////////////////////////////////////////////////////////////////////
448
449 // Return the alpha of a trapezoid whose height is 1
450 inline SkAlpha trapezoidToAlpha(SkFixed l1, SkFixed l2) {
451 SkASSERT(l1 >= 0 && l2 >= 0);
452 return ((l1 + l2) >> 9);
453 }
454
455 struct PartialTriangleAlphaTable {
456 SkFixed table[(1 << 8) + 1][(1 << 9) + 1];
457
458 PartialTriangleAlphaTable() {
459 for (int i=0; i <= (1 << 8); i++) {
460 for (int j=0; j <= (1 << 9); j++) {
461 table[i][j] = (((i * i) >> 8) * j) >> 1;
462 }
463 }
464 }
465 };
466
467 class QuickPartialTriangleAlpha {
468 private:
469 static PartialTriangleAlphaTable table;
470 public:
471 static inline SkFixed lookup(SkFixed a, SkFixed b) {
472 SkASSERT(a <= SK_Fixed1 && b <= SK_Fixed1 << 1);
473 return table.table[a >> 8][b >> 8];
474 }
475 };
476
477 PartialTriangleAlphaTable QuickPartialTriangleAlpha::table;
478
479 // The alpha of right-triangle (a, a*b), in 16 bits
480 inline SkFixed partialTriangleToAlpha16(SkFixed a, SkFixed b) {
481 SkASSERT(a <= SK_Fixed1);
482 if (b <= SK_Fixed1 << 1) {
483 return QuickPartialTriangleAlpha::lookup(a, b);
484 } else {
485 // This is unlikely to happen because when b is large, we would
486 // mostly end up in trapezoidToAlpha rather than computing triangles
487 return ((((a >> 8) * (a >> 8)) >> 8) * (b >> 8)) >> 1;
488 }
489 }
490
491 // The alpha of right-triangle (a, a*b)
492 inline SkAlpha partialTriangleToAlpha(SkFixed a, SkFixed b) {
493 // (a >> 8) * (a >> 8) * (b >> 8) >> 9
494 return partialTriangleToAlpha16(a, b) >> 8;
495 }
496
497 inline SkAlpha getPartialAlpha(SkAlpha alpha, SkFixed partialHeight) {
498 return (alpha * partialHeight) >> 16;
499 }
500
501 inline SkAlpha getPartialAlpha(SkAlpha alpha, SkAlpha fullAlpha) {
502 return ((uint16_t)alpha * fullAlpha) >> 8;
503 }
504
505 // Suppose that line (l1, y)-(r1, y+1) intersects with (l2, y)-(r2, y+1),
506 // approximate (very coarsely) the x coordinate of the intersection.
507 inline SkFixed approximateIntersection(SkFixed l1, SkFixed r1, SkFixed l2, SkFixed r2) {
508 if (l1 > r1) { SkTSwap(l1, r1); }
509 if (l2 > r2) { SkTSwap(l2, r2); }
510 return (SkTMax(l1, l2) + SkTMin(r1, r2)) >> 1;
511 }
512
513 // Here we always send in l < SK_Fixed1, and the first alpha we want to compute is alphas[0]
514 inline void computeAlphaAboveLine(SkAlpha* alphas, SkFixed l, SkFixed r, SkFixed dY, SkAlpha fullAlpha) {
515 SkASSERT(l <= r);
516 SkASSERT(l >> 16 == 0);
517 int R = SkFixedCeilToInt(r);
518 if (R == 0) {
519 return;
520 } else if (R == 1) {
521 alphas[0] = getPartialAlpha(((R << 17) - l - r) >> 9, fullAlpha);
522 } else {
523 SkFixed first = SK_Fixed1 - l; // horizontal edge length of the left-most triangle
524 SkFixed last = r - ((R - 1) << 16); // horizontal edge length of the right-most triangle
525 SkFixed firstH = SkFixedMul_lowprec(first, dY); // vertical edge of the left-most triangle
526 alphas[0] = SkFixedMul_lowprec(first, firstH) >> 9; // triangle alpha
527 SkFixed alpha16 = firstH + (dY >> 1); // rectangle plus triangle
528 for (int i = 1; i < R - 1; i++) {
529 alphas[i] = alpha16 >> 8;
530 alpha16 += dY;
531 }
532 alphas[R - 1] = fullAlpha - partialTriangleToAlpha(last, dY);
533 }
534 }
535
536 // Here we always send in l < SK_Fixed1, and the first alpha we want to compute is alphas[0]
537 inline void computeAlphaBelowLine(SkAlpha* alphas, SkFixed l, SkFixed r, SkFixed dY, SkAlpha fullAlpha) {
538 SkASSERT(l <= r);
539 SkASSERT(l >> 16 == 0);
540 int R = SkFixedCeilToInt(r);
541 if (R == 0) {
542 return;
543 } else if (R == 1) {
544 alphas[0] = getPartialAlpha(trapezoidToAlpha(l, r), fullAlpha);
545 } else {
546 SkFixed first = SK_Fixed1 - l; // horizontal edge length of the left-most triangle
547 SkFixed last = r - ((R - 1) << 16); // horizontal edge length of the right-most triangle
548 SkFixed lastH = SkFixedMul_lowprec(last, dY); // vertical edge of the right-most triangle
549 alphas[R-1] = SkFixedMul_lowprec(last, lastH) >> 9; // triangle alpha
550 SkFixed alpha16 = lastH + (dY >> 1); // rectangle plus triangle
551 for (int i = R - 2; i > 0; i--) {
552 alphas[i] = alpha16 >> 8;
553 alpha16 += dY;
554 }
555 alphas[0] = fullAlpha - partialTriangleToAlpha(first, dY);
556 }
557 }
558
559 // Note that if fullAlpha != 0xFF, we'll multiply alpha by fullAlpha
560 inline void blit_single_alpha(AdditiveBlitter* blitter, int y, int x,
561 SkAlpha alpha, SkAlpha fullAlpha, SkAlpha* maskRow,
562 bool isUsingMask) {
563 if (isUsingMask) {
564 if (fullAlpha == 0xFF) {
565 maskRow[x] = alpha;
566 } else {
567 addAlpha(maskRow[x], getPartialAlpha(alpha, fullAlpha));
568 }
569 } else {
570 if (fullAlpha == 0xFF) {
571 blitter->getRealBlitter()->blitV(x, y, 1, alpha);
572 } else {
573 blitter->blitAntiH(x, y, getPartialAlpha(alpha, fullAlpha));
574 }
575 }
576 }
577
578 inline void blit_two_alphas(AdditiveBlitter* blitter, int y, int x,
579 SkAlpha a1, SkAlpha a2, SkAlpha fullAlpha, SkAlpha* maskRow,
580 bool isUsingMask) {
581 if (isUsingMask) {
582 addAlpha(maskRow[x], a1);
583 addAlpha(maskRow[x + 1], a2);
584 } else {
585 if (fullAlpha == 0xFF) {
586 blitter->getRealBlitter()->blitV(x, y, 1, a1);
587 blitter->getRealBlitter()->blitV(x + 1, y, 1, a2);
588 } else {
589 blitter->blitAntiH(x, y, a1);
590 blitter->blitAntiH(x + 1, y, a2);
591 }
592 }
593 }
594
595 // It's important that this is inline. Otherwise it'll be much slower.
596 SK_ALWAYS_INLINE void blit_full_alpha(AdditiveBlitter* blitter, int y, int x, int len,
597 SkAlpha fullAlpha, SkAlpha* maskRow, bool isUsingMask) {
598 if (isUsingMask) {
599 for (int i=0; i<len; i++) {
600 addAlpha(maskRow[x + i], fullAlpha);
601 }
602 } else {
603 if (fullAlpha == 0xFF) {
604 blitter->getRealBlitter()->blitH(x, y, len);
605 } else {
606 blitter->blitAntiH(x, y, len, fullAlpha);
607 }
608 }
609 }
610
611 void blit_aaa_trapezoid_row(AdditiveBlitter* blitter, int y,
612 SkFixed ul, SkFixed ur, SkFixed ll, SkFixed lr,
613 SkFixed lDY, SkFixed rDY, SkAlpha fullAlpha, SkAlpha* maskRow,
614 bool isUsingMask) {
615 int L = SkFixedFloorToInt(ul), R = SkFixedCeilToInt(lr);
616 int len = R - L;
617
618 if (len == 1) {
619 SkAlpha alpha = trapezoidToAlpha(ur - ul, lr - ll);
620 blit_single_alpha(blitter, y, L, alpha, fullAlpha, maskRow, isUsingMask);
621 return;
622 }
623
624 // SkDebugf("y = %d, len = %d, ul = %f, ur = %f, ll = %f, lr = %f\n", y, len,
625 // SkFixedToFloat(ul), SkFixedToFloat(ur), SkFixedToFloat(ll), SkFixedToFloat(lr));
626
627 const int kQuickLen = 31;
628 // This is faster than SkAutoSMalloc<1024>
629 char quickMemory[(sizeof(SkAlpha) * 2 + sizeof(int16_t)) * (kQuickLen + 1)];
630 SkAlpha* alphas;
631
632 if (len <= kQuickLen) {
633 alphas = (SkAlpha*)quickMemory;
634 } else {
635 alphas = new SkAlpha[(len + 1) * (sizeof(SkAlpha) * 2 + sizeof(int16_t))];
636 }
637
638 SkAlpha* tempAlphas = alphas + len + 1;
639 int16_t* runs = (int16_t*)(alphas + (len + 1) * 2);
640
641 for (int i = 0; i < len; i++) {
642 runs[i] = 1;
643 alphas[i] = fullAlpha;
644 }
645 runs[len] = 0;
646
647 int uL = SkFixedFloorToInt(ul);
648 int lL = SkFixedCeilToInt(ll);
649 if (uL + 2 == lL) { // We only need to compute two triangles, accelerate this special case
650 SkFixed first = (uL << 16) + SK_Fixed1 - ul;
651 SkFixed second = ll - ul - first;
652 SkAlpha a1 = fullAlpha - partialTriangleToAlpha(first, lDY);
653 SkAlpha a2 = partialTriangleToAlpha(second, lDY);
654 alphas[0] = alphas[0] > a1 ? alphas[0] - a1 : 0;
655 alphas[1] = alphas[1] > a2 ? alphas[1] - a2 : 0;
656 } else {
657 computeAlphaBelowLine(tempAlphas + uL - L, ul - (uL << 16), ll - (uL << 16),
658 lDY, fullAlpha);
659 for (int i = uL; i < lL; i++) {
660 if (alphas[i - L] > tempAlphas[i - L]) {
661 alphas[i - L] -= tempAlphas[i - L];
662 } else {
663 alphas[i - L] = 0;
664 }
665 }
666 }
667
668 int uR = SkFixedFloorToInt(ur);
669 int lR = SkFixedCeilToInt(lr);
670 if (uR + 2 == lR) { // We only need to compute two triangles, accelerate this special case
671 SkFixed first = (uR << 16) + SK_Fixed1 - ur;
672 SkFixed second = lr - ur - first;
673 SkAlpha a1 = partialTriangleToAlpha(first, rDY);
674 SkAlpha a2 = fullAlpha - partialTriangleToAlpha(second, rDY);
675 alphas[len-2] = alphas[len-2] > a1 ? alphas[len-2] - a1 : 0;
676 alphas[len-1] = alphas[len-1] > a2 ? alphas[len-1] - a2 : 0;
677 } else {
678 computeAlphaAboveLine(tempAlphas + uR - L, ur - (uR << 16), lr - (uR << 16),
679 rDY, fullAlpha);
680 for (int i = uR; i < lR; i++) {
681 if (alphas[i - L] > tempAlphas[i - L]) {
682 alphas[i - L] -= tempAlphas[i - L];
683 } else {
684 alphas[i - L] = 0;
685 }
686 }
687 }
688
689 if (isUsingMask) {
690 for (int i=0; i<len; i++) {
691 addAlpha(maskRow[L + i], alphas[i]);
692 }
693 } else {
694 if (fullAlpha == 0xFF) { // Real blitter is faster than RunBasedAdditiveBlitter
695 blitter->getRealBlitter()->blitAntiH(L, y, alphas, runs);
696 } else {
697 blitter->blitAntiH(L, y, alphas, len);
698 }
699 }
700
701 if (len > kQuickLen) {
702 delete[] alphas;
703 }
704 }
705
706 inline void blit_trapezoid_row(AdditiveBlitter* blitter, int y,
707 SkFixed ul, SkFixed ur, SkFixed ll, SkFixed lr,
708 SkFixed lDY, SkFixed rDY, SkAlpha fullAlpha,
709 SkAlpha* maskRow, bool isUsingMask) {
710 SkASSERT(lDY >= 0 && rDY >= 0); // We should only send in the absolute values
711
712 if (ul > ur) {
713 #ifdef SK_DEBUG
714 SkDebugf("ul = %f > ur = %f!\n", SkFixedToFloat(ul), SkFixedToFloat(ur));
715 #endif
716 return;
717 }
718
719 // Edges cross. Approximate the intersection. This should only happen due to the precision limit,
720 // so the approximation could be very coarse.
721 if (ll > lr) {
722 #ifdef SK_DEBUG
723 SkDebugf("approximate intersection: %d %f %f\n", y,
724 SkFixedToFloat(ll), SkFixedToFloat(lr));
725 #endif
726 ll = lr = approximateIntersection(ul, ll, ur, lr);
727 }
728
729 if (ul == ur && ll == lr) {
730 return; // empty trapezoid
731 }
732
733 // We're going to use the left line ul-ll and the rite line ur-lr
734 // to exclude the area that's not covered by the path.
735 // Swapping (ul, ll) or (ur, lr) won't affect that exclusion
736 // so we'll do that for simplicity.
737 if (ul > ll) { SkTSwap(ul, ll); }
738 if (ur > lr) { SkTSwap(ur, lr); }
739
740 SkFixed joinLeft = SkFixedCeilToFixed(ll);
741 SkFixed joinRite = SkFixedFloorToFixed(ur);
742 if (joinLeft <= joinRite) { // There's a rect from joinLeft to joinRite that we can blit
743 if (joinLeft < joinRite) {
744 blit_full_alpha(blitter, y, joinLeft >> 16, (joinRite - joinLeft) >> 16, fullAlpha,
745 maskRow, isUsingMask);
746 }
747 if (ul < joinLeft) {
748 int len = SkFixedCeilToInt(joinLeft - ul);
749 if (len == 1) {
750 SkAlpha alpha = trapezoidToAlpha(joinLeft - ul, joinLeft - ll);
751 blit_single_alpha(blitter, y, ul >> 16, alpha, fullAlpha, maskRow, isUsingMask);
752 } else if (len == 2) {
753 SkFixed first = joinLeft - SK_Fixed1 - ul;
754 SkFixed second = ll - ul - first;
755 SkAlpha a1 = partialTriangleToAlpha(first, lDY);
756 SkAlpha a2 = fullAlpha - partialTriangleToAlpha(second, lDY);
757 blit_two_alphas(blitter, y, ul >> 16, a1, a2, fullAlpha, maskRow, isUsingMask);
758 } else {
759 blit_aaa_trapezoid_row(blitter, y, ul, joinLeft, ll, joinLeft, lDY, SK_MaxS32,
760 fullAlpha, maskRow, isUsingMask);
761 }
762 }
763 if (lr > joinRite) {
764 int len = SkFixedCeilToInt(lr - joinRite);
765 if (len == 1) {
766 SkAlpha alpha = trapezoidToAlpha(ur - joinRite, lr - joinRite);
767 blit_single_alpha(blitter, y, joinRite >> 16, alpha, fullAlpha, maskRow,
768 isUsingMask);
769 } else if (len == 2) {
770 SkFixed first = joinRite + SK_Fixed1 - ur;
771 SkFixed second = lr - ur - first;
772 SkAlpha a1 = fullAlpha - partialTriangleToAlpha(first, rDY);
773 SkAlpha a2 = partialTriangleToAlpha(second, rDY);
774 blit_two_alphas(blitter, y, joinRite >> 16, a1, a2, fullAlpha, maskRow,
775 isUsingMask);
776 } else {
777 blit_aaa_trapezoid_row(blitter, y, joinRite, ur, joinRite, lr, SK_MaxS32, rDY,
778 fullAlpha, maskRow, isUsingMask);
779 }
780 }
781 } else {
782 blit_aaa_trapezoid_row(blitter, y, ul, ur, ll, lr, lDY, rDY, fullAlpha, maskRow,
783 isUsingMask);
784 }
785 }
786
787 ///////////////////////////////////////////////////////////////////////////////
788
789 static bool operator<(const SkAnalyticEdge& a, const SkAnalyticEdge& b) {
790 int valuea = a.fUpperY;
791 int valueb = b.fUpperY;
792
793 if (valuea == valueb) {
794 valuea = a.fX;
795 valueb = b.fX;
796 }
797
798 if (valuea == valueb) {
799 valuea = a.fDX;
800 valueb = b.fDX;
801 }
802
803 return valuea < valueb;
804 }
805
806 static SkAnalyticEdge* sort_edges(SkAnalyticEdge* list[], int count, SkAnalyticEdge** last) {
807 SkTQSort(list, list + count - 1);
808
809 // now make the edges linked in sorted order
810 for (int i = 1; i < count; i++) {
811 list[i - 1]->fNext = list[i];
812 list[i]->fPrev = list[i - 1];
813 }
814
815 *last = list[count - 1];
816 return list[0];
817 }
818
819 #ifdef SK_DEBUG
820 static void validate_sort(const SkAnalyticEdge* edge) {
821 SkFixed y = SkIntToFixed(-32768);
822
823 while (edge->fUpperY != SK_MaxS32) {
824 edge->validate();
825 SkASSERT(y <= edge->fUpperY);
826
827 y = edge->fUpperY;
828 edge = (SkAnalyticEdge*)edge->fNext;
829 }
830 }
831 #else
832 #define validate_sort(edge)
833 #endif
834
835 // return true if we're done with this edge
836 static bool update_edge(SkAnalyticEdge* edge, SkFixed last_y) {
837 if (last_y >= edge->fLowerY) {
838 if (edge->fCurveCount < 0) {
839 if (static_cast<SkAnalyticCubicEdge*>(edge)->updateCubic()) {
840 return false;
841 }
842 } else if (edge->fCurveCount > 0) {
843 if (static_cast<SkAnalyticQuadraticEdge*>(edge)->updateQuadratic()) {
844 return false;
845 }
846 }
847 return true;
848 }
849 SkASSERT(false);
850 return false;
851 }
852
853 // For an edge, we consider it smooth if the Dx doesn't change much, and Dy is large enough.
854 // For curves that are updating, the Dx is not changing much if fQDx/fCDx and fQDy/fCDy are
855 // relatively large compared to fQDDx/fCDDx and fQDDy/fCDDy
856 inline bool isSmoothEnough(SkAnalyticEdge* thisEdge, SkAnalyticEdge* nextEdge, int stop_y) {
857 if (thisEdge->fCurveCount < 0) {
858 SkAnalyticCubicEdge* cubicEdge = static_cast<SkAnalyticCubicEdge*>(thisEdge);
859 return cubicEdge->fCDx >> 1 >= cubicEdge->fCDDx >> cubicEdge->fCurveShift &&
860 cubicEdge->fCDy >> 1 >= cubicEdge->fCDDy >> cubicEdge->fCurveShift &&
861 // current Dy is (fCDy - (fCDDy >> ddshift)) >> dshift
862 (cubicEdge->fCDy - (cubicEdge->fCDDy >> cubicEdge->fCurveShift))
863 >> cubicEdge->fCubicDShift
864 >= SK_Fixed1;
865 } else if (thisEdge->fCurveCount > 0) {
866 SkAnalyticQuadraticEdge* quadraticEdge = static_cast<SkAnalyticQuadraticEdge*>(thisEdge);
867 return quadraticEdge->fQDx >> 1 >= quadraticEdge->fQDDx &&
868 quadraticEdge->fQDy >> 1 >= quadraticEdge->fQDDy &&
869 // current Dy is (fQDy - fQDDy) >> shift
870 (quadraticEdge->fQDy - quadraticEdge->fQDDy) >> quadraticEdge->fCurveShift
871 >= SK_Fixed1;
872 }
873 return SkAbs32(nextEdge->fDX - thisEdge->fDX) <= SK_Fixed1 && // DDx should be small
874 nextEdge->fLowerY - nextEdge->fUpperY >= SK_Fixed1; // Dy should be large
875 }
876
877 // Check if the leftE and riteE are changing smoothly in terms of fDX.
878 // If yes, we can later skip the fractional y and directly jump to integer y.
879 inline bool isSmoothEnough(SkAnalyticEdge* leftE, SkAnalyticEdge* riteE,
880 SkAnalyticEdge* currE, int stop_y) {
881 if (currE->fUpperY >= stop_y << 16) {
882 return false; // We're at the end so we won't skip anything
883 }
884 if (leftE->fLowerY + SK_Fixed1 < riteE->fLowerY) {
885 return isSmoothEnough(leftE, currE, stop_y); // Only leftE is changing
886 } else if (leftE->fLowerY > riteE->fLowerY + SK_Fixed1) {
887 return isSmoothEnough(riteE, currE, stop_y); // Only riteE is changing
888 }
889
890 // Now both edges are changing, find the second next edge
891 SkAnalyticEdge* nextCurrE = currE->fNext;
892 if (nextCurrE->fUpperY >= stop_y << 16) { // Check if we're at the end
893 return false;
894 }
895 if (*nextCurrE < *currE) {
896 SkTSwap(currE, nextCurrE);
897 }
898 return isSmoothEnough(leftE, currE, stop_y) && isSmoothEnough(riteE, nextCurrE, stop_y);
899 }
900
901 inline void aaa_walk_convex_edges(SkAnalyticEdge* prevHead, AdditiveBlitter* blitter,
902 int start_y, int stop_y, SkFixed leftBound, SkFixed riteBound,
903 bool isUsingMask) {
904 validate_sort((SkAnalyticEdge*)prevHead->fNext);
905
906 SkAnalyticEdge* leftE = (SkAnalyticEdge*) prevHead->fNext;
907 SkAnalyticEdge* riteE = (SkAnalyticEdge*) leftE->fNext;
908 SkAnalyticEdge* currE = (SkAnalyticEdge*) riteE->fNext;
909
910 SkFixed y = SkTMax(leftE->fUpperY, riteE->fUpperY);
911
912 #ifdef SK_DEBUG
913 int frac_y_cnt = 0;
914 int total_y_cnt = 0;
915 #endif
916
917 for (;;) {
918 SkASSERT(SkFixedFloorToInt(leftE->fUpperY) <= stop_y);
919 SkASSERT(SkFixedFloorToInt(riteE->fUpperY) <= stop_y);
920
921 leftE->goY(y);
922 riteE->goY(y);
923
924 if (leftE->fX > riteE->fX || (leftE->fX == riteE->fX &&
925                                       leftE->fDX > riteE->fDX)) {
926 SkTSwap(leftE, riteE);
927 }
928
929 SkFixed local_bot_fixed = SkMin32(leftE->fLowerY, riteE->fLowerY);
930 // Skip the fractional y if edges are changing smoothly
931 if (isSmoothEnough(leftE, riteE, currE, stop_y)) {
932 local_bot_fixed = SkFixedCeilToFixed(local_bot_fixed);
933 }
934 local_bot_fixed = SkMin32(local_bot_fixed, SkIntToFixed(stop_y + 1));
935
936 SkFixed left = leftE->fX;
937 SkFixed dLeft = leftE->fDX;
938 SkFixed rite = riteE->fX;
939 SkFixed dRite = riteE->fDX;
940 if (0 == (dLeft | dRite)) {
941 int fullLeft = SkFixedCeilToInt(left);
942 int fullRite = SkFixedFloorToInt(rite);
943 SkFixed partialLeft = SkIntToFixed(fullLeft) - left;
944 SkFixed partialRite = rite - SkIntToFixed(fullRite);
945 int fullTop = SkFixedCeilToInt(y);
946 int fullBot = SkFixedFloorToInt(local_bot_fixed);
947 SkFixed partialTop = SkIntToFixed(fullTop) - y;
948 SkFixed partialBot = local_bot_fixed - SkIntToFixed(fullBot);
949
950 if (fullRite >= fullLeft) {
951 // Blit all full-height rows from fullTop to fullBot
952 if (fullBot > fullTop) {
953                 blitter->getRealBlitter()->blitAntiRect(fullLeft - 1, fullTop, fullRite - fullLeft,
954                                                         fullBot - fullTop,
955                                                         partialLeft >> 8, partialRite >> 8);
956 }
957
958 if (partialTop > 0) { // blit first partial row
959 if (partialLeft > 0) {
960 blitter->blitAntiH(fullLeft - 1, fullTop - 1,
961                                            SkFixedMul_lowprec(partialTop, partialLeft) >> 8);
962 }
963 if (partialRite > 0) {
964 blitter->blitAntiH(fullRite, fullTop - 1,
965                                            SkFixedMul_lowprec(partialTop, partialRite) >> 8);
966 }
967                     blitter->blitAntiH(fullLeft, fullTop - 1, fullRite - fullLeft, partialTop >> 8);
968 }
969
970 if (partialBot > 0) { // blit last partial row
971 if (partialLeft > 0) {
972 blitter->blitAntiH(fullLeft - 1, fullBot,
973                                            SkFixedMul_lowprec(partialBot, partialLeft) >> 8);
974 }
975 if (partialRite > 0) {
976 blitter->blitAntiH(fullRite, fullBot,
977                                            SkFixedMul_lowprec(partialBot, partialRite) >> 8);
978 }
979                     blitter->blitAntiH(fullLeft, fullBot, fullRite - fullLeft, partialBot >> 8);
980 }
981 } else { // left and rite are within the same pixel
982 if (partialTop > 0) {
983 blitter->getRealBlitter()->blitV(fullLeft - 1, fullTop - 1, 1,
984                                                      SkFixedMul_lowprec(partialTop, rite - left) >> 8);
985 }
986 if (partialBot > 0) {
987 blitter->getRealBlitter()->blitV(fullLeft - 1, fullBot, 1,
988                                                      SkFixedMul_lowprec(partialBot, rite - left) >> 8);
989 }
990 if (fullBot >= fullTop) {
991                     blitter->getRealBlitter()->blitV(fullLeft - 1, fullTop, fullBot - fullTop,
992                                                      (rite - left) >> 8);
993 }
994 }
995
996 y = local_bot_fixed;
997 } else {
998             // The following constants are used to snap X.
999             // We snap X mainly for speedup (no tiny triangles) and to
1000             // avoid edge cases caused by precision errors.
1001 const SkFixed kSnapDigit = SK_Fixed1 >> 4;
1002 const SkFixed kSnapHalf = kSnapDigit >> 1;
1003 const SkFixed kSnapMask = (-1 ^ (kSnapDigit - 1));
1004 left += kSnapHalf, rite += kSnapHalf; // For fast rounding
1005
1006 // Number of blit_trapezoid_row calls we'll have
1007             int count = SkFixedCeilToInt(local_bot_fixed) - SkFixedFloorToInt(y);
1008 #ifdef SK_DEBUG
1009 total_y_cnt += count;
1010 frac_y_cnt += ((int)(y & 0xFFFF0000) != y);
1011 #endif
1012
1013             // If we're using the mask blitter, we advance the mask row in this function
1014             // to save some "if" condition checks.
1015 SkAlpha* maskRow = nullptr;
1016 if (isUsingMask) {
1017                 maskRow = static_cast<MaskAdditiveBlitter*>(blitter)->getRow(y >> 16);
1018 }
1019
1020             // Instead of writing one loop that handles both the partial-row and the
1021             // full-row blit_trapezoid_row calls together, we use the following 3-stage
1022             // flow to handle partial-row and full-row blits separately. This saves us
1023             // much time on changing y, left, and rite.
1024 if (count > 1) {
1025                 if ((int)(y & 0xFFFF0000) != y) { // There's a partial-row on the top
1026 count--;
1027 SkFixed nextY = SkFixedCeilToFixed(y + 1);
1028 SkFixed dY = nextY - y;
1029 SkFixed nextLeft = left + SkFixedMul_lowprec(dLeft, dY);
1030 SkFixed nextRite = rite + SkFixedMul_lowprec(dRite, dY);
1031 blit_trapezoid_row(blitter, y >> 16, left & kSnapMask, rite & kSnapMask,
1032                             nextLeft & kSnapMask, nextRite & kSnapMask, leftE->fDY, riteE->fDY,
1033                             getPartialAlpha(0xFF, dY), maskRow, isUsingMask);
1034 left = nextLeft, rite = nextRite, y = nextY;
1035 }
1036
1037 while (count > 1) { // Full rows in the middle
1038 count--;
1039 if (isUsingMask) {
1040                         maskRow = static_cast<MaskAdditiveBlitter*>(blitter)->getRow(y >> 16);
1041 }
1042                     SkFixed nextY = y + SK_Fixed1, nextLeft = left + dLeft, nextRite = rite + dRite;
1043                     blit_trapezoid_row(blitter, y >> 16, left & kSnapMask, rite & kSnapMask,
1044                             nextLeft & kSnapMask, nextRite & kSnapMask,
1045                             leftE->fDY, riteE->fDY, 0xFF, maskRow, isUsingMask);
1046 left = nextLeft, rite = nextRite, y = nextY;
1047 }
1048 }
1049
1050 if (isUsingMask) {
1051                 maskRow = static_cast<MaskAdditiveBlitter*>(blitter)->getRow(y >> 16);
1052 }
1053
1054 SkFixed dY = local_bot_fixed - y; // partial-row on the bottom
1055 SkASSERT(dY <= SK_Fixed1);
1056             // Smooth jumping to integer y may make the last nextLeft/nextRite out of bound.
1057 // Take them back into the bound here.
1058             SkFixed nextLeft = SkTMax(left + SkFixedMul_lowprec(dLeft, dY), leftBound);
1059             SkFixed nextRite = SkTMin(rite + SkFixedMul_lowprec(dRite, dY), riteBound);
1060             blit_trapezoid_row(blitter, y >> 16, left & kSnapMask, rite & kSnapMask,
1061                     nextLeft & kSnapMask, nextRite & kSnapMask, leftE->fDY, riteE->fDY,
1062                     getPartialAlpha(0xFF, dY), maskRow, isUsingMask);
1063 left = nextLeft, rite = nextRite, y = local_bot_fixed;
1064 left -= kSnapHalf, rite -= kSnapHalf;
1065 }
1066
1067 leftE->fX = left;
1068 riteE->fX = rite;
1069 leftE->fY = riteE->fY = y;
1070
1071         while (leftE->fLowerY <= y) { // Due to smooth jump, we may pass multiple short edges
1072 if (update_edge(leftE, y)) {
1073 if (SkFixedFloorToInt(currE->fUpperY) >= stop_y) {
1074 goto END_WALK;
1075 }
1076 leftE = currE;
1077 currE = (SkAnalyticEdge*)currE->fNext;
1078 }
1079 }
1080         while (riteE->fLowerY <= y) { // Due to smooth jump, we may pass multiple short edges
1081 if (update_edge(riteE, y)) {
1082 if (SkFixedFloorToInt(currE->fUpperY) >= stop_y) {
1083 goto END_WALK;
1084 }
1085 riteE = currE;
1086 currE = (SkAnalyticEdge*)currE->fNext;
1087 }
1088 }
1089
1090 SkASSERT(leftE);
1091 SkASSERT(riteE);
1092
1093 // check our bottom clip
1094 SkASSERT(y == local_bot_fixed);
1095 if (SkFixedFloorToInt(y) >= stop_y) {
1096 break;
1097 }
1098 }
1099
1100 END_WALK:
1101 ;
1102 #ifdef SK_DEBUG
1103 SkDebugf("frac_y_cnt = %d, total_y_cnt = %d\n", frac_y_cnt, total_y_cnt);
1104 #endif
1105 }
1106
1107 void SkScan::aaa_fill_path(const SkPath& path, const SkIRect* clipRect, AdditiveBlitter* blitter,
1108         int start_y, int stop_y, const SkRegion& clipRgn, bool isUsingMask) {
1109 SkASSERT(blitter);
1110
1111 if (path.isInverseFillType() || !path.isConvex()) {
1112 // fall back to supersampling AA
1113         SkScan::AntiFillPath(path, clipRgn, blitter->getRealBlitter(true), false);
1114 return;
1115 }
1116
1117 SkEdgeBuilder builder;
1118
1119     // If we're convex, then we need both edges, even if the right edge is past the clip
1120 const bool canCullToTheRight = !path.isConvex();
1121
1122 SkASSERT(GlobalAAConfig::getInstance().fUseAnalyticAA);
1123 int count = builder.build(path, clipRect, 0, canCullToTheRight, true);
1124 SkASSERT(count >= 0);
1125
1126 SkAnalyticEdge** list = (SkAnalyticEdge**)builder.analyticEdgeList();
1127
1128 SkIRect rect = clipRgn.getBounds();
1129 if (0 == count) {
1130 if (path.isInverseFillType()) {
1131 /*
1132 * Since we are in inverse-fill, our caller has already drawn above
1133 * our top (start_y) and will draw below our bottom (stop_y). Thus
1134 * we need to restrict our drawing to the intersection of the clip
1135 * and those two limits.
1136 */
1137 if (rect.fTop < start_y) {
1138 rect.fTop = start_y;
1139 }
1140 if (rect.fBottom > stop_y) {
1141 rect.fBottom = stop_y;
1142 }
1143 if (!rect.isEmpty()) {
1144                 blitter->blitRect(rect.fLeft, rect.fTop, rect.width(), rect.height());
1145 }
1146 }
1147 return;
1148 }
1149
1150 SkAnalyticEdge headEdge, tailEdge, *last;
1151     // this returns the first and last edge after they're sorted into a dlink list
1152 SkAnalyticEdge* edge = sort_edges(list, count, &last);
1153
1154 headEdge.fPrev = nullptr;
1155 headEdge.fNext = edge;
1156 headEdge.fUpperY = headEdge.fLowerY = SK_MinS32;
1157 headEdge.fX = SK_MinS32;
1158 headEdge.fDX = 0;
1159 headEdge.fDY = SK_MaxS32;
1160 headEdge.fUpperX = SK_MinS32;
1161 edge->fPrev = &headEdge;
1162
1163 tailEdge.fPrev = last;
1164 tailEdge.fNext = nullptr;
1165 tailEdge.fUpperY = tailEdge.fLowerY = SK_MaxS32;
1166     tailEdge.fX = SK_MaxS32;
1167     tailEdge.fDX = 0;
1168     tailEdge.fDY = SK_MaxS32;
1169     tailEdge.fUpperX = SK_MaxS32;
1170 last->fNext = &tailEdge;
1171
1172 // now edge is the head of the sorted linklist
1173
1174 if (clipRect && start_y < clipRect->fTop) {
1175 start_y = clipRect->fTop;
1176 }
1177 if (clipRect && stop_y > clipRect->fBottom) {
1178 stop_y = clipRect->fBottom;
1179 }
1180
1181 if (!path.isInverseFillType() && path.isConvex()) {
1182         SkASSERT(count >= 2);   // convex walker does not handle missing right edges
1183 aaa_walk_convex_edges(&headEdge, blitter, start_y, stop_y,
1184 rect.fLeft << 16, rect.fRight << 16, isUsingMask);
1185 } else {
1186 SkFAIL("Concave AAA is not yet implemented!");
1187 }
1188 }
1189
1190 ///////////////////////////////////////////////////////////////////////////////
1191
1192 void SkScan::AAAFillPath(const SkPath& path, const SkRegion& origClip, SkBlitter* blitter) {
1193 if (origClip.isEmpty()) {
1194 return;
1195 }
1196
1197 const bool isInverse = path.isInverseFillType();
1198 SkIRect ir;
1199 path.getBounds().roundOut(&ir);
1200 if (ir.isEmpty()) {
1201 if (isInverse) {
1202 blitter->blitRegion(origClip);
1203 }
1204 return;
1205 }
1206
1207 SkIRect clippedIR;
1208 if (isInverse) {
1209 // If the path is an inverse fill, it's going to fill the entire
1210 // clip, and we care whether the entire clip exceeds our limits.
1211 clippedIR = origClip.getBounds();
1212 } else {
1213 if (!clippedIR.intersect(ir, origClip.getBounds())) {
1214 return;
1215 }
1216 }
1217
1218 // Our antialiasing can't handle a clip larger than 32767, so we restrict
1219 // the clip to that limit here. (the runs[] uses int16_t for its index).
1220 //
1221 // A more general solution (one that could also eliminate the need to
1222     // disable aa based on ir bounds (see overflows_short_shift)) would be
1223 // to tile the clip/target...
1224 SkRegion tmpClipStorage;
1225 const SkRegion* clipRgn = &origClip;
1226 {
1227 static const int32_t kMaxClipCoord = 32767;
1228 const SkIRect& bounds = origClip.getBounds();
1229 if (bounds.fRight > kMaxClipCoord || bounds.fBottom > kMaxClipCoord) {
1230 SkIRect limit = { 0, 0, kMaxClipCoord, kMaxClipCoord };
1231 tmpClipStorage.op(origClip, limit, SkRegion::kIntersect_Op);
1232 clipRgn = &tmpClipStorage;
1233 }
1234 }
1235 // for here down, use clipRgn, not origClip
1236
1237 SkScanClipper clipper(blitter, clipRgn, ir);
1238 const SkIRect* clipRect = clipper.getClipRect();
1239
1240 if (clipper.getBlitter() == nullptr) { // clipped out
1241 if (isInverse) {
1242 blitter->blitRegion(*clipRgn);
1243 }
1244 return;
1245 }
1246
1247 // now use the (possibly wrapped) blitter
1248 blitter = clipper.getBlitter();
1249
1250 if (isInverse) {
1251 // Currently, we use the old path to render the inverse path,
1252 // so we don't need this.
1253 // sk_blit_above(blitter, ir, *clipRgn);
1254 }
1255
1256 SkASSERT(SkIntToScalar(ir.fTop) <= path.getBounds().fTop);
1257
1258 if (MaskAdditiveBlitter::canHandleRect(ir) && !isInverse) {
1259 MaskAdditiveBlitter additiveBlitter(blitter, ir, *clipRgn, isInverse);
1260         aaa_fill_path(path, clipRect, &additiveBlitter, ir.fTop, ir.fBottom, *clipRgn, true);
1261 } else {
1262         RunBasedAdditiveBlitter additiveBlitter(blitter, ir, *clipRgn, isInverse);
1263         aaa_fill_path(path, clipRect, &additiveBlitter, ir.fTop, ir.fBottom, *clipRgn, false);
1264 }
1265
1266 if (isInverse) {
1267 // Currently, we use the old path to render the inverse path,
1268 // so we don't need this.
1269 // sk_blit_below(blitter, ir, *clipRgn);
1270 }
1271 }
1272
1273 // This almost copies SkScan::AntiFillPath
1274 void SkScan::AAAFillPath(const SkPath& path, const SkRasterClip& clip, SkBlitter* blitter) {
1275 if (clip.isEmpty()) {
1276 return;
1277 }
1278
1279 if (clip.isBW()) {
1280 AAAFillPath(path, clip.bwRgn(), blitter);
1281 } else {
1282 SkRegion tmp;
1283 SkAAClipBlitter aaBlitter;
1284
1285 tmp.setRect(clip.getBounds());
1286 aaBlitter.init(blitter, &clip.aaRgn());
1287 AAAFillPath(path, tmp, &aaBlitter);
1288 }
1289 }