Chromium Code Reviews

Created: 6 years, 7 months ago by RyanS
Modified: 6 years, 7 months ago
CC: chromium-reviews, jam, alecflett, ericu+idb_chromium.org, darin-cc_chromium.org, cmumford, dgrogan, jsbell+idb_chromium.org
Base URL: svn://svn.chromium.org/chrome/trunk/src
Visibility: Public.
Description

Revert of "Add most of the metadata-handling code for blobs. It's not quite all there, but" (https://codereview.chromium.org/266333002/)
Reason for revert:
Causing timeouts on indexeddb_perf tests. Log:
http://build.chromium.org/p/chromium.perf/builders/Win%207%20Low-End%20Perf%20%281%29/builds/12354/steps/indexeddb_perf/logs/stdio
Original issue's description:
> Add most of the metadata-handling code for blobs. It's not quite all there, but
> this is the biggest chunk I can pull out vaguely cleanly. It does contain a
> couple of fake calls to keep the compiler happy.
> This CL also makes SetUpMetadata a member in order to ease testing in a later CL.
>
> This depends on https://codereview.chromium.org/261843004/.
>
> BUG=108012
>
> Committed: https://src.chromium.org/viewvc/chrome?view=rev&revision=270016
TBR=cmumford@chromium.org,jsbell@chromium.org,ericu@chromium.org
NOTREECHECKS=true
NOTRY=true
BUG=108012
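For readers skimming the patch below: the original CL's main structural change was converting SetUpMetadata from a static free function into an IndexedDBBackingStore member (per its description, to ease testing in a later CL), and this revert switches it back. The following is a minimal, hypothetical sketch of the two shapes only; aside from the SetUpMetadata name and its LevelDBDatabase/origin-identifier parameters, which appear in the diff, every type and body here is a simplified stand-in rather than the real Chromium code.

// Hypothetical, simplified contrast of the two SetUpMetadata shapes; the
// actual schema work is elided in both.
#include <string>

struct LevelDBDatabase {};  // stand-in for the real LevelDB wrapper

// Pre-CL / post-revert shape: a static helper that is handed every dependency.
static bool SetUpMetadata(LevelDBDatabase* db,
                          const std::string& origin_identifier) {
  // ... open a transaction on |db|, read and migrate the schema keys ...
  return db != nullptr && !origin_identifier.empty();
}

// Shape from the reverted CL: a member function, so a unit test can build a
// backing store with an injected database and call the method directly
// instead of threading parameters through a free function.
class IndexedDBBackingStore {
 public:
  explicit IndexedDBBackingStore(LevelDBDatabase* db) : db_(db) {}
  bool SetUpMetadata() {
    // ... same work, but using the db_ and origin_identifier_ members ...
    return db_ != nullptr;
  }

 private:
  LevelDBDatabase* db_;
  std::string origin_identifier_;
};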
Patch Set 1
Created: 6 years, 7 months ago
(Patch set is too large to download)
Messages
Total messages: 12 (0 generated)
Created Revert of "Add most of the metadata-handling code for blobs. It's not quite all there, but"
CQ is trying da patch. Follow status at https://chromium-status.appspot.com/cq/rschoen@chromium.org/280703007/1
The CQ bit was unchecked by commit-bot@chromium.org
No LGTM from a valid reviewer yet. Only full committers are accepted. Even if an LGTM may have been provided, it was from a non-committer or a provisional committer, _not_ a full super star committer. See http://www.chromium.org/getting-involved/become-a-committer Note that this has nothing to do with OWNERS files.
lgtm
The CQ bit was unchecked by commit-bot@chromium.org
Failed to apply patch for
content/browser/indexed_db/indexed_db_backing_store.cc:
While running patch -p1 --forward --force --no-backup-if-mismatch;
patching file content/browser/indexed_db/indexed_db_backing_store.cc
Hunk #1 FAILED at 128.
Hunk #2 FAILED at 306.
Hunk #3 FAILED at 344.
Hunk #4 FAILED at 370.
Hunk #5 FAILED at 383.
Hunk #6 FAILED at 414.
Hunk #7 FAILED at 496.
Hunk #8 FAILED at 637.
Hunk #9 FAILED at 713.
Hunk #10 FAILED at 763.
Hunk #11 FAILED at 1049.
Hunk #12 FAILED at 1121.
Hunk #13 FAILED at 1216.
Hunk #14 FAILED at 1294.
Hunk #15 FAILED at 1332.
Hunk #16 FAILED at 1396.
Hunk #17 FAILED at 1427.
Hunk #18 FAILED at 1739.
Hunk #19 FAILED at 1813.
Hunk #20 FAILED at 1909.
Hunk #21 FAILED at 2512.
Hunk #22 FAILED at 2899.
Hunk #23 FAILED at 2915.
Hunk #24 FAILED at 3111.
Hunk #25 FAILED at 3175.
Hunk #26 FAILED at 3231.
Hunk #27 FAILED at 3243.
Hunk #28 FAILED at 3323.
Hunk #29 FAILED at 3346.
Hunk #30 FAILED at 3356.
Hunk #31 FAILED at 3429.
Hunk #32 FAILED at 3437.
Hunk #33 FAILED at 3460.
Hunk #34 FAILED at 3634.
Hunk #35 FAILED at 3661.
Hunk #36 FAILED at 3691.
Hunk #37 FAILED at 3719.
37 out of 37 hunks FAILED -- saving rejects to file
content/browser/indexed_db/indexed_db_backing_store.cc.rej
Patch: content/browser/indexed_db/indexed_db_backing_store.cc
Index: content/browser/indexed_db/indexed_db_backing_store.cc
diff --git a/content/browser/indexed_db/indexed_db_backing_store.cc b/content/browser/indexed_db/indexed_db_backing_store.cc
index 1cf95619fee41503306b7e72b8b3c8488125eda4..36e514ad2afef6fd7054cb7c13a3887dfad06ce9 100644
--- a/content/browser/indexed_db/indexed_db_backing_store.cc
+++ b/content/browser/indexed_db/indexed_db_backing_store.cc
@@ -128,8 +128,6 @@
CLEAR_OBJECT_STORE,
READ_BLOB_JOURNAL,
DECODE_BLOB_JOURNAL,
- GET_BLOB_KEY_GENERATOR_CURRENT_NUMBER,
- GET_BLOB_INFO_FOR_RECORD,
INTERNAL_ERROR_MAX,
};
@@ -306,8 +304,7 @@
// 0 - Initial version.
// 1 - Adds UserIntVersion to DatabaseMetaData.
// 2 - Adds DataVersion to to global metadata.
-// 3 - Adds metadata needed for blob support.
-static const int64 kLatestKnownSchemaVersion = 3;
+static const int64 kLatestKnownSchemaVersion = 2;
WARN_UNUSED_RESULT static bool IsSchemaKnown(LevelDBDatabase* db, bool* known)
{
int64 db_schema_version = 0;
bool found = false;
@@ -344,16 +341,15 @@
return true;
}
-// TODO(ericu): Move this down into the member section of this file. I'm
-// leaving it here for this CL as it's easier to see the diffs in place.
-WARN_UNUSED_RESULT bool IndexedDBBackingStore::SetUpMetadata() {
+WARN_UNUSED_RESULT static bool SetUpMetadata(
+ LevelDBDatabase* db,
+ const std::string& origin_identifier) {
const uint32 latest_known_data_version =
blink::kSerializedScriptValueVersion;
const std::string schema_version_key = SchemaVersionKey::Encode();
const std::string data_version_key = DataVersionKey::Encode();
- scoped_refptr<LevelDBTransaction> transaction =
- new LevelDBTransaction(db_.get());
+ scoped_refptr<LevelDBTransaction> transaction = new LevelDBTransaction(db);
int64 db_schema_version = 0;
int64 db_data_version = 0;
@@ -370,12 +366,6 @@
PutInt(transaction.get(), schema_version_key, db_schema_version);
db_data_version = latest_known_data_version;
PutInt(transaction.get(), data_version_key, db_data_version);
- // If a blob directory already exists for this database, blow it away. It's
- // leftover from a partially-purged previous generation of data.
- if (!base::DeleteFile(blob_path_, true)) {
- INTERNAL_WRITE_ERROR_UNTESTED(SET_UP_METADATA);
- return false;
- }
} else {
// Upgrade old backing store.
DCHECK_LE(db_schema_version, kLatestKnownSchemaVersion);
@@ -383,10 +373,10 @@
db_schema_version = 1;
PutInt(transaction.get(), schema_version_key, db_schema_version);
const std::string start_key =
- DatabaseNameKey::EncodeMinKeyForOrigin(origin_identifier_);
+ DatabaseNameKey::EncodeMinKeyForOrigin(origin_identifier);
const std::string stop_key =
- DatabaseNameKey::EncodeStopKeyForOrigin(origin_identifier_);
- scoped_ptr<LevelDBIterator> it = db_->CreateIterator();
+ DatabaseNameKey::EncodeStopKeyForOrigin(origin_identifier);
+ scoped_ptr<LevelDBIterator> it = db->CreateIterator();
for (s = it->Seek(start_key);
s.ok() && it->IsValid() && CompareKeys(it->Key(), stop_key) < 0;
s = it->Next()) {
@@ -414,13 +404,6 @@
db_data_version = blink::kSerializedScriptValueVersion;
PutInt(transaction.get(), data_version_key, db_data_version);
}
- if (db_schema_version < 3) {
- db_schema_version = 3;
- if (!base::DeleteFile(blob_path_, true)) {
- INTERNAL_WRITE_ERROR_UNTESTED(SET_UP_METADATA);
- return false;
- }
- }
}
if (!s.ok()) {
@@ -496,55 +479,6 @@
return LevelDBDatabase::Destroy(file_name);
}
};
-
-static bool GetBlobKeyGeneratorCurrentNumber(
- LevelDBTransaction* leveldb_transaction,
- int64 database_id,
- int64* blob_key_generator_current_number) {
- const std::string key_gen_key = DatabaseMetaDataKey::Encode(
- database_id, DatabaseMetaDataKey::BLOB_KEY_GENERATOR_CURRENT_NUMBER);
-
- // Default to initial number if not found.
- int64 cur_number = DatabaseMetaDataKey::kBlobKeyGeneratorInitialNumber;
- std::string data;
-
- bool found = false;
- bool ok = leveldb_transaction->Get(key_gen_key, &data, &found).ok();
- if (!ok) {
- INTERNAL_READ_ERROR_UNTESTED(GET_BLOB_KEY_GENERATOR_CURRENT_NUMBER);
- return false;
- }
- if (found) {
- StringPiece slice(data);
- if (!DecodeVarInt(&slice, &cur_number) || !slice.empty() ||
- !DatabaseMetaDataKey::IsValidBlobKey(cur_number)) {
- INTERNAL_READ_ERROR_UNTESTED(GET_BLOB_KEY_GENERATOR_CURRENT_NUMBER);
- return false;
- }
- }
- *blob_key_generator_current_number = cur_number;
- return true;
-}
-
-static bool UpdateBlobKeyGeneratorCurrentNumber(
- LevelDBTransaction* leveldb_transaction,
- int64 database_id,
- int64 blob_key_generator_current_number) {
-#ifndef NDEBUG
- int64 old_number;
- if (!GetBlobKeyGeneratorCurrentNumber(
- leveldb_transaction, database_id, &old_number))
- return false;
- DCHECK_LT(old_number, blob_key_generator_current_number);
-#endif
- DCHECK(
- DatabaseMetaDataKey::IsValidBlobKey(blob_key_generator_current_number));
- const std::string key = DatabaseMetaDataKey::Encode(
- database_id, DatabaseMetaDataKey::BLOB_KEY_GENERATOR_CURRENT_NUMBER);
-
- PutVarInt(leveldb_transaction, key, blob_key_generator_current_number);
- return true;
-}
// TODO(ericu): Error recovery. If we persistently can't read the
// blob journal, the safe thing to do is to clear it and leak the blobs,
@@ -637,64 +571,6 @@
EncodeBlobJournal(journal, &data);
leveldb_transaction->Put(key, &data);
return leveldb::Status::OK();
-}
-
-// Blob Data is encoded as a series of:
-// { is_file [bool], key [int64 as varInt],
-// type [string-with-length, may be empty],
-// (for Blobs only) size [int64 as varInt]
-// (for Files only) fileName [string-with-length]
-// }
-// There is no length field; just read until you run out of data.
-static std::string EncodeBlobData(
- const std::vector<IndexedDBBlobInfo*>& blob_info) {
- std::string ret;
- std::vector<IndexedDBBlobInfo*>::const_iterator iter;
- for (iter = blob_info.begin(); iter != blob_info.end(); ++iter) {
- const IndexedDBBlobInfo& info = **iter;
- EncodeBool(info.is_file(), &ret);
- EncodeVarInt(info.key(), &ret);
- EncodeStringWithLength(info.type(), &ret);
- if (info.is_file())
- EncodeStringWithLength(info.file_name(), &ret);
- else
- EncodeVarInt(info.size(), &ret);
- }
- return ret;
-}
-
-static bool DecodeBlobData(const std::string& data,
- std::vector<IndexedDBBlobInfo>* output) {
- std::vector<IndexedDBBlobInfo> ret;
- output->clear();
- StringPiece slice(data);
- while (!slice.empty()) {
- bool is_file;
- int64 key;
- base::string16 type;
- int64 size;
- base::string16 file_name;
-
- if (!DecodeBool(&slice, &is_file))
- return false;
- if (!DecodeVarInt(&slice, &key) ||
- !DatabaseMetaDataKey::IsValidBlobKey(key))
- return false;
- if (!DecodeStringWithLength(&slice, &type))
- return false;
- if (is_file) {
- if (!DecodeStringWithLength(&slice, &file_name))
- return false;
- ret.push_back(IndexedDBBlobInfo(key, type, file_name));
- } else {
- if (!DecodeVarInt(&slice, &size) || size < 0)
- return false;
- ret.push_back(IndexedDBBlobInfo(type, static_cast<uint64>(size), key));
- }
- }
- output->swap(ret);
-
- return true;
}
IndexedDBBackingStore::IndexedDBBackingStore(
@@ -713,8 +589,7 @@
task_runner_(task_runner),
db_(db.Pass()),
comparator_(comparator.Pass()),
- active_blob_registry_(this) {
-}
+ active_blob_registry_(this) {}
IndexedDBBackingStore::~IndexedDBBackingStore() {
if (!blob_path_.empty() && !child_process_ids_granted_.empty()) {
@@ -763,7 +638,6 @@
INDEXED_DB_BACKING_STORE_OPEN_ORIGIN_TOO_LONG,
INDEXED_DB_BACKING_STORE_OPEN_NO_RECOVERY,
INDEXED_DB_BACKING_STORE_OPEN_FAILED_PRIOR_CORRUPTION,
- INDEXED_DB_BACKING_STORE_OPEN_FAILED_CLEANUP_JOURNAL_ERROR,
INDEXED_DB_BACKING_STORE_OPEN_MAX,
};
@@ -1049,22 +923,13 @@
return scoped_refptr<IndexedDBBackingStore>();
}
- scoped_refptr<IndexedDBBackingStore> backing_store =
- Create(indexed_db_factory,
- origin_url,
- blob_path,
- request_context,
- db.Pass(),
- comparator.Pass(),
- task_runner);
-
- if (clean_journal && backing_store &&
- !bac…
(message too large)
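The SetUpMetadata hunks above follow a common incremental-upgrade pattern: each "if (db_schema_version < N)" block performs one migration step and bumps the stored version, so an old backing store is walked forward one schema version at a time (the reverted CL added a "< 3" step for blob metadata, which the revert removes). Below is a self-contained, hypothetical sketch of that gating pattern only; it uses stand-in types, not the real LevelDB-backed metadata, and every migration body is elided.

// Hypothetical illustration: the real code reads and writes the version
// through a LevelDBTransaction; only the version-gating ladder is shown.
#include <cstdint>

struct FakeMetadata {
  int64_t schema_version = 0;  // 0 means a freshly created store
};

const int64_t kLatestKnownSchemaVersion = 2;  // 3 in the reverted CL

bool UpgradeSchema(FakeMetadata* md) {
  if (md->schema_version > kLatestKnownSchemaVersion)
    return false;  // schema written by a newer build; refuse to touch it
  if (md->schema_version < 1) {
    // v1: adds UserIntVersion to DatabaseMetaData (migration elided).
    md->schema_version = 1;
  }
  if (md->schema_version < 2) {
    // v2: adds DataVersion to the global metadata (migration elided).
    md->schema_version = 2;
  }
  // The reverted CL appended an analogous "< 3" step that also cleared any
  // leftover blob directory before recording schema version 3.
  return true;
}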
The CQ bit was checked by rschoen@chromium.org
CQ is trying da patch. Follow status at https://chromium-status.appspot.com/cq/rschoen@chromium.org/280703007/1
The CQ bit was unchecked by commit-bot@chromium.org
Failed to apply patch for content/browser/indexed_db/indexed_db_backing_store.cc: 37 out of 37 hunks FAILED again, with the same rejects and patch listing as the earlier attempt.
Already reverted, closing issue.
