Chromium Code Reviews
Side by Side Diff: scripts/bucket_relocate.sh

Issue 698893003: Update checked in version of gsutil to version 4.6 (Closed) Base URL: http://dart.googlecode.com/svn/third_party/gsutil/
Patch Set: Created 6 years, 1 month ago
1 #!/bin/bash
2 # Copyright 2013 Google Inc. All Rights Reserved.
3 #
4 # Licensed under the Apache License, Version 2.0 (the "License");
5 # you may not use this file except in compliance with the License.
6 # You may obtain a copy of the License at
7 #
8 # http://www.apache.org/licenses/LICENSE-2.0
9 #
10 # Unless required by applicable law or agreed to in writing, software
11 # distributed under the License is distributed on an "AS IS" BASIS,
12 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13 # See the License for the specific language governing permissions and
14 # limitations under the License.
15
16 function Usage {
17 cat << EOF
18 bucket_relocate - relocates buckets in Google Cloud Storage
19
20 This script can be used to migrate one or more buckets to a different
21 location and/or storage class. It operates in two stages: In stage 1, a
22 temporary bucket is created in the new location/storage class corresponding
23 to each bucket being migrated, and data are copied from the original to the
24 new bucket(s). In stage 2 any newly created data are copied from the original to
25 the temporary bucket(s), the original buckets are deleted and recreated in the
26 new location/storage class, data are copied from the temporary to the re-created
27 bucket(s), and the temporary bucket(s) deleted. Because both stages 1 and 2 use
28 copy-in-the-cloud, the run time depends primarily on the number of
29 objects in the bucket, rather than the total size of the objects. You should
30 ensure that no reads or writes occur to your bucket during the brief period
31 while stage 2 runs.
32
33 To ensure that all data are correctly copied from the source to the temporary
34 bucket, we recommend running stage 1 first, and then comparing the source and
35 temporary buckets by executing:
36
37 gsutil ls -L gs://yourbucket > ls.1
38 gsutil ls -L gs://yourbucket-relocate > ls.2
39 # Use some program that visually highlights diffs, such as:
40 vimdiff ls.1 ls.2
41
42 Starting conditions:
43 You must have at least version 4.0 of bash and version 3.35 of gsutil installed,
44 with credentials (in your .boto config file) that have FULL_CONTROL access to all
45 buckets and objects being migrated. If this script is run using credentials that
46 lack these permissions it will fail part-way through, at which point you will
47 need to change the ACLs of the affected objects and re-run the script. (The
48 script keeps track of what it has completed, so you can re-run it after an
49 interruption or problem.) If you specify the -v option the script will check all
50 permissions before starting the migration (which takes time, because it performs
51 a HEAD on each object as well as a GET on the object's ?acl subresource). If you
52 do use the -v option it's possible the script will find no problems, begin the
53 migration, and then encounter permission problems because of objects that are
54 uploaded after the script begins. If that happens the script will fail part-way
55 through and you will need to change the object ACLs and re-run the script.
56
57 If you need to change ACLs you can do so using a command like:
58
59 gsutil acl ch -u scriptuser@gmail.com:FC gs://bucket/object1 gs://bucket/object2 ...
60
61 where scriptuser@gmail.com is the identity for which your credentials are
62 configured.
63
64 Caveats:
65 1) If an object is deleted from the original bucket after it has been processed
66 in stage 1, that object will not be deleted during stage 2.
67 2) If an object is overwritten after it has been processed in stage 1, that
68 change will not be re-copied during stage 2.
69 3) Object change notification configuration is not preserved by this migration
70 script.
71 4) Restored objects in versioned buckets will preserve the version ordering but
72 not version numbers. For example, if the original bucket contained:
73 gs://bucket/obj#1340448460830000 and gs://bucket/obj#1350448460830000
74 the restored bucket might have objects with these versions:
75 gs://bucket/obj#1360448460830000 and gs://bucket/obj#1370448460830000
76 Beware of this caveat if you have code that stores the version-ful name
77 of objects (e.g., in a database).
78 5) Buckets with names longer than 55 characters cannot be migrated.
79 This is because the resulting temporary bucket name will be too long (>63
80 characters).
81 6) This script stores state in ~/bucketrelo/. Please do not remove this
82 directory until the script has completed successfully.
83
84 If your application overwrites or deletes objects, we recommend disabling all
85 writes while running both stages.
86
87 Usage:
88 bucket_relocate.sh STAGE [OPTION]... bucket...
89
90 Examples:
91 bucket_relocate.sh -2 gs://mybucket1 gs://mybucket2
92
93 STAGE
94 Determines which stage of the migration to execute:
95 -1 run stage 1 - during this stage users can still add objects to
96 the bucket(s) being migrated.
97 -2 run stage 2 - during this stage no users should add or modify
98 any objects in the bucket(s) being migrated.
99 -A run stage 1 and stage 2 back-to-back - use this option if you
100 are guaranteed that no users will be making changes to the
101 bucket throughout the entire process.
102 Please note that during both stages users should not delete or overwrite
103 objects in the buckets being migrated, because these changes will not be
104 detected.
105
106 OPTIONS
107 -? show this usage information.
108
109 -c <class> sets the storage class of the destination bucket.
110 Example storage classes:
111 S - Standard (default)
112 DRA - Durable Reduced Availability storage.
113
114 -l <location> sets the location of the destination bucket.
115 Example locations:
116 US - United States (default)
117 EU - European Union
118
119 -v Verify that the credentials being used have write access to all
120 buckets being migrated and read access to all objects within
121 those buckets.
122
123 Multiple buckets can be specified if more than one bucket needs to be
124 relocated. This can be done as follows:
125
126 bucket_relocate.sh -A gs://bucket01 gs://bucket02 gs://bucket03
127
128 To relocate all buckets in a given project, you could do the following:
129
130 gsutil ls -p project-id | xargs bucket_relocate.sh -A -c DRA -l EU
131
132 EOF
133 }
134
135 buckets=()
136 tempbuckets=()
137 stage=-1
138 location=''
139 class=''
140 extra_verification=false
141
142 basedir=~/bucketrelo
143 manifest=$basedir/relocate-manifest-
144 steplog=$basedir/relocate-step-
145 debugout=$basedir/relocate-debug-$(date -d "today" +"%Y%m%d%H%M%S").log
146 permcheckout=$basedir/relocate-permcheck-
147 metadefacl=$basedir/relocate-defacl-for-
148 metawebcfg=$basedir/relocate-webcfg-for-
149 metalogging=$basedir/relocate-logging-for-
150 metacors=$basedir/relocate-cors-for-
151 metavers=$basedir/relocate-vers-for-
152 metalifecycle=$basedir/relocate-lifecycle-for-
153
154 # This script requires Bash 4.0 or higher
155 if [ ${BASH_VERSION:0:1} -lt 4 ]; then
156 echo "This script requires bash version 4 or higher." 1>&2;
157 exit 1
158 fi
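Reviewer note: the check above slices only the first character of `$BASH_VERSION`, so a hypothetical bash 10 would be misread as major version 1. A minimal sketch of a parse that extracts the whole major component (the `bash_major` helper is illustrative, not part of the script):

```shell
#!/bin/bash
# bash_major: print the major component of a version string such as
# "4.2.46(2)-release". Purely illustrative; the script itself slices
# ${BASH_VERSION:0:1}, which would truncate a two-digit major.
bash_major() {
  local ver=$1
  echo "${ver%%.*}"   # strip everything from the first dot onward
}

bash_major "4.2.46(2)-release"   # prints 4
bash_major "10.0.1"              # prints 10
```

In the script itself, `${BASH_VERSINFO[0]}` would give the numeric major directly.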
159
160 # Create the working directory where we store all the temporary state.
161 if [ ! -d $basedir ]; then
162 mkdir $basedir
163 if [ $? -ne 0 ]; then
164 echo "Could not create $basedir."
165 exit 1
166 fi
167 fi
168
169
170 function ParallelIfNoVersioning() {
171 versioning=`$gsutil versioning get $1 | head -1`
172 if [ "$versioning" == '' ]; then
173 EchoErr "Failed to retrieve versioning information for $1"
174 exit 1
175 fi
176 vpos=$((${#src} + 2))
177 versioning=${versioning:vpos}
178 if [ "$versioning" == 'Enabled' ]; then
179 echo "$src has versioning enabled, so we have to copy all objects "\
180 "sequentially, to preserve the object version ordering."
181 parallel_if_no_versioning=""
182 else
183 parallel_if_no_versioning="-m"
184 fi
185 }
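Reviewer note: the substring arithmetic in ParallelIfNoVersioning relies on `gsutil versioning get` printing a first line of the form `gs://bucket: Enabled`. A self-contained sketch of that slicing against a canned line (the `line` value is a stand-in, not live gsutil output):

```shell
#!/bin/bash
# Slice the status out of a canned "gsutil versioning get"-style line.
src="gs://mybucket"
line="$src: Enabled"        # stand-in for the command's first output line
vpos=$((${#src} + 2))       # skip the bucket name, the colon, and the space
versioning=${line:vpos}
echo "$versioning"          # prints Enabled
```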
186
187 function DeleteBucketWithRetry() {
188 # Add some retries as occasionally the object deletes need to filter
189 # through the system.
190 attempt=0
191 success=false
192 while [ $success == false ]; do
193 result=$( ($gsutil -m rm -Ra $1/*) 2>&1 )
194 if [ $? -ne 0 ]; then
195 if [[ "$result" != *No\ URIs\ matched* ]]; then
196 EchoErr "Failed to delete the objects from bucket: $1"
197 exit 1
198 fi
199 fi
200 result=$( ($gsutil rb $1) 2>&1 )
201 if [ $? -ne 0 ]; then
202 if [[ "$result" == *code=BucketNotEmpty* ]]; then
203 attempt=$(( $attempt+1 ))
204 if [ $attempt -gt 30 ]; then
205 EchoErr "Failed to remove the bucket: $1"
206 exit 1
207 else
208 EchoErr "Waiting for buckets to empty."
209 sleep 10s
210 fi
211 else
212 EchoErr "Failed to remove the bucket: $1"
213 exit 1
214 fi
215 else
216 success=true
217 fi
218 done
219 }
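Reviewer note: the delete-and-retry loop above can be generalized. A sketch with a hypothetical `retry_cmd` helper, exercised against a stub command that fails twice before succeeding (standing in for the eventually-consistent bucket delete):

```shell
#!/bin/bash
# retry_cmd MAX CMD...: run CMD until it succeeds or MAX attempts pass.
retry_cmd() {
  local max=$1; shift
  local attempt=0
  until "$@"; do
    attempt=$((attempt + 1))
    if [ "$attempt" -ge "$max" ]; then
      return 1
    fi
    # The real script sleeps 10s between attempts; omitted here.
  done
  return 0
}

tries=0
flaky() {                    # stub: fails twice, then succeeds
  tries=$((tries + 1))
  [ "$tries" -ge 3 ]
}

retry_cmd 30 flaky && echo "succeeded after $tries attempts"
```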
220
221 function EchoErr() {
222 # echo the function parameters to stderr.
223 echo "$@" 1>&2;
224 echo "ERROR -- $@" >> $debugout
225 }
226
227 function LastStep() {
228 short_name=${1:5}
229 if [ -f $steplog$short_name ]; then
230 echo `cat $steplog$short_name`
231 else
232 echo 0
233 fi
234 }
235
236 function LogStepStart() {
237 echo $1
238 echo "START -- $1" >> $debugout
239 }
240
241 function LogStepEnd() {
242 # $1 = bucket name, $2 = step number
243 short_name=${1:5}
244 echo $2 > $steplog$short_name
245 echo "END -- $1" >> $debugout
246 }
247
248 function CheckBucketExists() {
249 # Strip out gs://, so can use bucket name as part of filename.
250 bucket=`echo $1 | sed 's/.....//'`
251 # Redirect stderr so we can check for permission denied.
252 $gsutil versioning get $1 &> $basedir/bucketcheck.$bucket
253 if [ $? -eq 0 ]; then
254 result="Exist"
255 else
256 grep -q AccessDenied $basedir/bucketcheck.$bucket
257 if [ $? -eq 0 ]; then
258 result="AccessDenied"
259 else
260 result="NotExist"
261 fi
262 fi
263 cat $basedir/bucketcheck.$bucket >> $debugout
264 rm $basedir/bucketcheck.$bucket
265 echo $result
266 }
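Reviewer note: the `sed 's/.....//'` above forks a process just to drop the first five characters (the `gs://` prefix). The same stripping can be done with parameter expansion alone, as sketched here (`bucket_short` is an illustrative helper, not in the script):

```shell
#!/bin/bash
# Strip the "gs://" scheme from a bucket URI without forking sed.
bucket_short() {
  echo "${1#gs://}"   # remove the literal gs:// prefix if present
}

bucket_short "gs://mybucket"   # prints mybucket
```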
267
268 # Parse command line arguments
269 while getopts ":?12Ac:l:v" opt; do
270 case $opt in
271 A)
272 # Using -A will make stage 1 and 2 run back-to-back
273 if [ $stage != -1 ]; then
274 EchoErr "Only a single stage can be set."
275 exit 1
276 fi
277 stage=0
278 ;;
279 1)
280 if [ $stage != -1 ]; then
281 EchoErr "Only a single stage can be set."
282 exit 1
283 fi
284 stage=1
285 ;;
286 2)
287 if [ $stage != -1 ]; then
288 EchoErr "Only a single stage can be set."
289 exit 1
290 fi
291 stage=2
292 ;;
293 c)
294 # Sets the storage class, such as S (for Standard) or DRA (for Durable
295 # Reduced Availability)
296 if [ "$class" != '' ]; then
297 EchoErr "Only a single class can be set."
298 exit 1
299 fi
300 class=$OPTARG
301 ;;
302 l)
303 # Sets the location of the bucket. For example: US or EU
304 if [ "$location" != '' ]; then
305 EchoErr "Only a single location can be set."
306 exit 1
307 fi
308 location=$OPTARG
309 ;;
310 v)
311 extra_verification=true
312 ;;
313 ?)
314 Usage
315 exit 0
316 ;;
317 \?)
318 EchoErr "Invalid option: -$OPTARG"
319 exit 1
320 ;;
321 esac
322 done
323
324 shift $(($OPTIND - 1))
325 while test $# -gt 0; do
326 # Buckets must have the gs:// prefix.
327 if [ ${#1} -lt 6 ] || [ "${1:0:5}" != 'gs://' ]; then
328 EchoErr "$1 is not a supported bucket name. Bucket names must start with gs://"
329 exit 1
330 fi
331 # Bucket names must be <= 55 characters long
332 max_length=$(( 55 + 5 )) # + 5 for the prefix
333 if [ ${#1} -gt $max_length ]; then
334 EchoErr "The name of the bucket ($1) is too long."
335 exit 1
336 fi
337 buckets=("${buckets[@]}" ${1%/})
338 tempbuckets=("${tempbuckets[@]}" ${1%/}-relocate)
339 shift
340 done
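Reviewer note: the prefix and length checks in the loop above could be factored into a single predicate for easier testing. A sketch with a hypothetical `valid_bucket` function mirroring the script's rules:

```shell
#!/bin/bash
# valid_bucket NAME: mirror the script's validation rules -- the gs://
# prefix is required and the bare name may be at most 55 characters.
valid_bucket() {
  local name=$1
  [ ${#name} -ge 6 ] || return 1             # longer than the bare prefix
  [ "${name:0:5}" = "gs://" ] || return 1    # must start with gs://
  [ ${#name} -le $((55 + 5)) ] || return 1   # 55 chars + 5 for "gs://"
  return 0
}

valid_bucket "gs://mybucket" && echo "ok"
valid_bucket "mybucket" || echo "rejected"
```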
341
342 num_buckets=${#buckets[@]}
343 if [ $num_buckets -le 0 ]; then
344 Usage
345 exit 1
346 fi
347 if [ $stage == -1 ]; then
348 EchoErr "Stage not specified. Please specify either -A (for all), -1, or -2."
349 exit 1
350 fi
351 if [[ "$location" == '' ]]; then
352 location='US'
353 fi
354 if [[ "$class" == '' ]]; then
355 class='S'
356 fi
357
358 # Display a summary of the options
359 if [ $stage == 0 ]; then
360 echo "Stage: All stages"
361 else
362 echo "Stage: $stage"
363 fi
364 echo "Location: $location"
365 echo "Storage class: $class"
366 echo "Bucket(s): ${buckets[@]}"
367
368 # Check for prerequisites
369 # 1) Check to see if gsutil is installed
370 gsutil=`which gsutil`
371 if [ "$gsutil" == '' ]; then
372 EchoErr "gsutil was not found. Please install it from https://developers.google.com/storage/docs/gsutil_install"
373 exit 1
374 fi
375
376 # 2) Check if gsutil is configured correctly by attempting to list up through
377 # the first bucket from a gsutil ls. We can safely assume there is at least
378 # one bucket otherwise we would not be running this script. Redirect stderr
379 # to /dev/null so if user has a large number of buckets a Broken Pipe error
380 # isn't output.
381 test_bucket=`$gsutil ls 2> /dev/null | head -1`
382 if [ "$test_bucket" == '' ]; then
383 EchoErr "gsutil does not seem to be configured. Please run gsutil config."
384 exit 1
385 fi
386
387 # 3) Checking gsutil version
388 gsutil_version=`$gsutil version`
389 if [ $? -ne 0 ]; then
390 EchoErr "Failed to get version information for gsutil."
391 exit 1
392 fi
393 major=${gsutil_version:15:1}
394 minor=${gsutil_version:17:2}
395 if [ $major -lt 3 ] || ( [ $major -eq 3 ] && [ $minor -lt 35 ] ); then
396 EchoErr "Incorrect version of gsutil. Need 3.35 or greater. Have: $gsutil_version"
397 exit 1
398 fi
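Reviewer note: the fixed offsets `${gsutil_version:15:1}` and `${gsutil_version:17:2}` assume output shaped exactly like `gsutil version 3.35`, and break if either component changes width. A sketch of a position-independent parse, run here against a canned string rather than a live gsutil:

```shell
#!/bin/bash
# Parse major/minor out of a canned "gsutil version X.Y" string.
gsutil_version="gsutil version 3.42"   # stand-in for `gsutil version` output
ver=${gsutil_version##* }              # token after the last space: "3.42"
major=${ver%%.*}                       # "3"
minor=${ver#*.}                        # "42"
echo "$major.$minor"                   # prints 3.42
```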
399
400 function Stage1 {
401 echo 'Now executing stage 1...'
402
403 # For each bucket, do some verifications:
404 for i in ${!buckets[*]}; do
405 bucket=${buckets[$i]}
406 src=$bucket
407
408 # Verify that the source bucket exists.
409 if [ `LastStep "$src"` -eq 0 ]; then
410 LogStepStart "Step 1: ($src) - Verify the bucket exists."
411 result=`CheckBucketExists $src`
412 if [ "$result" == "AccessDenied" ]; then
413 EchoErr "Validation check failed: The account running this script does not have permission to access bucket $bucket"
414 exit 1
415 elif [ "$result" == "NotExist" ]; then
416 EchoErr "Validation check failed: The specified bucket does not exist: $bucket"
417 exit 1
418 fi
419 LogStepEnd $src 1
420 fi
421
422 # Verify that we can read all the objects.
423 if [ `LastStep "$src"` -eq 1 ]; then
424 if $extra_verification ; then
425 LogStepStart "Step 2: ($src) - Check object permissions. This may take a while..."
426 # The following will attempt to HEAD each object in the bucket, to
427 # ensure the credentials running this script have read access to all data
428 # being migrated.
429 short_name=${src:5}
430 $gsutil ls -L $src/** &> $permcheckout$short_name
431 grep -q 'ACCESS DENIED' $permcheckout$short_name
432 if [ $? -eq 0 ]; then
433 EchoErr "Validation failed: Access denied reading an object from $src."
434 EchoErr "Check the log file ($permcheckout$short_name) for more details."
435 exit 1
436 fi
437 LogStepEnd $src 2
438 else
439 LogStepStart "Step 2: ($src) - Skipping object permissions check."
440 LogStepEnd $src 2
441 fi
442 fi
443
444 # Verify WRITE access to the bucket.
445 if [ `LastStep "$src"` -eq 2 ]; then
446 LogStepStart "Step 3: ($src) - Checking write permissions."
447 random_name="relocate_check_`cat /dev/urandom |\
448 LANG=C tr -dc 'a-zA-Z' | head -c 60`"
449 echo 'relocate access check' | $gsutil cp - $src/$random_name &>> $debugout
450 if [ $? -ne 0 ]; then
451 EchoErr "Validation check failed: Access denied writing to $src."
452 exit 1
453 fi
454
455 # Remove the temporary file.
456 $gsutil rm -a $src/$random_name &>> $debugout
457 if [ $? -ne 0 ]; then
458 EchoErr "Validation failed: Could not delete temporary object: $src/$random_name"
459 EchoErr "Check the log file ($debugout) for more details."
460 exit 1
461 fi
462 LogStepEnd $src 3
463 fi
464 done
465
466 # For each bucket, do the processing...
467 for i in ${!buckets[*]}; do
468 src=${buckets[$i]}
469 dst=${tempbuckets[$i]}
470 bman=$manifest${src:5} # The manifest contains the short name of the bucket
471
472 # verify that the bucket does not yet exist and create it in the
473 # correct location with the correct storage class
474 if [ `LastStep "$src"` -eq 3 ]; then
475 LogStepStart "Step 4: ($src) - Create a temporary bucket ($dst)."
476 dst_exists=`CheckBucketExists $dst`
477 if [ "$dst_exists" == "Exist" ]; then
478 EchoErr "The bucket $dst already exists."
479 exit 1
480 else
481 $gsutil mb -l $location -c $class $dst
482 if [ $? -ne 0 ]; then
483 EchoErr "Failed to create the bucket: $dst"
484 exit 1
485 fi
486 fi
487 LogStepEnd $src 4
488 fi
489
490 if [ `LastStep "$src"` -eq 4 ]; then
491 # If the source has versioning, so should the temporary bucket.
492 LogStepStart "Step 5: ($src) - Turn on versioning on the temporary bucket (if needed)."
493 versioning=`$gsutil versioning get $src | head -1`
494 if [ "$versioning" == '' ]; then
495 EchoErr "Failed to retrieve versioning information for $src"
496 exit 1
497 fi
498 vpos=$((${#src} + 2))
499 versioning=${versioning:vpos}
500 if [ "$versioning" == 'Enabled' ]; then
501 # We need to turn this on when we are copying versioned objects.
502 $gsutil versioning set on $dst
503 if [ $? -ne 0 ]; then
504 EchoErr "Failed to turn on versioning on the temporary bucket: $dst"
505 exit 1
506 fi
507 fi
508 LogStepEnd $src 5
509 fi
510
511 # Copy the objects from the source bucket to the temp bucket
512 if [ `LastStep "$src"` -eq 5 ]; then
513 LogStepStart "Step 6: ($src) - Copy objects from source to the temporary bucket ($dst) in the cloud."
514 ParallelIfNoVersioning $src
515 $gsutil $parallel_if_no_versioning cp -R -p -L $bman $src/* $dst/
516 if [ $? -ne 0 ]; then
517 EchoErr "Failed to copy objects from $src to $dst."
518 exit 1
519 fi
520 LogStepEnd $src 6
521 fi
522
523 # Backup the metadata for the bucket
524 if [ `LastStep "$src"` -eq 6 ]; then
525 short_name=${src:5}
526 LogStepStart "Step 7: ($src) - Backup the bucket metadata."
527 $gsutil defacl get $src > $metadefacl$short_name
528 if [ $? -ne 0 ]; then
529 EchoErr "Failed to backup the default ACL configuration for $src"
530 exit 1
531 fi
532 $gsutil web get $src > $metawebcfg$short_name
533 if [ $? -ne 0 ]; then
534 EchoErr "Failed to backup the web configuration for $src"
535 exit 1
536 fi
537 $gsutil logging get $src > $metalogging$short_name
538 if [ $? -ne 0 ]; then
539 EchoErr "Failed to backup the logging configuration for $src"
540 exit 1
541 fi
542 $gsutil cors get $src > $metacors$short_name
543 if [ $? -ne 0 ]; then
544 EchoErr "Failed to backup the CORS configuration for $src"
545 exit 1
546 fi
547 $gsutil versioning get $src > $metavers$short_name
548 if [ $? -ne 0 ]; then
549 EchoErr "Failed to backup the versioning configuration for $src"
550 exit 1
551 fi
552 versioning=`cat $metavers$short_name | head -1`
553 $gsutil lifecycle get $src > $metalifecycle$short_name
554 if [ $? -ne 0 ]; then
555 EchoErr "Failed to backup the lifecycle configuration for $src"
556 exit 1
557 fi
558 LogStepEnd $src 7
559 fi
560
561
562 done
563
564 if [ $stage == 1 ]; then
565 # Only show this message if we are not running both stages back-to-back.
566 echo 'Stage 1 complete. Please ensure no reads or writes are occurring to your bucket(s) and then run stage 2.'
567 echo 'At this point, you can verify that the objects were correctly copied by doing:'
568 echo ' gsutil ls -L gs://yourbucket > ls.1'
569 echo ' gsutil ls -L gs://yourbucket-relocate > ls.2'
570 echo ' # Use some program that visually highlights diffs:'
571 echo ' vimdiff ls.1 ls.2'
572 fi
573 }
574
575 function Stage2 {
576 echo 'Now executing stage 2...'
577
578 # Make sure all the buckets are at least at step 7 (completed stage 1).
579 for i in ${!buckets[*]}; do
580 src=${buckets[$i]}
581 dst=${tempbuckets[$i]}
582
583 if [ `LastStep "$src"` -lt 7 ]; then
584 EchoErr "Relocation for bucket $src did not complete stage 1. Please rerun stage 1 for this bucket."
585 exit 1
586 fi
587 done
588
589 # For each bucket, do the processing...
590 for i in ${!buckets[*]}; do
591 src=${buckets[$i]}
592 dst=${tempbuckets[$i]}
593 bman=$manifest${src:5}
594
595 # Catch up with any new files.
596 if [ `LastStep "$src"` -eq 7 ]; then
597 LogStepStart "Step 8: ($src) - Catch up any new objects that weren't copied."
598 ParallelIfNoVersioning $src
599 $gsutil $parallel_if_no_versioning cp -R -p -L $bman $src/* $dst/
600 if [ $? -ne 0 ]; then
601 EchoErr "Failed to copy any new objects from $src to $dst"
602 exit 1
603 fi
604 LogStepEnd $src 8
605 fi
606
607 # Remove the old src bucket
608 if [ `LastStep "$src"` -eq 8 ]; then
609 LogStepStart "Step 9: ($src) - Delete the source bucket and objects."
610 DeleteBucketWithRetry $src
611 LogStepEnd $src 9
612 fi
613
614 if [ `LastStep "$src"` -eq 9 ]; then
615 LogStepStart "Step 10: ($src) - Recreate the original bucket."
616 $gsutil mb -l $location -c $class $src
617 if [ $? -ne 0 ]; then
618 EchoErr "Failed to recreate the bucket: $src"
619 exit 1
620 fi
621 LogStepEnd $src 10
622 fi
623
624 if [ `LastStep "$src"` -eq 10 ]; then
625 short_name=${src:5}
626 LogStepStart "Step 11: ($src) - Restore the bucket metadata."
627
628 # defacl
629 $gsutil defacl set $metadefacl$short_name $src
630 if [ $? -ne 0 ]; then
631 EchoErr "Failed to set the default ACL configuration on $src"
632 exit 1
633 fi
634
635 # webcfg
636 page_suffix=`cat $metawebcfg$short_name |\
637 grep -o "<MainPageSuffix>.*</MainPageSuffix>" |\
638 sed -e 's/<MainPageSuffix>//g' -e 's/<\/MainPageSuffix>//g'`
639 if [ "$page_suffix" != '' ]; then page_suffix="-m $page_suffix"; fi
640 error_page=`cat $metawebcfg$short_name |\
641 grep -o "<NotFoundPage>.*</NotFoundPage>" |\
642 sed -e 's/<NotFoundPage>//g' -e 's/<\/NotFoundPage>//g'`
643 if [ "$error_page" != '' ]; then error_page="-e $error_page"; fi
644 $gsutil web set $page_suffix $error_page $src
645 if [ $? -ne 0 ]; then
646 EchoErr "Failed to set the website configuration on $src"
647 exit 1
648 fi
649
650 # logging
651 log_bucket=`cat $metalogging$short_name |\
652 grep -o "<LogBucket>.*</LogBucket>" |\
653 sed -e 's/<LogBucket>//g' -e 's/<\/LogBucket>//g'`
654 if [ "$log_bucket" != '' ]; then log_bucket="-b gs://$log_bucket"; fi
655 log_prefix=`cat $metalogging$short_name |\
656 grep -o "<LogObjectPrefix>.*</LogObjectPrefix>" |\
657 sed -e 's/<LogObjectPrefix>//g' -e 's/<\/LogObjectPrefix>//g'`
658 if [ "$log_prefix" != '' ]; then log_prefix="-o $log_prefix"; fi
659 if [ "$log_prefix" != '' ] && [ "$log_bucket" != '' ]; then
660 $gsutil logging set on $log_bucket $log_prefix $src
661 if [ $? -ne 0 ]; then
662 EchoErr "Failed to set the logging configuration on $src"
663 exit 1
664 fi
665 fi
666
667 # cors
668 $gsutil cors set $metacors$short_name $src
669 if [ $? -ne 0 ]; then
670 EchoErr "Failed to set the CORS configuration on $src"
671 exit 1
672 fi
673
674 # versioning
675 versioning=`cat $metavers$short_name | head -1`
676 vpos=$((${#src} + 2))
677 versioning=${versioning:vpos}
678 if [ "$versioning" == 'Enabled' ]; then
679 $gsutil versioning set on $src
680 if [ $? -ne 0 ]; then
681 EchoErr "Failed to set the versioning configuration on $src"
682 exit 1
683 fi
684 fi
685
686 # lifecycle
687 $gsutil lifecycle set $metalifecycle$short_name $src
688 if [ $? -ne 0 ]; then
689 EchoErr "Failed to set the lifecycle configuration on $src"
690 exit 1
691 fi
692
693 LogStepEnd $src 11
694 fi
695
696 if [ `LastStep "$src"` -eq 11 ]; then
697 LogStepStart "Step 12: ($src) - Copy all objects back to the original bucket (copy in the cloud)."
698 ParallelIfNoVersioning $src
699 $gsutil $parallel_if_no_versioning cp -Rp $dst/* $src/
700 if [ $? -ne 0 ]; then
701 EchoErr "Failed to copy the objects back to the original bucket: $src"
702 exit 1
703 fi
704 LogStepEnd $src 12
705 fi
706
707 if [ `LastStep "$src"` -eq 12 ]; then
708 LogStepStart "Step 13: ($src) - Delete the temporary bucket ($dst)."
709 DeleteBucketWithRetry $dst
710 LogStepEnd $src 13
711 fi
712 done
713
714 # Cleanup for each bucket
715 for i in ${!buckets[*]}; do
716 src=${buckets[$i]}
717 dst=${tempbuckets[$i]}
718
719 if [ `LastStep "$src"` -eq 13 ]; then
720 LogStepStart "Step 14: ($src) - Cleanup."
721 ssrc=${src:5} # short src
722 mv $manifest$ssrc $manifest$ssrc.DONE
723 mv $steplog$ssrc $steplog$ssrc.DONE
724 if [ -f $permcheckout$ssrc ]; then
725 mv $permcheckout$ssrc $permcheckout$ssrc.DONE
726 fi
727 mv $metadefacl$ssrc $metadefacl$ssrc.DONE
728 mv $metawebcfg$ssrc $metawebcfg$ssrc.DONE
729 mv $metalogging$ssrc $metalogging$ssrc.DONE
730 mv $metacors$ssrc $metacors$ssrc.DONE
731 mv $metavers$ssrc $metavers$ssrc.DONE
732 fi
733
734 LogStepStart "($src): Completed."
735 done
736
737 mv $debugout $debugout.DONE
738 }
739
740 if [ $stage == 0 ]; then
741 Stage1
742 Stage2
743 elif [ $stage == 1 ]; then
744 Stage1
745 elif [ $stage == 2 ]; then
746 Stage2
747 fi
748
749