| OLD | NEW |
| 1 // This is a generated file (see the discoveryapis_generator project). | 1 // This is a generated file (see the discoveryapis_generator project). |
| 2 | 2 |
| 3 library googleapis.bigquery.v2; | 3 library googleapis.bigquery.v2; |
| 4 | 4 |
| 5 import 'dart:core' as core; | 5 import 'dart:core' as core; |
| 6 import 'dart:collection' as collection; | 6 import 'dart:collection' as collection; |
| 7 import 'dart:async' as async; | 7 import 'dart:async' as async; |
| 8 import 'dart:convert' as convert; | 8 import 'dart:convert' as convert; |
| 9 | 9 |
| 10 import 'package:_discoveryapis_commons/_discoveryapis_commons.dart' as commons; | 10 import 'package:_discoveryapis_commons/_discoveryapis_commons.dart' as commons; |
| (...skipping 2870 matching lines...) |
| 2881 */ | 2881 */ |
| 2882 core.int skipLeadingRows; | 2882 core.int skipLeadingRows; |
| 2883 /** | 2883 /** |
| 2884 * [Optional] The format of the data files. For CSV files, specify "CSV". For | 2884 * [Optional] The format of the data files. For CSV files, specify "CSV". For |
| 2885 * datastore backups, specify "DATASTORE_BACKUP". For newline-delimited JSON, | 2885 * datastore backups, specify "DATASTORE_BACKUP". For newline-delimited JSON, |
| 2886 * specify "NEWLINE_DELIMITED_JSON". For Avro, specify "AVRO". The default | 2886 * specify "NEWLINE_DELIMITED_JSON". For Avro, specify "AVRO". The default |
| 2887 * value is CSV. | 2887 * value is CSV. |
| 2888 */ | 2888 */ |
| 2889 core.String sourceFormat; | 2889 core.String sourceFormat; |
| 2890 /** | 2890 /** |
| 2891 * [Required] The fully-qualified URIs that point to your data in Google Cloud | 2891 * [Required] The fully-qualified URIs that point to your data in Google |
| 2892 * Storage. Each URI can contain one '*' wildcard character and it must come | 2892 * Cloud. For Google Cloud Storage URIs: Each URI can contain one '*' wildcard |
| 2893 * after the 'bucket' name. | 2893 * character and it must come after the 'bucket' name. Size limits related to |
| 2894 * load jobs apply to external data sources. For Google Cloud Bigtable URIs: |
| 2895 * Exactly one URI can be specified and it has to be a fully specified and valid |
| 2896 * HTTPS URL for a Google Cloud Bigtable table. For Google Cloud Datastore |
| 2897 * backups: Exactly one URI can be specified, and it must end with |
| 2898 * '.backup_info'. Also, the '*' wildcard character is not allowed. |
| 2894 */ | 2899 */ |
| 2895 core.List<core.String> sourceUris; | 2900 core.List<core.String> sourceUris; |
| 2896 /** | 2901 /** |
| 2897 * [Optional] Specifies the action that occurs if the destination table | 2902 * [Optional] Specifies the action that occurs if the destination table |
| 2898 * already exists. The following values are supported: WRITE_TRUNCATE: If the | 2903 * already exists. The following values are supported: WRITE_TRUNCATE: If the |
| 2899 * table already exists, BigQuery overwrites the table data. WRITE_APPEND: If | 2904 * table already exists, BigQuery overwrites the table data. WRITE_APPEND: If |
| 2900 * the table already exists, BigQuery appends the data to the table. | 2905 * the table already exists, BigQuery appends the data to the table. |
| 2901 * WRITE_EMPTY: If the table already exists and contains data, a 'duplicate' | 2906 * WRITE_EMPTY: If the table already exists and contains data, a 'duplicate' |
| 2902 * error is returned in the job result. The default value is WRITE_APPEND. | 2907 * error is returned in the job result. The default value is WRITE_APPEND. |
| 2903 * Each action is atomic and only occurs if BigQuery is able to complete the | 2908 * Each action is atomic and only occurs if BigQuery is able to complete the |
| (...skipping 218 matching lines...) |
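The sourceUris doc comment above describes distinct rules per source type; for Google Cloud Storage, each URI may contain at most one '*' wildcard, and it must come after the bucket name. A minimal sketch of that GCS rule (not part of the generated client; the function name and structure are hypothetical):

```python
def validate_gcs_source_uri(uri):
    """Check a Google Cloud Storage URI against the load-job wildcard rule:
    at most one '*' wildcard, and it must come after the bucket name."""
    prefix = "gs://"
    if not uri.startswith(prefix):
        return False
    rest = uri[len(prefix):]          # "<bucket>/<object path>"
    if rest.count("*") > 1:
        return False                  # at most one wildcard allowed
    if "*" in rest:
        bucket, sep, _obj = rest.partition("/")
        if "*" in bucket or not sep:
            return False              # wildcard must come after the bucket name
    return True

print(validate_gcs_source_uri("gs://my-bucket/data/*.csv"))   # True
print(validate_gcs_source_uri("gs://my-*/data.csv"))          # False
print(validate_gcs_source_uri("gs://my-bucket/a*b*.csv"))     # False
```

The Datastore-backup rule (exactly one URI, ending in '.backup_info', no wildcard) would be a separate check along the same lines.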
| 3122 * [Optional] If querying an external data source outside of BigQuery, | 3127 * [Optional] If querying an external data source outside of BigQuery, |
| 3123 * describes the data format, location and other properties of the data | 3128 * describes the data format, location and other properties of the data |
| 3124 * source. By defining these properties, the data source can then be queried | 3129 * source. By defining these properties, the data source can then be queried |
| 3125 * as if it were a standard BigQuery table. | 3130 * as if it were a standard BigQuery table. |
| 3126 */ | 3131 */ |
| 3127 core.Map<core.String, ExternalDataConfiguration> tableDefinitions; | 3132 core.Map<core.String, ExternalDataConfiguration> tableDefinitions; |
| 3128 /** | 3133 /** |
| 3129 * Specifies whether to use BigQuery's legacy SQL dialect for this query. The | 3134 * Specifies whether to use BigQuery's legacy SQL dialect for this query. The |
| 3130 * default value is true. If set to false, the query will use BigQuery's | 3135 * default value is true. If set to false, the query will use BigQuery's |
| 3131 * standard SQL: https://cloud.google.com/bigquery/sql-reference/ When | 3136 * standard SQL: https://cloud.google.com/bigquery/sql-reference/ When |
| 3132 * useLegacySql is set to false, the values of allowLargeResults and | 3137 * useLegacySql is set to false, the value of flattenResults is ignored; query |
| 3133 * flattenResults are ignored; query will be run as if allowLargeResults is | 3138 * will be run as if flattenResults is false. |
| 3134 * true and flattenResults is false. | |
| 3135 */ | 3139 */ |
| 3136 core.bool useLegacySql; | 3140 core.bool useLegacySql; |
| 3137 /** | 3141 /** |
| 3138 * [Optional] Whether to look for the result in the query cache. The query | 3142 * [Optional] Whether to look for the result in the query cache. The query |
| 3139 * cache is a best-effort cache that will be flushed whenever tables in the | 3143 * cache is a best-effort cache that will be flushed whenever tables in the |
| 3140 * query are modified. Moreover, the query cache is only available when a | 3144 * query are modified. Moreover, the query cache is only available when a |
| 3141 * query does not have a destination table specified. The default value is | 3145 * query does not have a destination table specified. The default value is |
| 3142 * true. | 3146 * true. |
| 3143 */ | 3147 */ |
| 3144 core.bool useQueryCache; | 3148 core.bool useQueryCache; |
| 3145 /** Describes user-defined function resources used in the query. */ | 3149 /** Describes user-defined function resources used in the query. */ |
| 3146 core.List<UserDefinedFunctionResource> userDefinedFunctionResources; | 3150 core.List<UserDefinedFunctionResource> userDefinedFunctionResources; |
| 3147 /** | 3151 /** |
| 3148 * [Optional] Specifies the action that occurs if the destination table | 3152 * [Optional] Specifies the action that occurs if the destination table |
| 3149 * already exists. The following values are supported: WRITE_TRUNCATE: If the | 3153 * already exists. The following values are supported: WRITE_TRUNCATE: If the |
| 3150 * table already exists, BigQuery overwrites the table data. WRITE_APPEND: If | 3154 * table already exists, BigQuery overwrites the table data and uses the |
| 3151 * the table already exists, BigQuery appends the data to the table. | 3155 * schema from the query result. WRITE_APPEND: If the table already exists, |
| 3152 * WRITE_EMPTY: If the table already exists and contains data, a 'duplicate' | 3156 * BigQuery appends the data to the table. WRITE_EMPTY: If the table already |
| 3153 * error is returned in the job result. The default value is WRITE_EMPTY. Each | 3157 * exists and contains data, a 'duplicate' error is returned in the job |
| 3154 * action is atomic and only occurs if BigQuery is able to complete the job | 3158 * result. The default value is WRITE_EMPTY. Each action is atomic and only |
| 3155 * successfully. Creation, truncation and append actions occur as one atomic | 3159 * occurs if BigQuery is able to complete the job successfully. Creation, |
| 3156 * update upon job completion. | 3160 * truncation and append actions occur as one atomic update upon job |
| 3161 * completion. |
| 3157 */ | 3162 */ |
| 3158 core.String writeDisposition; | 3163 core.String writeDisposition; |
| 3159 | 3164 |
| 3160 JobConfigurationQuery(); | 3165 JobConfigurationQuery(); |
| 3161 | 3166 |
| 3162 JobConfigurationQuery.fromJson(core.Map _json) { | 3167 JobConfigurationQuery.fromJson(core.Map _json) { |
| 3163 if (_json.containsKey("allowLargeResults")) { | 3168 if (_json.containsKey("allowLargeResults")) { |
| 3164 allowLargeResults = _json["allowLargeResults"]; | 3169 allowLargeResults = _json["allowLargeResults"]; |
| 3165 } | 3170 } |
| 3166 if (_json.containsKey("createDisposition")) { | 3171 if (_json.containsKey("createDisposition")) { |
| (...skipping 531 matching lines...) |
| 3698 _json["totalBytesProcessed"] = totalBytesProcessed; | 3703 _json["totalBytesProcessed"] = totalBytesProcessed; |
| 3699 } | 3704 } |
| 3700 if (undeclaredQueryParameters != null) { | 3705 if (undeclaredQueryParameters != null) { |
| 3701 _json["undeclaredQueryParameters"] = undeclaredQueryParameters.map((value)
=> (value).toJson()).toList(); | 3706 _json["undeclaredQueryParameters"] = undeclaredQueryParameters.map((value)
=> (value).toJson()).toList(); |
| 3702 } | 3707 } |
| 3703 return _json; | 3708 return _json; |
| 3704 } | 3709 } |
| 3705 } | 3710 } |
| 3706 | 3711 |
| 3707 class JobStatistics3 { | 3712 class JobStatistics3 { |
| 3713 /** |
| 3714 * [Output-only] The number of bad records encountered. Note that if the job |
| 3715 * has failed because of more bad records encountered than the maximum allowed |
| 3716 * in the load job configuration, then this number can be less than the total |
| 3717 * number of bad records present in the input data. |
| 3718 */ |
| 3719 core.String badRecords; |
| 3708 /** [Output-only] Number of bytes of source data in a load job. */ | 3720 /** [Output-only] Number of bytes of source data in a load job. */ |
| 3709 core.String inputFileBytes; | 3721 core.String inputFileBytes; |
| 3710 /** [Output-only] Number of source files in a load job. */ | 3722 /** [Output-only] Number of source files in a load job. */ |
| 3711 core.String inputFiles; | 3723 core.String inputFiles; |
| 3712 /** | 3724 /** |
| 3713 * [Output-only] Size of the loaded data in bytes. Note that while a load job | 3725 * [Output-only] Size of the loaded data in bytes. Note that while a load job |
| 3714 * is in the running state, this value may change. | 3726 * is in the running state, this value may change. |
| 3715 */ | 3727 */ |
| 3716 core.String outputBytes; | 3728 core.String outputBytes; |
| 3717 /** | 3729 /** |
| 3718 * [Output-only] Number of rows imported in a load job. Note that while an | 3730 * [Output-only] Number of rows imported in a load job. Note that while an |
| 3719 * import job is in the running state, this value may change. | 3731 * import job is in the running state, this value may change. |
| 3720 */ | 3732 */ |
| 3721 core.String outputRows; | 3733 core.String outputRows; |
| 3722 | 3734 |
| 3723 JobStatistics3(); | 3735 JobStatistics3(); |
| 3724 | 3736 |
| 3725 JobStatistics3.fromJson(core.Map _json) { | 3737 JobStatistics3.fromJson(core.Map _json) { |
| 3738 if (_json.containsKey("badRecords")) { |
| 3739 badRecords = _json["badRecords"]; |
| 3740 } |
| 3726 if (_json.containsKey("inputFileBytes")) { | 3741 if (_json.containsKey("inputFileBytes")) { |
| 3727 inputFileBytes = _json["inputFileBytes"]; | 3742 inputFileBytes = _json["inputFileBytes"]; |
| 3728 } | 3743 } |
| 3729 if (_json.containsKey("inputFiles")) { | 3744 if (_json.containsKey("inputFiles")) { |
| 3730 inputFiles = _json["inputFiles"]; | 3745 inputFiles = _json["inputFiles"]; |
| 3731 } | 3746 } |
| 3732 if (_json.containsKey("outputBytes")) { | 3747 if (_json.containsKey("outputBytes")) { |
| 3733 outputBytes = _json["outputBytes"]; | 3748 outputBytes = _json["outputBytes"]; |
| 3734 } | 3749 } |
| 3735 if (_json.containsKey("outputRows")) { | 3750 if (_json.containsKey("outputRows")) { |
| 3736 outputRows = _json["outputRows"]; | 3751 outputRows = _json["outputRows"]; |
| 3737 } | 3752 } |
| 3738 } | 3753 } |
| 3739 | 3754 |
| 3740 core.Map<core.String, core.Object> toJson() { | 3755 core.Map<core.String, core.Object> toJson() { |
| 3741 final core.Map<core.String, core.Object> _json = new core.Map<core.String, core.Object>(); | 3756 final core.Map<core.String, core.Object> _json = new core.Map<core.String, core.Object>(); |
| 3757 if (badRecords != null) { |
| 3758 _json["badRecords"] = badRecords; |
| 3759 } |
| 3742 if (inputFileBytes != null) { | 3760 if (inputFileBytes != null) { |
| 3743 _json["inputFileBytes"] = inputFileBytes; | 3761 _json["inputFileBytes"] = inputFileBytes; |
| 3744 } | 3762 } |
| 3745 if (inputFiles != null) { | 3763 if (inputFiles != null) { |
| 3746 _json["inputFiles"] = inputFiles; | 3764 _json["inputFiles"] = inputFiles; |
| 3747 } | 3765 } |
| 3748 if (outputBytes != null) { | 3766 if (outputBytes != null) { |
| 3749 _json["outputBytes"] = outputBytes; | 3767 _json["outputBytes"] = outputBytes; |
| 3750 } | 3768 } |
| 3751 if (outputRows != null) { | 3769 if (outputRows != null) { |
| (...skipping 443 matching lines...) |
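The JobStatistics3 hunk above shows the serialization pattern used throughout this generated file: fromJson only assigns fields whose keys are present, and toJson only emits fields that are non-null. A hypothetical Python mirror of that round-trip pattern (field names taken from the source; the class itself is illustrative, not part of the client):

```python
class JobStatistics3:
    """Dict-backed sketch of the generated Dart class's JSON round-trip."""
    FIELDS = ("badRecords", "inputFileBytes", "inputFiles",
              "outputBytes", "outputRows")

    def __init__(self):
        for f in self.FIELDS:
            setattr(self, f, None)

    @classmethod
    def from_json(cls, json):
        obj = cls()
        for f in cls.FIELDS:
            if f in json:              # mirrors _json.containsKey(...)
                setattr(obj, f, json[f])
        return obj

    def to_json(self):
        # mirrors the generated toJson(): null fields are skipped entirely
        return {f: getattr(self, f)
                for f in self.FIELDS if getattr(self, f) is not None}

stats = JobStatistics3.from_json({"inputFiles": "3", "outputRows": "1200"})
print(stats.to_json())   # {'inputFiles': '3', 'outputRows': '1200'}
```

Because absent keys stay null and null fields are never emitted, a value round-trips without inventing defaults, which is why the diff's new badRecords field needed matching additions in both fromJson and toJson.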
| 4195 * timeout value, the call returns without any results and with the | 4213 * timeout value, the call returns without any results and with the |
| 4196 * 'jobComplete' flag set to false. You can call GetQueryResults() to wait for | 4214 * 'jobComplete' flag set to false. You can call GetQueryResults() to wait for |
| 4197 * the query to complete and read the results. The default value is 10000 | 4215 * the query to complete and read the results. The default value is 10000 |
| 4198 * milliseconds (10 seconds). | 4216 * milliseconds (10 seconds). |
| 4199 */ | 4217 */ |
| 4200 core.int timeoutMs; | 4218 core.int timeoutMs; |
| 4201 /** | 4219 /** |
| 4202 * Specifies whether to use BigQuery's legacy SQL dialect for this query. The | 4220 * Specifies whether to use BigQuery's legacy SQL dialect for this query. The |
| 4203 * default value is true. If set to false, the query will use BigQuery's | 4221 * default value is true. If set to false, the query will use BigQuery's |
| 4204 * standard SQL: https://cloud.google.com/bigquery/sql-reference/ When | 4222 * standard SQL: https://cloud.google.com/bigquery/sql-reference/ When |
| 4205 * useLegacySql is set to false, the values of allowLargeResults and | 4223 * useLegacySql is set to false, the value of flattenResults is ignored; query |
| 4206 * flattenResults are ignored; query will be run as if allowLargeResults is | 4224 * will be run as if flattenResults is false. |
| 4207 * true and flattenResults is false. | |
| 4208 */ | 4225 */ |
| 4209 core.bool useLegacySql; | 4226 core.bool useLegacySql; |
| 4210 /** | 4227 /** |
| 4211 * [Optional] Whether to look for the result in the query cache. The query | 4228 * [Optional] Whether to look for the result in the query cache. The query |
| 4212 * cache is a best-effort cache that will be flushed whenever tables in the | 4229 * cache is a best-effort cache that will be flushed whenever tables in the |
| 4213 * query are modified. The default value is true. | 4230 * query are modified. The default value is true. |
| 4214 */ | 4231 */ |
| 4215 core.bool useQueryCache; | 4232 core.bool useQueryCache; |
| 4216 | 4233 |
| 4217 QueryRequest(); | 4234 QueryRequest(); |
| (...skipping 1142 matching lines...) |
| 5360 } | 5377 } |
| 5361 if (useLegacySql != null) { | 5378 if (useLegacySql != null) { |
| 5362 _json["useLegacySql"] = useLegacySql; | 5379 _json["useLegacySql"] = useLegacySql; |
| 5363 } | 5380 } |
| 5364 if (userDefinedFunctionResources != null) { | 5381 if (userDefinedFunctionResources != null) { |
| 5365 _json["userDefinedFunctionResources"] = userDefinedFunctionResources.map((
value) => (value).toJson()).toList(); | 5382 _json["userDefinedFunctionResources"] = userDefinedFunctionResources.map((
value) => (value).toJson()).toList(); |
| 5366 } | 5383 } |
| 5367 return _json; | 5384 return _json; |
| 5368 } | 5385 } |
| 5369 } | 5386 } |
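Taken together, the QueryRequest fields documented in this diff (timeoutMs, useLegacySql, useQueryCache) describe the JSON body a client sends for a query call. A hypothetical payload sketch using those field names (the query text and defaults shown are illustrative, not taken from this file):

```python
import json

# Sketch of a request body built from the QueryRequest fields above.
query_request = {
    "query": "SELECT name FROM my_dataset.my_table LIMIT 10",  # made-up query
    "useLegacySql": False,  # use standard SQL; flattenResults is then ignored
    "useQueryCache": True,  # default: serve from the query cache when possible
    "timeoutMs": 10000,     # default wait of 10000 ms before jobComplete=false
}
print(json.dumps(query_request, indent=2))
```

If the job does not finish within timeoutMs, the response comes back with 'jobComplete' false and the caller polls GetQueryResults(), as the doc comment above describes.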