adbc_driver_bigquery

Low-Level API
Low-level ADBC bindings for the BigQuery driver.
- class adbc_driver_bigquery.DatabaseOptions(*values)
Bases: Enum
Database options specific to the BigQuery driver.
- AUTH_CLIENT_ID = 'adbc.bigquery.sql.auth.client_id'
Specify the client ID (together with AUTH_CLIENT_SECRET and AUTH_REFRESH_TOKEN) to use for the BigQuery connection if AUTH_TYPE is AUTH_VALUE_USER_AUTHENTICATION.
- AUTH_CLIENT_SECRET = 'adbc.bigquery.sql.auth.client_secret'
- AUTH_CREDENTIALS = 'adbc.bigquery.sql.auth_credentials'
Specify the credentials value. It should be the path to a JSON credentials file if AUTH_TYPE is AUTH_VALUE_JSON_CREDENTIAL_FILE, or the encoded JSON string if AUTH_TYPE is AUTH_VALUE_JSON_CREDENTIAL_STRING.
- AUTH_REFRESH_TOKEN = 'adbc.bigquery.sql.auth.refresh_token'
- AUTH_TYPE = 'adbc.bigquery.sql.auth_type'
Specify the authentication type to use for the BigQuery connection, from among those supported by the BigQuery driver. The default is “auth_bigquery”; use the AUTH_VALUE_* constants to select the desired authentication type.
- AUTH_VALUE_BIGQUERY = 'adbc.bigquery.sql.auth_type.auth_bigquery'
Use the default authentication method implemented in the Google Cloud SDK.
- AUTH_VALUE_JSON_CREDENTIAL_FILE = 'adbc.bigquery.sql.auth_type.json_credential_file'
Use a JSON credentials file for authentication.
- AUTH_VALUE_JSON_CREDENTIAL_STRING = 'adbc.bigquery.sql.auth_type.json_credential_string'
Use a JSON credentials string for authentication.
- AUTH_VALUE_USER_AUTHENTICATION = 'adbc.bigquery.sql.auth_type.user_authentication'
Use an access token for authentication.
- DATASET_ID = 'adbc.bigquery.sql.dataset_id'
Specify the dataset ID to use for the BigQuery connection.
- PROJECT_ID = 'adbc.bigquery.sql.project_id'
Specify the project ID to use for the BigQuery connection.
- TABLE_ID = 'adbc.bigquery.sql.table_id'
Specify the table ID to use for the BigQuery connection.
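As a sketch of how these options fit together, the snippet below builds the option dictionary for a connection that authenticates with a JSON credentials file; the project ID, dataset ID, and file path are placeholder values, not defaults shipped with the driver.

    from adbc_driver_bigquery import DatabaseOptions

    # Placeholder project, dataset, and key-file path; substitute your own.
    db_kwargs = {
        DatabaseOptions.PROJECT_ID.value: "my-project",
        DatabaseOptions.DATASET_ID.value: "my_dataset",
        DatabaseOptions.AUTH_TYPE.value: DatabaseOptions.AUTH_VALUE_JSON_CREDENTIAL_FILE.value,
        DatabaseOptions.AUTH_CREDENTIALS.value: "/path/to/credentials.json",
    }

This dictionary can then be passed as db_kwargs to adbc_driver_bigquery.dbapi.connect() (see the DBAPI section below).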
- class adbc_driver_bigquery.StatementOptions(*values)
Bases: Enum
Statement options specific to the BigQuery driver.
- ALLOW_LARGE_RESULTS = 'adbc.bigquery.sql.query.allow_large_results'
ALLOW_LARGE_RESULTS allows the query to produce arbitrarily large result tables. The destination must be a table. When using this option, queries will take longer to execute, even if the result set is small.
For additional limitations, see: https://cloud.google.com/bigquery/querying-data#largequeryresults
- CREATE_DISPOSITION = 'adbc.bigquery.sql.query.create_disposition'
CREATE_DISPOSITION specifies the circumstances under which the destination table will be created. The default is CREATE_IF_NEEDED. The following values are supported:
CREATE_IF_NEEDED: Creates the table if it does not already exist. Tables are created atomically on successful completion of a job.
CREATE_NEVER: The table must already exist; it will not be created automatically.
- CREATE_SESSION = 'adbc.bigquery.sql.query.create_session'
CREATE_SESSION triggers the creation of a new session when true.
- DEFAULT_DATASET_ID = 'adbc.bigquery.sql.query.default_dataset_id'
- DEFAULT_PROJECT_ID = 'adbc.bigquery.sql.query.default_project_id'
DEFAULT_PROJECT_ID and DEFAULT_DATASET_ID specify the dataset to use for unqualified table names in the query. If DEFAULT_PROJECT_ID is set, DEFAULT_DATASET_ID must also be set.
- DESTINATION_TABLE = 'adbc.bigquery.sql.query.destination_table'
The destination table is the table into which the results of the query will be written. If this option is not set, a temporary table will be created.
- DISABLE_FLATTEN_RESULTS = 'adbc.bigquery.sql.query.disable_flatten_results'
DISABLE_FLATTEN_RESULTS prevents results from being flattened. If this option is false, results from nested and repeated fields are flattened.
DISABLE_FLATTEN_RESULTS implies ALLOW_LARGE_RESULTS.
For more information, see: https://cloud.google.com/bigquery/docs/data#nested
- DISABLE_QUERY_CACHE = 'adbc.bigquery.sql.query.disable_query_cache'
DISABLE_QUERY_CACHE prevents results from being fetched from the query cache. If this option is false, results are fetched from the cache when they are available.
The query cache is a best-effort cache that is flushed whenever tables in the query are modified. Cached results are only available when no table ID is specified in the query’s destination table.
For more information, see: https://cloud.google.com/bigquery/querying-data#querycaching
- DRY_RUN = 'adbc.bigquery.sql.query.dry_run'
If true, don’t actually run this job. A valid query will return a mostly empty response with some processing statistics, while an invalid query will return the same error it would if it wasn’t a dry run.
- JOB_TIMEOUT = 'adbc.bigquery.sql.query.job_timeout'
Sets a best-effort deadline on a specific job. If job execution exceeds this timeout, BigQuery may attempt to cancel this work automatically.
This deadline cannot be adjusted or removed once the job is created.
- MAX_BILLING_TIER = 'adbc.bigquery.sql.query.max_billing_tier'
MAX_BILLING_TIER sets the maximum billing tier for a query. Queries whose resource usage exceeds this tier will fail (without incurring a charge). If this option is zero, the project default will be used.
- MAX_BYTES_BILLED = 'adbc.bigquery.sql.query.max_bytes_billed'
MAX_BYTES_BILLED limits the number of bytes billed for this job. Queries that would exceed this limit will fail (without incurring a charge). If this option is less than 1, the project default will be used.
- PRIORITY = 'adbc.bigquery.sql.query.priority'
PRIORITY specifies the priority with which to schedule the query. The default priority is INTERACTIVE. For more information, see: https://cloud.google.com/bigquery/querying-data#batchqueries
The following values are supported:
BATCH: The query is scheduled with batch priority. BigQuery queues each batch query on your behalf and starts the query as soon as idle resources are available, usually within a few minutes. If BigQuery hasn’t started the query within 24 hours, it changes the job priority to interactive. Batch queries don’t count towards your concurrent rate limit, which can make it easier to start many queries at once. More information can be found at: https://cloud.google.com/bigquery/docs/running-queries#batchqueries
INTERACTIVE: The query is scheduled with interactive priority, meaning it is executed as soon as possible. Interactive queries count towards your concurrent rate limit and your daily limit. This is the default priority. More information can be found at: https://cloud.google.com/bigquery/docs/running-queries#queries
- USE_LEGACY_SQL = 'adbc.bigquery.sql.query.use_legacy_sql'
USE_LEGACY_SQL causes the query to use legacy SQL.
- WRITE_DISPOSITION = 'adbc.bigquery.sql.query.write_disposition'
WRITE_DISPOSITION specifies how existing data in the destination table is treated. The default is WRITE_EMPTY. The following values are supported:
WRITE_APPEND: Appends to any existing data in the destination table. Data is appended atomically on successful completion of a job.
WRITE_TRUNCATE: Overwrites the existing data in the destination table. Data is overwritten atomically on successful completion of a job.
WRITE_EMPTY: Fails the write if the destination table already contains data.
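One way to set these options from Python is through the cursor’s underlying ADBC statement. The sketch below assumes an open DBAPI connection conn (see the next section) and a placeholder table name; option values are passed as strings, as with all ADBC options.

    from adbc_driver_bigquery import StatementOptions

    # `conn` is assumed to be an open adbc_driver_bigquery.dbapi connection;
    # my_dataset.my_table is a placeholder table name.
    with conn.cursor() as cur:
        cur.adbc_statement.set_options(**{
            StatementOptions.DRY_RUN.value: "true",          # validate the query without running it
            StatementOptions.USE_LEGACY_SQL.value: "false",  # use GoogleSQL rather than legacy SQL
        })
        cur.execute("SELECT COUNT(*) FROM my_dataset.my_table")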
DBAPI 2.0 API
DBAPI 2.0-compatible facade for the ADBC BigQuery driver.
- adbc_driver_bigquery.dbapi.connect(db_kwargs: Dict[str, str] | None = None, conn_kwargs: Dict[str, str] | None = None, **kwargs) → Connection
Connect to BigQuery via ADBC.
- Parameters:
- db_kwargs : dict, optional
Initial database connection parameters.
- conn_kwargs : dict, optional
Connection-specific parameters. (ADBC differentiates between a ‘database’ object, which can be shared between multiple ‘connection’ objects.)
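A minimal sketch of connecting and running a query, assuming placeholder project and dataset IDs and the default Google Cloud SDK authentication (AUTH_VALUE_BIGQUERY):

    import adbc_driver_bigquery.dbapi
    from adbc_driver_bigquery import DatabaseOptions

    # Placeholder project and dataset IDs; substitute your own.
    db_kwargs = {
        DatabaseOptions.PROJECT_ID.value: "my-project",
        DatabaseOptions.DATASET_ID.value: "my_dataset",
    }

    with adbc_driver_bigquery.dbapi.connect(db_kwargs) as conn:
        with conn.cursor() as cur:
            cur.execute("SELECT 1")
            print(cur.fetchone())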