Cloud SQL Admin API: instances

Instance Methods

ListServerCertificates(project, instance, x__xgafv=None)

Lists all versions of server certificates and certificate authorities (CAs) for the specified instance. There can be up to three sets of certs listed: the certificate that is currently in use, a future version that has been added but not yet used to sign a certificate, and a certificate that has been rotated out. For instances not using Certificate Authority Service (CAS) server CA, use ListServerCas instead.

RotateServerCertificate(project, instance, body=None, x__xgafv=None)

Rotates the server certificate version to one previously added with the addServerCertificate method. For instances not using Certificate Authority Service (CAS) server CA, use RotateServerCa instead.

acquireSsrsLease(project, instance, body=None, x__xgafv=None)

Acquire a lease for the setup of SQL Server Reporting Services (SSRS).

addServerCa(project, instance, x__xgafv=None)

Add a new trusted Certificate Authority (CA) version for the specified instance. Required to prepare for a certificate rotation. If a CA version was previously added but never used in a certificate rotation, this operation replaces that version. There cannot be more than one CA version waiting to be rotated in. For instances that have enabled Certificate Authority Service (CAS) based server CA, use AddServerCertificate to add a new server certificate.

addServerCertificate(project, instance, x__xgafv=None)

Add a new trusted server certificate version for the specified instance using Certificate Authority Service (CAS) server CA. Required to prepare for a certificate rotation. If a server certificate version was previously added but never used in a certificate rotation, this operation replaces that version. There cannot be more than one certificate version waiting to be rotated in. For instances not using CAS server CA, use AddServerCa instead.

clone(project, instance, body=None, x__xgafv=None)

Creates a Cloud SQL instance as a clone of the source instance. Using this operation might cause your instance to restart.

close()

Close httplib2 connections.

delete(project, instance, x__xgafv=None)

Deletes a Cloud SQL instance.

demote(project, instance, body=None, x__xgafv=None)

Demotes an existing standalone instance to be a Cloud SQL read replica for an external database server.

demoteMaster(project, instance, body=None, x__xgafv=None)

Demotes the stand-alone instance to be a Cloud SQL read replica for an external database server.

export(project, instance, body=None, x__xgafv=None)

Exports data from a Cloud SQL instance to a Cloud Storage bucket as a SQL dump or CSV file.

failover(project, instance, body=None, x__xgafv=None)

Initiates a manual failover of a high availability (HA) primary instance to a standby instance, which becomes the primary instance. Users are then rerouted to the new primary. For more information, see the [Overview of high availability](https://cloud.google.com/sql/docs/mysql/high-availability) page in the Cloud SQL documentation. If using Legacy HA (MySQL only), this causes the instance to failover to its failover replica instance.

get(project, instance, x__xgafv=None)

Retrieves a resource containing information about a Cloud SQL instance.

import_(project, instance, body=None, x__xgafv=None)

Imports data into a Cloud SQL instance from a SQL dump or CSV file in Cloud Storage.

insert(project, body=None, x__xgafv=None)

Creates a new Cloud SQL instance.

list(project, filter=None, maxResults=None, pageToken=None, x__xgafv=None)

Lists instances under a given project.

listServerCas(project, instance, x__xgafv=None)

Lists all of the trusted Certificate Authorities (CAs) for the specified instance. There can be up to three CAs listed: the CA that was used to sign the certificate that is currently in use, a CA that has been added but not yet used to sign a certificate, and a CA used to sign a certificate that has previously rotated out.

list_next()

Retrieves the next page of results.

patch(project, instance, body=None, x__xgafv=None)

Partially updates settings of a Cloud SQL instance by merging the request with the current configuration. This method supports patch semantics.

promoteReplica(project, instance, failover=None, x__xgafv=None)

Promotes the read replica instance to be an independent Cloud SQL primary instance. Using this operation might cause your instance to restart.

reencrypt(project, instance, body=None, x__xgafv=None)

Re-encrypts a CMEK instance with the latest key version.

releaseSsrsLease(project, instance, x__xgafv=None)

Release a lease for the setup of SQL Server Reporting Services (SSRS).

resetSslConfig(project, instance, x__xgafv=None)

Deletes all client certificates and generates a new server SSL certificate for the instance.

restart(project, instance, x__xgafv=None)

Restarts a Cloud SQL instance.

restoreBackup(project, instance, body=None, x__xgafv=None)

Restores a backup of a Cloud SQL instance. Using this operation might cause your instance to restart.

rotateServerCa(project, instance, body=None, x__xgafv=None)

Rotates the server certificate to one signed by the Certificate Authority (CA) version previously added with the addServerCa method. For instances that have enabled Certificate Authority Service (CAS) based server CA, use RotateServerCertificate to rotate the server certificate.

startReplica(project, instance, x__xgafv=None)

Starts the replication in the read replica instance.

stopReplica(project, instance, x__xgafv=None)

Stops the replication in the read replica instance.

switchover(project, instance, dbTimeout=None, x__xgafv=None)

Switches over from the primary instance to the designated DR replica instance.

truncateLog(project, instance, body=None, x__xgafv=None)

Truncates the MySQL general and slow query log tables. Applies to MySQL only.

update(project, instance, body=None, x__xgafv=None)

Updates settings of a Cloud SQL instance. Using this operation might cause your instance to restart.
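All of the methods above are invoked through a service object built with the google-api-python-client discovery mechanism. A minimal sketch of the call pattern follows; the `my-project` and `my-instance` names are placeholders, and the network calls are shown as comments because they assume an installed client library and configured Application Default Credentials. Most mutating methods return an Operation resource (see Method Details below), which callers typically poll until its `status` is `DONE`.

```python
# Hedged sketch of the call pattern (assumes google-api-python-client is
# installed and Application Default Credentials are configured):
#
#   from googleapiclient import discovery
#   service = discovery.build("sqladmin", "v1")
#   op = service.instances().restart(
#       project="my-project", instance="my-instance").execute()
#
# A finished Operation may still carry errors; a small helper to check:

def operation_failed(op: dict) -> bool:
    """True if a finished Operation dict carries errors in its `error` field."""
    return op.get("status") == "DONE" and bool(op.get("error", {}).get("errors"))
```

The helper works on the plain dict the client returns from `execute()`, so it can be reused after any of the Operation-returning methods listed above.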

Method Details

ListServerCertificates(project, instance, x__xgafv=None)
Lists all versions of server certificates and certificate authorities (CAs) for the specified instance. There can be up to three sets of certs listed: the certificate that is currently in use, a future version that has been added but not yet used to sign a certificate, and a certificate that has been rotated out. For instances not using Certificate Authority Service (CAS) server CA, use ListServerCas instead.

Args:
  project: string, Required. Project ID of the project that contains the instance. (required)
  instance: string, Required. Cloud SQL instance ID. This does not include the project ID. (required)
  x__xgafv: string, V1 error format.
    Allowed values
      1 - v1 error format
      2 - v2 error format

Returns:
  An object of the form:

    { # Instances ListServerCertificates response.
  "activeVersion": "A String", # The `sha1_fingerprint` of the active certificate from `server_certs`.
  "caCerts": [ # List of server CA certificates for the instance.
    { # SslCerts Resource
      "cert": "A String", # PEM representation.
      "certSerialNumber": "A String", # Serial number, as extracted from the certificate.
      "commonName": "A String", # User supplied name. Constrained to [a-zA-Z.-_ ]+.
      "createTime": "A String", # The time when the certificate was created in [RFC 3339](https://tools.ietf.org/html/rfc3339) format, for example `2012-11-15T16:19:00.094Z`.
      "expirationTime": "A String", # The time when the certificate expires in [RFC 3339](https://tools.ietf.org/html/rfc3339) format, for example `2012-11-15T16:19:00.094Z`.
      "instance": "A String", # Name of the database instance.
      "kind": "A String", # This is always `sql#sslCert`.
      "selfLink": "A String", # The URI of this resource.
      "sha1Fingerprint": "A String", # Sha1 Fingerprint.
    },
  ],
  "kind": "A String", # This is always `sql#instancesListServerCertificates`.
  "serverCerts": [ # List of server certificates for the instance, signed by the corresponding CA from the `ca_certs` list.
    { # SslCerts Resource
      "cert": "A String", # PEM representation.
      "certSerialNumber": "A String", # Serial number, as extracted from the certificate.
      "commonName": "A String", # User supplied name. Constrained to [a-zA-Z.-_ ]+.
      "createTime": "A String", # The time when the certificate was created in [RFC 3339](https://tools.ietf.org/html/rfc3339) format, for example `2012-11-15T16:19:00.094Z`.
      "expirationTime": "A String", # The time when the certificate expires in [RFC 3339](https://tools.ietf.org/html/rfc3339) format, for example `2012-11-15T16:19:00.094Z`.
      "instance": "A String", # Name of the database instance.
      "kind": "A String", # This is always `sql#sslCert`.
      "selfLink": "A String", # The URI of this resource.
      "sha1Fingerprint": "A String", # Sha1 Fingerprint.
    },
  ],
}
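One common use of this response is picking out the currently active server certificate by matching `activeVersion` against the `sha1_fingerprint` values in `serverCerts`. A sketch against the schema above (the response dict passed in is whatever `execute()` returned; the fingerprints in the usage example are hypothetical):

```python
from typing import Optional

def active_server_cert(resp: dict) -> Optional[dict]:
    """Return the entry in `serverCerts` whose sha1Fingerprint matches
    the response's `activeVersion`, or None if there is no match."""
    active = resp.get("activeVersion")
    for cert in resp.get("serverCerts", []):
        if cert.get("sha1Fingerprint") == active:
            return cert
    return None
```

The same pattern applies to `caCerts` when you need the CA that signed the active certificate.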
RotateServerCertificate(project, instance, body=None, x__xgafv=None)
Rotates the server certificate version to one previously added with the addServerCertificate method. For instances not using Certificate Authority Service (CAS) server CA, use RotateServerCa instead.

Args:
  project: string, Required. Project ID of the project that contains the instance. (required)
  instance: string, Required. Cloud SQL instance ID. This does not include the project ID. (required)
  body: object, The request body.
    The object takes the form of:

{ # Rotate Server Certificate request.
  "rotateServerCertificateContext": { # Instance rotate server certificate context. # Optional. Contains details about the rotate server CA operation.
    "kind": "A String", # Optional. This is always `sql#rotateServerCertificateContext`.
    "nextVersion": "A String", # Optional. The fingerprint of the next version to be rotated to. If left unspecified, will be rotated to the most recently added server certificate version.
  },
}

  x__xgafv: string, V1 error format.
    Allowed values
      1 - v1 error format
      2 - v2 error format

Returns:
  An object of the form:

    { # An Operation resource. For successful operations that return an Operation resource, only the fields relevant to the operation are populated in the resource.
  "acquireSsrsLeaseContext": { # Acquire SSRS lease context. # The context for acquire SSRS lease operation, if applicable.
    "duration": "A String", # Lease duration needed for the SSRS setup.
    "reportDatabase": "A String", # The report database to be used for the SSRS setup.
    "serviceLogin": "A String", # The username to be used as the service login to connect to the report database for SSRS setup.
    "setupLogin": "A String", # The username to be used as the setup login to connect to the database server for SSRS setup.
  },
  "apiWarning": { # An Admin API warning message. # An Admin API warning message.
    "code": "A String", # Code to uniquely identify the warning type.
    "message": "A String", # The warning message.
    "region": "A String", # The region name for REGION_UNREACHABLE warning.
  },
  "backupContext": { # Backup context. # The context for backup operation, if applicable.
    "backupId": "A String", # The identifier of the backup.
    "kind": "A String", # This is always `sql#backupContext`.
  },
  "endTime": "A String", # The time this operation finished in UTC timezone in [RFC 3339](https://tools.ietf.org/html/rfc3339) format, for example `2012-11-15T16:19:00.094Z`.
  "error": { # Database instance operation errors list wrapper. # If errors occurred during processing of this operation, this field will be populated.
    "errors": [ # The list of errors encountered while processing this operation.
      { # Database instance operation error.
        "code": "A String", # Identifies the specific error that occurred.
        "kind": "A String", # This is always `sql#operationError`.
        "message": "A String", # Additional information about the error encountered.
      },
    ],
    "kind": "A String", # This is always `sql#operationErrors`.
  },
  "exportContext": { # Database instance export context. # The context for export operation, if applicable.
    "bakExportOptions": { # Options for exporting BAK files (SQL Server-only)
      "bakType": "A String", # Type of this bak file will be export, FULL or DIFF, SQL Server only
      "copyOnly": True or False, # Deprecated: copy_only is deprecated. Use differential_base instead
      "differentialBase": True or False, # Whether or not the backup can be used as a differential base copy_only backup can not be served as differential base
      "exportLogEndTime": "A String", # Optional. The end timestamp when transaction log will be included in the export operation. [RFC 3339](https://tools.ietf.org/html/rfc3339) format (for example, `2023-10-01T16:19:00.094`) in UTC. When omitted, all available logs until current time will be included. Only applied to Cloud SQL for SQL Server.
      "exportLogStartTime": "A String", # Optional. The begin timestamp when transaction log will be included in the export operation. [RFC 3339](https://tools.ietf.org/html/rfc3339) format (for example, `2023-10-01T16:19:00.094`) in UTC. When omitted, all available logs from the beginning of retention period will be included. Only applied to Cloud SQL for SQL Server.
      "stripeCount": 42, # Option for specifying how many stripes to use for the export. If blank, and the value of the striped field is true, the number of stripes is automatically chosen.
      "striped": True or False, # Whether or not the export should be striped.
    },
    "csvExportOptions": { # Options for exporting data as CSV. `MySQL` and `PostgreSQL` instances only.
      "escapeCharacter": "A String", # Specifies the character that should appear before a data character that needs to be escaped.
      "fieldsTerminatedBy": "A String", # Specifies the character that separates columns within each row (line) of the file.
      "linesTerminatedBy": "A String", # This is used to separate lines. If a line does not contain all fields, the rest of the columns are set to their default values.
      "quoteCharacter": "A String", # Specifies the quoting character to be used when a data value is quoted.
      "selectQuery": "A String", # The select query used to extract the data.
    },
    "databases": [ # Databases to be exported. `MySQL instances:` If `fileType` is `SQL` and no database is specified, all databases are exported, except for the `mysql` system database. If `fileType` is `CSV`, you can specify one database, either by using this property or by using the `csvExportOptions.selectQuery` property, which takes precedence over this property. `PostgreSQL instances:` You must specify one database to be exported. If `fileType` is `CSV`, this database must match the one specified in the `csvExportOptions.selectQuery` property. `SQL Server instances:` You must specify one database to be exported, and the `fileType` must be `BAK`.
      "A String",
    ],
    "fileType": "A String", # The file type for the specified uri.
    "kind": "A String", # This is always `sql#exportContext`.
    "offload": True or False, # Option for export offload.
    "sqlExportOptions": { # Options for exporting data as SQL statements.
      "mysqlExportOptions": { # Options for exporting from MySQL.
        "masterData": 42, # Option to include SQL statement required to set up replication. If set to `1`, the dump file includes a CHANGE MASTER TO statement with the binary log coordinates, and --set-gtid-purged is set to ON. If set to `2`, the CHANGE MASTER TO statement is written as a SQL comment and has no effect. If set to any value other than `1`, --set-gtid-purged is set to OFF.
      },
      "parallel": True or False, # Optional. Whether or not the export should be parallel.
      "postgresExportOptions": { # Options for exporting from a Cloud SQL for PostgreSQL instance.
        "clean": True or False, # Optional. Use this option to include DROP SQL statements. These statements are used to delete database objects before running the import operation.
        "ifExists": True or False, # Optional. Option to include an IF EXISTS SQL statement with each DROP statement produced by clean.
      },
      "schemaOnly": True or False, # Export only schemas.
      "tables": [ # Tables to export, or that were exported, from the specified database. If you specify tables, specify one and only one database. For PostgreSQL instances, you can specify only one table.
        "A String",
      ],
      "threads": 42, # Optional. The number of threads to use for parallel export.
    },
    "uri": "A String", # The path to the file in Google Cloud Storage where the export will be stored. The URI is in the form `gs://bucketName/fileName`. If the file already exists, the request succeeds, but the operation fails. If `fileType` is `SQL` and the filename ends with .gz, the contents are compressed.
  },
  "importContext": { # Database instance import context. # The context for import operation, if applicable.
    "bakImportOptions": { # Import parameters specific to SQL Server .BAK files
      "bakType": "A String", # Type of the bak content, FULL or DIFF.
      "encryptionOptions": {
        "certPath": "A String", # Path to the Certificate (.cer) in Cloud Storage, in the form `gs://bucketName/fileName`. The instance must have write permissions to the bucket and read access to the file.
        "pvkPassword": "A String", # Password that encrypts the private key
        "pvkPath": "A String", # Path to the Certificate Private Key (.pvk) in Cloud Storage, in the form `gs://bucketName/fileName`. The instance must have write permissions to the bucket and read access to the file.
      },
      "noRecovery": True or False, # Whether or not the backup importing will restore database with NORECOVERY option Applies only to Cloud SQL for SQL Server.
      "recoveryOnly": True or False, # Whether or not the backup importing request will just bring database online without downloading Bak content only one of "no_recovery" and "recovery_only" can be true otherwise error will return. Applies only to Cloud SQL for SQL Server.
      "stopAt": "A String", # Optional. The timestamp when the import should stop. This timestamp is in the [RFC 3339](https://tools.ietf.org/html/rfc3339) format (for example, `2023-10-01T16:19:00.094`). This field is equivalent to the STOPAT keyword and applies to Cloud SQL for SQL Server only.
      "stopAtMark": "A String", # Optional. The marked transaction where the import should stop. This field is equivalent to the STOPATMARK keyword and applies to Cloud SQL for SQL Server only.
      "striped": True or False, # Whether or not the backup set being restored is striped. Applies only to Cloud SQL for SQL Server.
    },
    "csvImportOptions": { # Options for importing data as CSV.
      "columns": [ # The columns to which CSV data is imported. If not specified, all columns of the database table are loaded with CSV data.
        "A String",
      ],
      "escapeCharacter": "A String", # Specifies the character that should appear before a data character that needs to be escaped.
      "fieldsTerminatedBy": "A String", # Specifies the character that separates columns within each row (line) of the file.
      "linesTerminatedBy": "A String", # This is used to separate lines. If a line does not contain all fields, the rest of the columns are set to their default values.
      "quoteCharacter": "A String", # Specifies the quoting character to be used when a data value is quoted.
      "table": "A String", # The table to which CSV data is imported.
    },
    "database": "A String", # The target database for the import. If `fileType` is `SQL`, this field is required only if the import file does not specify a database, and is overridden by any database specification in the import file. If `fileType` is `CSV`, one database must be specified.
    "fileType": "A String", # The file type for the specified uri. * `SQL`: The file contains SQL statements. * `CSV`: The file contains CSV data. * `BAK`: The file contains backup data for a SQL Server instance.
    "importUser": "A String", # The PostgreSQL user for this import operation. PostgreSQL instances only.
    "kind": "A String", # This is always `sql#importContext`.
    "sqlImportOptions": { # Optional. Options for importing data from SQL statements.
      "parallel": True or False, # Optional. Whether or not the import should be parallel.
      "postgresImportOptions": { # Optional. Options for importing from a Cloud SQL for PostgreSQL instance.
        "clean": True or False, # Optional. The --clean flag for the pg_restore utility. This flag applies only if you enabled Cloud SQL to import files in parallel.
        "ifExists": True or False, # Optional. The --if-exists flag for the pg_restore utility. This flag applies only if you enabled Cloud SQL to import files in parallel.
      },
      "threads": 42, # Optional. The number of threads to use for parallel import.
    },
    "uri": "A String", # Path to the import file in Cloud Storage, in the form `gs://bucketName/fileName`. Compressed gzip files (.gz) are supported when `fileType` is `SQL`. The instance must have write permissions to the bucket and read access to the file.
  },
  "insertTime": "A String", # The time this operation was enqueued in UTC timezone in [RFC 3339](https://tools.ietf.org/html/rfc3339) format, for example `2012-11-15T16:19:00.094Z`.
  "kind": "A String", # This is always `sql#operation`.
  "name": "A String", # An identifier that uniquely identifies the operation. You can use this identifier to retrieve the Operations resource that has information about the operation.
  "operationType": "A String", # The type of the operation. Valid values are: * `CREATE` * `DELETE` * `UPDATE` * `RESTART` * `IMPORT` * `EXPORT` * `BACKUP_VOLUME` * `RESTORE_VOLUME` * `CREATE_USER` * `DELETE_USER` * `CREATE_DATABASE` * `DELETE_DATABASE`
  "selfLink": "A String", # The URI of this resource.
  "startTime": "A String", # The time this operation actually started in UTC timezone in [RFC 3339](https://tools.ietf.org/html/rfc3339) format, for example `2012-11-15T16:19:00.094Z`.
  "status": "A String", # The status of an operation.
  "targetId": "A String", # Name of the database instance related to this operation.
  "targetLink": "A String",
  "targetProject": "A String", # The project ID of the target instance related to this operation.
  "user": "A String", # The email address of the user who initiated this operation.
}
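The request body for this method can be assembled as a plain dict matching the schema above. A sketch, with the service call left as a comment since it assumes a built `service` object and real project/instance names; the fingerprint in the usage line is hypothetical:

```python
from typing import Optional

def rotate_server_certificate_body(next_version: Optional[str] = None) -> dict:
    """Build a RotateServerCertificate request body per the schema above.

    If `next_version` (the fingerprint of the target certificate version) is
    omitted, the service rotates to the most recently added version.
    """
    ctx = {"kind": "sql#rotateServerCertificateContext"}
    if next_version is not None:
        ctx["nextVersion"] = next_version
    return {"rotateServerCertificateContext": ctx}

# body = rotate_server_certificate_body("ab:cd:ef")
# op = service.instances().RotateServerCertificate(
#     project="my-project", instance="my-instance", body=body).execute()
```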
acquireSsrsLease(project, instance, body=None, x__xgafv=None)
Acquire a lease for the setup of SQL Server Reporting Services (SSRS).

Args:
  project: string, Required. ID of the project that contains the instance (Example: project-id). (required)
  instance: string, Required. Cloud SQL instance ID. This doesn't include the project ID. It's composed of lowercase letters, numbers, and hyphens, and it must start with a letter. The total length must be 98 characters or less (Example: instance-id). (required)
  body: object, The request body.
    The object takes the form of:

{ # Request to acquire an SSRS lease for an instance.
  "acquireSsrsLeaseContext": { # Acquire SSRS lease context. # Contains details about the acquire SSRS lease operation.
    "duration": "A String", # Lease duration needed for the SSRS setup.
    "reportDatabase": "A String", # The report database to be used for the SSRS setup.
    "serviceLogin": "A String", # The username to be used as the service login to connect to the report database for SSRS setup.
    "setupLogin": "A String", # The username to be used as the setup login to connect to the database server for SSRS setup.
  },
}

  x__xgafv: string, V1 error format.
    Allowed values
      1 - v1 error format
      2 - v2 error format

Returns:
  An object of the form:

    { # Acquire SSRS lease response.
  "operationId": "A String", # The unique identifier for this operation.
}
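A sketch of calling this method, with a helper that checks the instance-ID rules stated in the Args above (lowercase letters, numbers, and hyphens; must start with a letter; at most 98 characters). All of the context values in the body are hypothetical placeholders, and the service call is commented out because it assumes a built `service` object:

```python
import re

def valid_instance_id(instance: str) -> bool:
    """Check the instance-ID rules stated above: lowercase letters, numbers,
    and hyphens, starting with a letter, at most 98 characters total."""
    return bool(re.fullmatch(r"[a-z][a-z0-9-]{0,97}", instance))

# Hypothetical values; `duration` is a duration string such as "3600s".
body = {
    "acquireSsrsLeaseContext": {
        "duration": "3600s",
        "reportDatabase": "ReportServer",
        "serviceLogin": "ssrs-service",
        "setupLogin": "ssrs-setup",
    },
}
# op = service.instances().acquireSsrsLease(
#     project="project-id", instance="instance-id", body=body).execute()
# op["operationId"] identifies the lease-acquisition operation.
```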
addServerCa(project, instance, x__xgafv=None)
Add a new trusted Certificate Authority (CA) version for the specified instance. Required to prepare for a certificate rotation. If a CA version was previously added but never used in a certificate rotation, this operation replaces that version. There cannot be more than one CA version waiting to be rotated in. For instances that have enabled Certificate Authority Service (CAS) based server CA, use AddServerCertificate to add a new server certificate.

Args:
  project: string, Project ID of the project that contains the instance. (required)
  instance: string, Cloud SQL instance ID. This does not include the project ID. (required)
  x__xgafv: string, V1 error format.
    Allowed values
      1 - v1 error format
      2 - v2 error format

Returns:
  An object of the form:

    { # An Operation resource. For successful operations that return an Operation resource, only the fields relevant to the operation are populated in the resource.
  "acquireSsrsLeaseContext": { # Acquire SSRS lease context. # The context for acquire SSRS lease operation, if applicable.
    "duration": "A String", # Lease duration needed for the SSRS setup.
    "reportDatabase": "A String", # The report database to be used for the SSRS setup.
    "serviceLogin": "A String", # The username to be used as the service login to connect to the report database for SSRS setup.
    "setupLogin": "A String", # The username to be used as the setup login to connect to the database server for SSRS setup.
  },
  "apiWarning": { # An Admin API warning message. # An Admin API warning message.
    "code": "A String", # Code to uniquely identify the warning type.
    "message": "A String", # The warning message.
    "region": "A String", # The region name for REGION_UNREACHABLE warning.
  },
  "backupContext": { # Backup context. # The context for backup operation, if applicable.
    "backupId": "A String", # The identifier of the backup.
    "kind": "A String", # This is always `sql#backupContext`.
  },
  "endTime": "A String", # The time this operation finished in UTC timezone in [RFC 3339](https://tools.ietf.org/html/rfc3339) format, for example `2012-11-15T16:19:00.094Z`.
  "error": { # Database instance operation errors list wrapper. # If errors occurred during processing of this operation, this field will be populated.
    "errors": [ # The list of errors encountered while processing this operation.
      { # Database instance operation error.
        "code": "A String", # Identifies the specific error that occurred.
        "kind": "A String", # This is always `sql#operationError`.
        "message": "A String", # Additional information about the error encountered.
      },
    ],
    "kind": "A String", # This is always `sql#operationErrors`.
  },
  "exportContext": { # Database instance export context. # The context for export operation, if applicable.
    "bakExportOptions": { # Options for exporting BAK files (SQL Server-only)
      "bakType": "A String", # Type of this bak file will be export, FULL or DIFF, SQL Server only
      "copyOnly": True or False, # Deprecated: copy_only is deprecated. Use differential_base instead
      "differentialBase": True or False, # Whether or not the backup can be used as a differential base copy_only backup can not be served as differential base
      "exportLogEndTime": "A String", # Optional. The end timestamp when transaction log will be included in the export operation. [RFC 3339](https://tools.ietf.org/html/rfc3339) format (for example, `2023-10-01T16:19:00.094`) in UTC. When omitted, all available logs until current time will be included. Only applied to Cloud SQL for SQL Server.
      "exportLogStartTime": "A String", # Optional. The begin timestamp when transaction log will be included in the export operation. [RFC 3339](https://tools.ietf.org/html/rfc3339) format (for example, `2023-10-01T16:19:00.094`) in UTC. When omitted, all available logs from the beginning of retention period will be included. Only applied to Cloud SQL for SQL Server.
      "stripeCount": 42, # Option for specifying how many stripes to use for the export. If blank, and the value of the striped field is true, the number of stripes is automatically chosen.
      "striped": True or False, # Whether or not the export should be striped.
    },
    "csvExportOptions": { # Options for exporting data as CSV. `MySQL` and `PostgreSQL` instances only.
      "escapeCharacter": "A String", # Specifies the character that should appear before a data character that needs to be escaped.
      "fieldsTerminatedBy": "A String", # Specifies the character that separates columns within each row (line) of the file.
      "linesTerminatedBy": "A String", # This is used to separate lines. If a line does not contain all fields, the rest of the columns are set to their default values.
      "quoteCharacter": "A String", # Specifies the quoting character to be used when a data value is quoted.
      "selectQuery": "A String", # The select query used to extract the data.
    },
    "databases": [ # Databases to be exported. `MySQL instances:` If `fileType` is `SQL` and no database is specified, all databases are exported, except for the `mysql` system database. If `fileType` is `CSV`, you can specify one database, either by using this property or by using the `csvExportOptions.selectQuery` property, which takes precedence over this property. `PostgreSQL instances:` You must specify one database to be exported. If `fileType` is `CSV`, this database must match the one specified in the `csvExportOptions.selectQuery` property. `SQL Server instances:` You must specify one database to be exported, and the `fileType` must be `BAK`.
      "A String",
    ],
    "fileType": "A String", # The file type for the specified uri.
    "kind": "A String", # This is always `sql#exportContext`.
    "offload": True or False, # Option for export offload.
    "sqlExportOptions": { # Options for exporting data as SQL statements.
      "mysqlExportOptions": { # Options for exporting from MySQL.
        "masterData": 42, # Option to include SQL statement required to set up replication. If set to `1`, the dump file includes a CHANGE MASTER TO statement with the binary log coordinates, and --set-gtid-purged is set to ON. If set to `2`, the CHANGE MASTER TO statement is written as a SQL comment and has no effect. If set to any value other than `1`, --set-gtid-purged is set to OFF.
      },
      "parallel": True or False, # Optional. Whether or not the export should be parallel.
      "postgresExportOptions": { # Options for exporting from a Cloud SQL for PostgreSQL instance.
        "clean": True or False, # Optional. Use this option to include DROP SQL statements. These statements are used to delete database objects before running the import operation.
        "ifExists": True or False, # Optional. Option to include an IF EXISTS SQL statement with each DROP statement produced by clean.
      },
      "schemaOnly": True or False, # Export only schemas.
      "tables": [ # Tables to export, or that were exported, from the specified database. If you specify tables, specify one and only one database. For PostgreSQL instances, you can specify only one table.
        "A String",
      ],
      "threads": 42, # Optional. The number of threads to use for parallel export.
    },
    "uri": "A String", # The path to the file in Google Cloud Storage where the export will be stored. The URI is in the form `gs://bucketName/fileName`. If the file already exists, the request succeeds, but the operation fails. If `fileType` is `SQL` and the filename ends with .gz, the contents are compressed.
  },
  "importContext": { # Database instance import context. # The context for import operation, if applicable.
    "bakImportOptions": { # Import parameters specific to SQL Server .BAK files
      "bakType": "A String", # Type of the bak content, FULL or DIFF.
      "encryptionOptions": {
        "certPath": "A String", # Path to the Certificate (.cer) in Cloud Storage, in the form `gs://bucketName/fileName`. The instance must have write permissions to the bucket and read access to the file.
        "pvkPassword": "A String", # Password that encrypts the private key.
        "pvkPath": "A String", # Path to the Certificate Private Key (.pvk) in Cloud Storage, in the form `gs://bucketName/fileName`. The instance must have write permissions to the bucket and read access to the file.
      },
      "noRecovery": True or False, # Whether the import restores the database with the NORECOVERY option. Applies only to Cloud SQL for SQL Server.
      "recoveryOnly": True or False, # Whether the import request only brings the database online, without downloading .BAK content. At most one of `no_recovery` and `recovery_only` can be true; otherwise the request returns an error. Applies only to Cloud SQL for SQL Server.
      "stopAt": "A String", # Optional. The timestamp when the import should stop. This timestamp is in the [RFC 3339](https://tools.ietf.org/html/rfc3339) format (for example, `2023-10-01T16:19:00.094`). This field is equivalent to the STOPAT keyword and applies to Cloud SQL for SQL Server only.
      "stopAtMark": "A String", # Optional. The marked transaction where the import should stop. This field is equivalent to the STOPATMARK keyword and applies to Cloud SQL for SQL Server only.
      "striped": True or False, # Whether or not the backup set being restored is striped. Applies only to Cloud SQL for SQL Server.
    },
    "csvImportOptions": { # Options for importing data as CSV.
      "columns": [ # The columns to which CSV data is imported. If not specified, all columns of the database table are loaded with CSV data.
        "A String",
      ],
      "escapeCharacter": "A String", # Specifies the character that should appear before a data character that needs to be escaped.
      "fieldsTerminatedBy": "A String", # Specifies the character that separates columns within each row (line) of the file.
      "linesTerminatedBy": "A String", # This is used to separate lines. If a line does not contain all fields, the rest of the columns are set to their default values.
      "quoteCharacter": "A String", # Specifies the quoting character to be used when a data value is quoted.
      "table": "A String", # The table to which CSV data is imported.
    },
    "database": "A String", # The target database for the import. If `fileType` is `SQL`, this field is required only if the import file does not specify a database, and is overridden by any database specification in the import file. If `fileType` is `CSV`, one database must be specified.
    "fileType": "A String", # The file type for the specified uri. * `SQL`: The file contains SQL statements. * `CSV`: The file contains CSV data. * `BAK`: The file contains backup data for a SQL Server instance.
    "importUser": "A String", # The PostgreSQL user for this import operation. PostgreSQL instances only.
    "kind": "A String", # This is always `sql#importContext`.
    "sqlImportOptions": { # Optional. Options for importing data from SQL statements.
      "parallel": True or False, # Optional. Whether or not the import should be parallel.
      "postgresImportOptions": { # Optional. Options for importing from a Cloud SQL for PostgreSQL instance.
        "clean": True or False, # Optional. The --clean flag for the pg_restore utility. This flag applies only if you enabled Cloud SQL to import files in parallel.
        "ifExists": True or False, # Optional. The --if-exists flag for the pg_restore utility. This flag applies only if you enabled Cloud SQL to import files in parallel.
      },
      "threads": 42, # Optional. The number of threads to use for parallel import.
    },
    "uri": "A String", # Path to the import file in Cloud Storage, in the form `gs://bucketName/fileName`. Compressed gzip files (.gz) are supported when `fileType` is `SQL`. The instance must have write permissions to the bucket and read access to the file.
  },
  "insertTime": "A String", # The time this operation was enqueued in UTC timezone in [RFC 3339](https://tools.ietf.org/html/rfc3339) format, for example `2012-11-15T16:19:00.094Z`.
  "kind": "A String", # This is always `sql#operation`.
  "name": "A String", # An identifier that uniquely identifies the operation. You can use this identifier to retrieve the Operations resource that has information about the operation.
  "operationType": "A String", # The type of the operation. Valid values are: * `CREATE` * `DELETE` * `UPDATE` * `RESTART` * `IMPORT` * `EXPORT` * `BACKUP_VOLUME` * `RESTORE_VOLUME` * `CREATE_USER` * `DELETE_USER` * `CREATE_DATABASE` * `DELETE_DATABASE`
  "selfLink": "A String", # The URI of this resource.
  "startTime": "A String", # The time this operation actually started in UTC timezone in [RFC 3339](https://tools.ietf.org/html/rfc3339) format, for example `2012-11-15T16:19:00.094Z`.
  "status": "A String", # The status of an operation.
  "targetId": "A String", # Name of the database instance related to this operation.
  "targetLink": "A String",
  "targetProject": "A String", # The project ID of the target instance related to this operation.
  "user": "A String", # The email address of the user who initiated this operation.
}
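The Operation resource above reports failures through the nested `error.errors` list rather than an exception. As a minimal sketch (not part of the generated client library; the sample response values are hypothetical), a caller might flatten that structure before deciding how to react:

```python
# Sketch of inspecting an Operation response dict shaped like the schema
# documented above. Field names follow the schema; values are made up.

def operation_errors(operation):
    """Return (code, message) pairs from an Operation's error wrapper."""
    error = operation.get("error", {})
    return [(e.get("code"), e.get("message"))
            for e in error.get("errors", [])]

sample = {
    "kind": "sql#operation",
    "status": "DONE",
    "error": {
        "kind": "sql#operationErrors",
        "errors": [
            {"kind": "sql#operationError",
             "code": "INTERNAL_ERROR",
             "message": "An internal error occurred."},
        ],
    },
}
```

A successful operation simply omits the `error` field, so the helper returns an empty list for it.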
addServerCertificate(project, instance, x__xgafv=None)
Add a new trusted server certificate version for the specified instance using Certificate Authority Service (CAS) server CA. Required to prepare for a certificate rotation. If a server certificate version was previously added but never used in a certificate rotation, this operation replaces that version. There cannot be more than one certificate version waiting to be rotated in. For instances not using CAS server CA, use AddServerCa instead.

Args:
  project: string, Required. Project ID of the project that contains the instance. (required)
  instance: string, Required. Cloud SQL instance ID. This does not include the project ID. (required)
  x__xgafv: string, V1 error format.
    Allowed values
      1 - v1 error format
      2 - v2 error format

Returns:
  An object of the form:

    { # An Operation resource. For successful operations that return an Operation resource, only the fields relevant to the operation are populated in the resource.
  "acquireSsrsLeaseContext": { # Acquire SSRS lease context. # The context for acquire SSRS lease operation, if applicable.
    "duration": "A String", # Lease duration needed for the SSRS setup.
    "reportDatabase": "A String", # The report database to be used for the SSRS setup.
    "serviceLogin": "A String", # The username to be used as the service login to connect to the report database for SSRS setup.
    "setupLogin": "A String", # The username to be used as the setup login to connect to the database server for SSRS setup.
  },
  "apiWarning": { # An Admin API warning message. # An Admin API warning message.
    "code": "A String", # Code to uniquely identify the warning type.
    "message": "A String", # The warning message.
    "region": "A String", # The region name for REGION_UNREACHABLE warning.
  },
  "backupContext": { # Backup context. # The context for backup operation, if applicable.
    "backupId": "A String", # The identifier of the backup.
    "kind": "A String", # This is always `sql#backupContext`.
  },
  "endTime": "A String", # The time this operation finished in UTC timezone in [RFC 3339](https://tools.ietf.org/html/rfc3339) format, for example `2012-11-15T16:19:00.094Z`.
  "error": { # Database instance operation errors list wrapper. # If errors occurred during processing of this operation, this field will be populated.
    "errors": [ # The list of errors encountered while processing this operation.
      { # Database instance operation error.
        "code": "A String", # Identifies the specific error that occurred.
        "kind": "A String", # This is always `sql#operationError`.
        "message": "A String", # Additional information about the error encountered.
      },
    ],
    "kind": "A String", # This is always `sql#operationErrors`.
  },
  "exportContext": { # Database instance export context. # The context for export operation, if applicable.
    "bakExportOptions": { # Options for exporting BAK files (SQL Server only).
      "bakType": "A String", # Type of the .BAK file to export, FULL or DIFF. SQL Server only.
      "copyOnly": True or False, # Deprecated: `copy_only` is deprecated. Use `differential_base` instead.
      "differentialBase": True or False, # Whether the backup can be used as a differential base. A copy-only backup cannot serve as a differential base.
      "exportLogEndTime": "A String", # Optional. The end timestamp up to which transaction logs are included in the export operation, in [RFC 3339](https://tools.ietf.org/html/rfc3339) format (for example, `2023-10-01T16:19:00.094`) in UTC. When omitted, all available logs up to the current time are included. Applies only to Cloud SQL for SQL Server.
      "exportLogStartTime": "A String", # Optional. The start timestamp from which transaction logs are included in the export operation, in [RFC 3339](https://tools.ietf.org/html/rfc3339) format (for example, `2023-10-01T16:19:00.094`) in UTC. When omitted, all available logs from the beginning of the retention period are included. Applies only to Cloud SQL for SQL Server.
      "stripeCount": 42, # Option for specifying how many stripes to use for the export. If blank, and the value of the striped field is true, the number of stripes is automatically chosen.
      "striped": True or False, # Whether or not the export should be striped.
    },
    "csvExportOptions": { # Options for exporting data as CSV. `MySQL` and `PostgreSQL` instances only.
      "escapeCharacter": "A String", # Specifies the character that should appear before a data character that needs to be escaped.
      "fieldsTerminatedBy": "A String", # Specifies the character that separates columns within each row (line) of the file.
      "linesTerminatedBy": "A String", # This is used to separate lines. If a line does not contain all fields, the rest of the columns are set to their default values.
      "quoteCharacter": "A String", # Specifies the quoting character to be used when a data value is quoted.
      "selectQuery": "A String", # The select query used to extract the data.
    },
    "databases": [ # Databases to be exported. `MySQL instances:` If `fileType` is `SQL` and no database is specified, all databases are exported, except for the `mysql` system database. If `fileType` is `CSV`, you can specify one database, either by using this property or by using the `csvExportOptions.selectQuery` property, which takes precedence over this property. `PostgreSQL instances:` You must specify one database to be exported. If `fileType` is `CSV`, this database must match the one specified in the `csvExportOptions.selectQuery` property. `SQL Server instances:` You must specify one database to be exported, and the `fileType` must be `BAK`.
      "A String",
    ],
    "fileType": "A String", # The file type for the specified uri.
    "kind": "A String", # This is always `sql#exportContext`.
    "offload": True or False, # Option for export offload.
    "sqlExportOptions": { # Options for exporting data as SQL statements.
      "mysqlExportOptions": { # Options for exporting from MySQL.
        "masterData": 42, # Option to include SQL statement required to set up replication. If set to `1`, the dump file includes a CHANGE MASTER TO statement with the binary log coordinates, and --set-gtid-purged is set to ON. If set to `2`, the CHANGE MASTER TO statement is written as a SQL comment and has no effect. If set to any value other than `1`, --set-gtid-purged is set to OFF.
      },
      "parallel": True or False, # Optional. Whether or not the export should be parallel.
      "postgresExportOptions": { # Options for exporting from a Cloud SQL for PostgreSQL instance.
        "clean": True or False, # Optional. Use this option to include DROP SQL statements. These statements are used to delete database objects before running the import operation.
        "ifExists": True or False, # Optional. Option to include an IF EXISTS SQL statement with each DROP statement produced by clean.
      },
      "schemaOnly": True or False, # Export only schemas.
      "tables": [ # Tables to export, or that were exported, from the specified database. If you specify tables, specify one and only one database. For PostgreSQL instances, you can specify only one table.
        "A String",
      ],
      "threads": 42, # Optional. The number of threads to use for parallel export.
    },
    "uri": "A String", # The path to the file in Google Cloud Storage where the export will be stored. The URI is in the form `gs://bucketName/fileName`. If the file already exists, the request succeeds, but the operation fails. If `fileType` is `SQL` and the filename ends with .gz, the contents are compressed.
  },
  "importContext": { # Database instance import context. # The context for import operation, if applicable.
    "bakImportOptions": { # Import parameters specific to SQL Server .BAK files
      "bakType": "A String", # Type of the bak content, FULL or DIFF.
      "encryptionOptions": {
        "certPath": "A String", # Path to the Certificate (.cer) in Cloud Storage, in the form `gs://bucketName/fileName`. The instance must have write permissions to the bucket and read access to the file.
        "pvkPassword": "A String", # Password that encrypts the private key.
        "pvkPath": "A String", # Path to the Certificate Private Key (.pvk) in Cloud Storage, in the form `gs://bucketName/fileName`. The instance must have write permissions to the bucket and read access to the file.
      },
      "noRecovery": True or False, # Whether the import restores the database with the NORECOVERY option. Applies only to Cloud SQL for SQL Server.
      "recoveryOnly": True or False, # Whether the import request only brings the database online, without downloading .BAK content. At most one of `no_recovery` and `recovery_only` can be true; otherwise the request returns an error. Applies only to Cloud SQL for SQL Server.
      "stopAt": "A String", # Optional. The timestamp when the import should stop. This timestamp is in the [RFC 3339](https://tools.ietf.org/html/rfc3339) format (for example, `2023-10-01T16:19:00.094`). This field is equivalent to the STOPAT keyword and applies to Cloud SQL for SQL Server only.
      "stopAtMark": "A String", # Optional. The marked transaction where the import should stop. This field is equivalent to the STOPATMARK keyword and applies to Cloud SQL for SQL Server only.
      "striped": True or False, # Whether or not the backup set being restored is striped. Applies only to Cloud SQL for SQL Server.
    },
    "csvImportOptions": { # Options for importing data as CSV.
      "columns": [ # The columns to which CSV data is imported. If not specified, all columns of the database table are loaded with CSV data.
        "A String",
      ],
      "escapeCharacter": "A String", # Specifies the character that should appear before a data character that needs to be escaped.
      "fieldsTerminatedBy": "A String", # Specifies the character that separates columns within each row (line) of the file.
      "linesTerminatedBy": "A String", # This is used to separate lines. If a line does not contain all fields, the rest of the columns are set to their default values.
      "quoteCharacter": "A String", # Specifies the quoting character to be used when a data value is quoted.
      "table": "A String", # The table to which CSV data is imported.
    },
    "database": "A String", # The target database for the import. If `fileType` is `SQL`, this field is required only if the import file does not specify a database, and is overridden by any database specification in the import file. If `fileType` is `CSV`, one database must be specified.
    "fileType": "A String", # The file type for the specified uri. * `SQL`: The file contains SQL statements. * `CSV`: The file contains CSV data. * `BAK`: The file contains backup data for a SQL Server instance.
    "importUser": "A String", # The PostgreSQL user for this import operation. PostgreSQL instances only.
    "kind": "A String", # This is always `sql#importContext`.
    "sqlImportOptions": { # Optional. Options for importing data from SQL statements.
      "parallel": True or False, # Optional. Whether or not the import should be parallel.
      "postgresImportOptions": { # Optional. Options for importing from a Cloud SQL for PostgreSQL instance.
        "clean": True or False, # Optional. The --clean flag for the pg_restore utility. This flag applies only if you enabled Cloud SQL to import files in parallel.
        "ifExists": True or False, # Optional. The --if-exists flag for the pg_restore utility. This flag applies only if you enabled Cloud SQL to import files in parallel.
      },
      "threads": 42, # Optional. The number of threads to use for parallel import.
    },
    "uri": "A String", # Path to the import file in Cloud Storage, in the form `gs://bucketName/fileName`. Compressed gzip files (.gz) are supported when `fileType` is `SQL`. The instance must have write permissions to the bucket and read access to the file.
  },
  "insertTime": "A String", # The time this operation was enqueued in UTC timezone in [RFC 3339](https://tools.ietf.org/html/rfc3339) format, for example `2012-11-15T16:19:00.094Z`.
  "kind": "A String", # This is always `sql#operation`.
  "name": "A String", # An identifier that uniquely identifies the operation. You can use this identifier to retrieve the Operations resource that has information about the operation.
  "operationType": "A String", # The type of the operation. Valid values are: * `CREATE` * `DELETE` * `UPDATE` * `RESTART` * `IMPORT` * `EXPORT` * `BACKUP_VOLUME` * `RESTORE_VOLUME` * `CREATE_USER` * `DELETE_USER` * `CREATE_DATABASE` * `DELETE_DATABASE`
  "selfLink": "A String", # The URI of this resource.
  "startTime": "A String", # The time this operation actually started in UTC timezone in [RFC 3339](https://tools.ietf.org/html/rfc3339) format, for example `2012-11-15T16:19:00.094Z`.
  "status": "A String", # The status of an operation.
  "targetId": "A String", # Name of the database instance related to this operation.
  "targetLink": "A String",
  "targetProject": "A String", # The project ID of the target instance related to this operation.
  "user": "A String", # The email address of the user who initiated this operation.
}
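Because `addServerCertificate` returns an Operation that may still be running, a caller typically polls `operations().get()` until the status is `DONE`. A hedged usage sketch follows; the project and instance names are placeholders, error handling is minimal, and the `service` object is assumed to come from `googleapiclient.discovery.build('sqladmin', 'v1')`:

```python
# Sketch of starting addServerCertificate and polling the returned
# Operation. Not part of the generated library; names are placeholders.
import time

def add_server_certificate(service, project, instance, poll_seconds=2):
    """Start the operation, then poll operations().get() until DONE."""
    op = service.instances().addServerCertificate(
        project=project, instance=instance).execute()
    while op.get("status") != "DONE":
        time.sleep(poll_seconds)
        op = service.operations().get(
            project=project, operation=op["name"]).execute()
    return op
```

After the operation completes, inspect `op.get("error")` before treating the new certificate version as added.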
clone(project, instance, body=None, x__xgafv=None)
Creates a Cloud SQL instance as a clone of the source instance. Using this operation might cause your instance to restart.

Args:
  project: string, Project ID of the source as well as the clone Cloud SQL instance. (required)
  instance: string, The ID of the Cloud SQL instance to be cloned (source). This does not include the project ID. (required)
  body: object, The request body.
    The object takes the form of:

{ # Database instance clone request.
  "cloneContext": { # Database instance clone context. # Contains details about the clone operation.
    "allocatedIpRange": "A String", # The name of the allocated ip range for the private ip Cloud SQL instance. For example: "google-managed-services-default". If set, the cloned instance ip will be created in the allocated range. The range name must comply with [RFC 1035](https://tools.ietf.org/html/rfc1035). Specifically, the name must be 1-63 characters long and match the regular expression [a-z]([-a-z0-9]*[a-z0-9])?. Reserved for future use.
    "binLogCoordinates": { # Binary log coordinates. # Binary log coordinates, if specified, identify the position up to which the source instance is cloned. If not specified, the source instance is cloned up to the most recent binary log coordinates.
      "binLogFileName": "A String", # Name of the binary log file for a Cloud SQL instance.
      "binLogPosition": "A String", # Position (offset) within the binary log file.
      "kind": "A String", # This is always `sql#binLogCoordinates`.
    },
    "databaseNames": [ # (SQL Server only) Clone only the specified databases from the source instance. Clone all databases if empty.
      "A String",
    ],
    "destinationInstanceName": "A String", # Name of the Cloud SQL instance to be created as a clone.
    "kind": "A String", # This is always `sql#cloneContext`.
    "pitrTimestampMs": "A String", # Reserved for future use.
    "pointInTime": "A String", # Timestamp, if specified, identifies the time to which the source instance is cloned.
    "preferredSecondaryZone": "A String", # Optional. Copy clone and point-in-time recovery clone of a regional instance in the specified zones. If not specified, clone to the same secondary zone as the source instance. This value cannot be the same as the preferred_zone field.
    "preferredZone": "A String", # Optional. Copy clone and point-in-time recovery clone of an instance to the specified zone. If no zone is specified, clone to the same primary zone as the source instance.
  },
}

  x__xgafv: string, V1 error format.
    Allowed values
      1 - v1 error format
      2 - v2 error format

Returns:
  An object of the form:

    { # An Operation resource. For successful operations that return an Operation resource, only the fields relevant to the operation are populated in the resource.
  "acquireSsrsLeaseContext": { # Acquire SSRS lease context. # The context for acquire SSRS lease operation, if applicable.
    "duration": "A String", # Lease duration needed for the SSRS setup.
    "reportDatabase": "A String", # The report database to be used for the SSRS setup.
    "serviceLogin": "A String", # The username to be used as the service login to connect to the report database for SSRS setup.
    "setupLogin": "A String", # The username to be used as the setup login to connect to the database server for SSRS setup.
  },
  "apiWarning": { # An Admin API warning message. # An Admin API warning message.
    "code": "A String", # Code to uniquely identify the warning type.
    "message": "A String", # The warning message.
    "region": "A String", # The region name for REGION_UNREACHABLE warning.
  },
  "backupContext": { # Backup context. # The context for backup operation, if applicable.
    "backupId": "A String", # The identifier of the backup.
    "kind": "A String", # This is always `sql#backupContext`.
  },
  "endTime": "A String", # The time this operation finished in UTC timezone in [RFC 3339](https://tools.ietf.org/html/rfc3339) format, for example `2012-11-15T16:19:00.094Z`.
  "error": { # Database instance operation errors list wrapper. # If errors occurred during processing of this operation, this field will be populated.
    "errors": [ # The list of errors encountered while processing this operation.
      { # Database instance operation error.
        "code": "A String", # Identifies the specific error that occurred.
        "kind": "A String", # This is always `sql#operationError`.
        "message": "A String", # Additional information about the error encountered.
      },
    ],
    "kind": "A String", # This is always `sql#operationErrors`.
  },
  "exportContext": { # Database instance export context. # The context for export operation, if applicable.
    "bakExportOptions": { # Options for exporting BAK files (SQL Server only).
      "bakType": "A String", # Type of the .BAK file to export, FULL or DIFF. SQL Server only.
      "copyOnly": True or False, # Deprecated: `copy_only` is deprecated. Use `differential_base` instead.
      "differentialBase": True or False, # Whether the backup can be used as a differential base. A copy-only backup cannot serve as a differential base.
      "exportLogEndTime": "A String", # Optional. The end timestamp up to which transaction logs are included in the export operation, in [RFC 3339](https://tools.ietf.org/html/rfc3339) format (for example, `2023-10-01T16:19:00.094`) in UTC. When omitted, all available logs up to the current time are included. Applies only to Cloud SQL for SQL Server.
      "exportLogStartTime": "A String", # Optional. The start timestamp from which transaction logs are included in the export operation, in [RFC 3339](https://tools.ietf.org/html/rfc3339) format (for example, `2023-10-01T16:19:00.094`) in UTC. When omitted, all available logs from the beginning of the retention period are included. Applies only to Cloud SQL for SQL Server.
      "stripeCount": 42, # Option for specifying how many stripes to use for the export. If blank, and the value of the striped field is true, the number of stripes is automatically chosen.
      "striped": True or False, # Whether or not the export should be striped.
    },
    "csvExportOptions": { # Options for exporting data as CSV. `MySQL` and `PostgreSQL` instances only.
      "escapeCharacter": "A String", # Specifies the character that should appear before a data character that needs to be escaped.
      "fieldsTerminatedBy": "A String", # Specifies the character that separates columns within each row (line) of the file.
      "linesTerminatedBy": "A String", # This is used to separate lines. If a line does not contain all fields, the rest of the columns are set to their default values.
      "quoteCharacter": "A String", # Specifies the quoting character to be used when a data value is quoted.
      "selectQuery": "A String", # The select query used to extract the data.
    },
    "databases": [ # Databases to be exported. `MySQL instances:` If `fileType` is `SQL` and no database is specified, all databases are exported, except for the `mysql` system database. If `fileType` is `CSV`, you can specify one database, either by using this property or by using the `csvExportOptions.selectQuery` property, which takes precedence over this property. `PostgreSQL instances:` You must specify one database to be exported. If `fileType` is `CSV`, this database must match the one specified in the `csvExportOptions.selectQuery` property. `SQL Server instances:` You must specify one database to be exported, and the `fileType` must be `BAK`.
      "A String",
    ],
    "fileType": "A String", # The file type for the specified uri.
    "kind": "A String", # This is always `sql#exportContext`.
    "offload": True or False, # Option for export offload.
    "sqlExportOptions": { # Options for exporting data as SQL statements.
      "mysqlExportOptions": { # Options for exporting from MySQL.
        "masterData": 42, # Option to include SQL statement required to set up replication. If set to `1`, the dump file includes a CHANGE MASTER TO statement with the binary log coordinates, and --set-gtid-purged is set to ON. If set to `2`, the CHANGE MASTER TO statement is written as a SQL comment and has no effect. If set to any value other than `1`, --set-gtid-purged is set to OFF.
      },
      "parallel": True or False, # Optional. Whether or not the export should be parallel.
      "postgresExportOptions": { # Options for exporting from a Cloud SQL for PostgreSQL instance.
        "clean": True or False, # Optional. Use this option to include DROP SQL statements. These statements are used to delete database objects before running the import operation.
        "ifExists": True or False, # Optional. Option to include an IF EXISTS SQL statement with each DROP statement produced by clean.
      },
      "schemaOnly": True or False, # Export only schemas.
      "tables": [ # Tables to export, or that were exported, from the specified database. If you specify tables, specify one and only one database. For PostgreSQL instances, you can specify only one table.
        "A String",
      ],
      "threads": 42, # Optional. The number of threads to use for parallel export.
    },
    "uri": "A String", # The path to the file in Google Cloud Storage where the export will be stored. The URI is in the form `gs://bucketName/fileName`. If the file already exists, the request succeeds, but the operation fails. If `fileType` is `SQL` and the filename ends with .gz, the contents are compressed.
  },
  "importContext": { # Database instance import context. # The context for import operation, if applicable.
    "bakImportOptions": { # Import parameters specific to SQL Server .BAK files
      "bakType": "A String", # Type of the bak content, FULL or DIFF.
      "encryptionOptions": {
        "certPath": "A String", # Path to the Certificate (.cer) in Cloud Storage, in the form `gs://bucketName/fileName`. The instance must have write permissions to the bucket and read access to the file.
        "pvkPassword": "A String", # Password that encrypts the private key.
        "pvkPath": "A String", # Path to the Certificate Private Key (.pvk) in Cloud Storage, in the form `gs://bucketName/fileName`. The instance must have write permissions to the bucket and read access to the file.
      },
      "noRecovery": True or False, # Whether the import restores the database with the NORECOVERY option. Applies only to Cloud SQL for SQL Server.
      "recoveryOnly": True or False, # Whether the import request only brings the database online, without downloading .BAK content. At most one of `no_recovery` and `recovery_only` can be true; otherwise the request returns an error. Applies only to Cloud SQL for SQL Server.
      "stopAt": "A String", # Optional. The timestamp when the import should stop. This timestamp is in the [RFC 3339](https://tools.ietf.org/html/rfc3339) format (for example, `2023-10-01T16:19:00.094`). This field is equivalent to the STOPAT keyword and applies to Cloud SQL for SQL Server only.
      "stopAtMark": "A String", # Optional. The marked transaction where the import should stop. This field is equivalent to the STOPATMARK keyword and applies to Cloud SQL for SQL Server only.
      "striped": True or False, # Whether or not the backup set being restored is striped. Applies only to Cloud SQL for SQL Server.
    },
    "csvImportOptions": { # Options for importing data as CSV.
      "columns": [ # The columns to which CSV data is imported. If not specified, all columns of the database table are loaded with CSV data.
        "A String",
      ],
      "escapeCharacter": "A String", # Specifies the character that should appear before a data character that needs to be escaped.
      "fieldsTerminatedBy": "A String", # Specifies the character that separates columns within each row (line) of the file.
      "linesTerminatedBy": "A String", # This is used to separate lines. If a line does not contain all fields, the rest of the columns are set to their default values.
      "quoteCharacter": "A String", # Specifies the quoting character to be used when a data value is quoted.
      "table": "A String", # The table to which CSV data is imported.
    },
    "database": "A String", # The target database for the import. If `fileType` is `SQL`, this field is required only if the import file does not specify a database, and is overridden by any database specification in the import file. If `fileType` is `CSV`, one database must be specified.
    "fileType": "A String", # The file type for the specified uri. * `SQL`: The file contains SQL statements. * `CSV`: The file contains CSV data. * `BAK`: The file contains backup data for a SQL Server instance.
    "importUser": "A String", # The PostgreSQL user for this import operation. PostgreSQL instances only.
    "kind": "A String", # This is always `sql#importContext`.
    "sqlImportOptions": { # Optional. Options for importing data from SQL statements.
      "parallel": True or False, # Optional. Whether or not the import should be parallel.
      "postgresImportOptions": { # Optional. Options for importing from a Cloud SQL for PostgreSQL instance.
        "clean": True or False, # Optional. The --clean flag for the pg_restore utility. This flag applies only if you enabled Cloud SQL to import files in parallel.
        "ifExists": True or False, # Optional. The --if-exists flag for the pg_restore utility. This flag applies only if you enabled Cloud SQL to import files in parallel.
      },
      "threads": 42, # Optional. The number of threads to use for parallel import.
    },
    "uri": "A String", # Path to the import file in Cloud Storage, in the form `gs://bucketName/fileName`. Compressed gzip files (.gz) are supported when `fileType` is `SQL`. The instance must have write permissions to the bucket and read access to the file.
  },
  "insertTime": "A String", # The time this operation was enqueued in UTC timezone in [RFC 3339](https://tools.ietf.org/html/rfc3339) format, for example `2012-11-15T16:19:00.094Z`.
  "kind": "A String", # This is always `sql#operation`.
  "name": "A String", # An identifier that uniquely identifies the operation. You can use this identifier to retrieve the Operations resource that has information about the operation.
  "operationType": "A String", # The type of the operation. Valid values are: * `CREATE` * `DELETE` * `UPDATE` * `RESTART` * `IMPORT` * `EXPORT` * `BACKUP_VOLUME` * `RESTORE_VOLUME` * `CREATE_USER` * `DELETE_USER` * `CREATE_DATABASE` * `DELETE_DATABASE`
  "selfLink": "A String", # The URI of this resource.
  "startTime": "A String", # The time this operation actually started in UTC timezone in [RFC 3339](https://tools.ietf.org/html/rfc3339) format, for example `2012-11-15T16:19:00.094Z`.
  "status": "A String", # The status of an operation.
  "targetId": "A String", # Name of the database instance related to this operation.
  "targetLink": "A String",
  "targetProject": "A String", # The project ID of the target instance related to this operation.
  "user": "A String", # The email address of the user who initiated this operation.
}
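
When an operation does not complete cleanly, the `error.errors` list in the Operation resource above is where the details land. A minimal sketch of pulling those errors out of a response dictionary; the sample response here is illustrative, shaped like the schema above rather than taken from a real API call:

```python
def operation_errors(operation):
    """Return (code, message) pairs from an Operation resource dict.

    An empty list means the operation reported no errors.
    """
    wrapper = operation.get("error", {})
    return [(e.get("code"), e.get("message")) for e in wrapper.get("errors", [])]

# Illustrative response shaped like the Operation resource documented above.
resp = {
    "kind": "sql#operation",
    "status": "DONE",
    "error": {
        "kind": "sql#operationErrors",
        "errors": [
            # Hypothetical error values, for demonstration only.
            {"kind": "sql#operationError", "code": "INVALID_REQUEST", "message": "Bad field."},
        ],
    },
}

print(operation_errors(resp))  # [('INVALID_REQUEST', 'Bad field.')]
```

In practice `resp` would be the dict returned by `execute()` on a request built with the Python client.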
close()
Close httplib2 connections.
delete(project, instance, x__xgafv=None)
Deletes a Cloud SQL instance.

Args:
  project: string, Project ID of the project that contains the instance to be deleted. (required)
  instance: string, Cloud SQL instance ID. This does not include the project ID. (required)
  x__xgafv: string, V1 error format.
    Allowed values
      1 - v1 error format
      2 - v2 error format

Returns:
  An object of the form:

    { # An Operation resource. For successful operations that return an Operation resource, only the fields relevant to the operation are populated in the resource.
  "acquireSsrsLeaseContext": { # Acquire SSRS lease context. # The context for acquire SSRS lease operation, if applicable.
    "duration": "A String", # Lease duration needed for the SSRS setup.
    "reportDatabase": "A String", # The report database to be used for the SSRS setup.
    "serviceLogin": "A String", # The username to be used as the service login to connect to the report database for SSRS setup.
    "setupLogin": "A String", # The username to be used as the setup login to connect to the database server for SSRS setup.
  },
  "apiWarning": { # An Admin API warning message. # An Admin API warning message.
    "code": "A String", # Code to uniquely identify the warning type.
    "message": "A String", # The warning message.
    "region": "A String", # The region name for REGION_UNREACHABLE warning.
  },
  "backupContext": { # Backup context. # The context for backup operation, if applicable.
    "backupId": "A String", # The identifier of the backup.
    "kind": "A String", # This is always `sql#backupContext`.
  },
  "endTime": "A String", # The time this operation finished in UTC timezone in [RFC 3339](https://tools.ietf.org/html/rfc3339) format, for example `2012-11-15T16:19:00.094Z`.
  "error": { # Database instance operation errors list wrapper. # If errors occurred during processing of this operation, this field will be populated.
    "errors": [ # The list of errors encountered while processing this operation.
      { # Database instance operation error.
        "code": "A String", # Identifies the specific error that occurred.
        "kind": "A String", # This is always `sql#operationError`.
        "message": "A String", # Additional information about the error encountered.
      },
    ],
    "kind": "A String", # This is always `sql#operationErrors`.
  },
  "exportContext": { # Database instance export context. # The context for export operation, if applicable.
    "bakExportOptions": { # Options for exporting BAK files (SQL Server-only)
      "bakType": "A String", # The type of BAK file that this export produces: `FULL` or `DIFF`. SQL Server only.
      "copyOnly": True or False, # Deprecated: `copy_only` is deprecated. Use `differential_base` instead.
      "differentialBase": True or False, # Whether or not the backup can be used as a differential base. A copy-only backup cannot serve as a differential base.
      "exportLogEndTime": "A String", # Optional. The end timestamp up to which transaction log entries are included in the export operation, in [RFC 3339](https://tools.ietf.org/html/rfc3339) format (for example, `2023-10-01T16:19:00.094`) in UTC. When omitted, all available logs up to the current time are included. Applies only to Cloud SQL for SQL Server.
      "exportLogStartTime": "A String", # Optional. The start timestamp from which transaction log entries are included in the export operation, in [RFC 3339](https://tools.ietf.org/html/rfc3339) format (for example, `2023-10-01T16:19:00.094`) in UTC. When omitted, all available logs from the beginning of the retention period are included. Applies only to Cloud SQL for SQL Server.
      "stripeCount": 42, # Option for specifying how many stripes to use for the export. If blank, and the value of the striped field is true, the number of stripes is automatically chosen.
      "striped": True or False, # Whether or not the export should be striped.
    },
    "csvExportOptions": { # Options for exporting data as CSV. `MySQL` and `PostgreSQL` instances only.
      "escapeCharacter": "A String", # Specifies the character that should appear before a data character that needs to be escaped.
      "fieldsTerminatedBy": "A String", # Specifies the character that separates columns within each row (line) of the file.
      "linesTerminatedBy": "A String", # This is used to separate lines. If a line does not contain all fields, the rest of the columns are set to their default values.
      "quoteCharacter": "A String", # Specifies the quoting character to be used when a data value is quoted.
      "selectQuery": "A String", # The select query used to extract the data.
    },
    "databases": [ # Databases to be exported. `MySQL instances:` If `fileType` is `SQL` and no database is specified, all databases are exported, except for the `mysql` system database. If `fileType` is `CSV`, you can specify one database, either by using this property or by using the `csvExportOptions.selectQuery` property, which takes precedence over this property. `PostgreSQL instances:` You must specify one database to be exported. If `fileType` is `CSV`, this database must match the one specified in the `csvExportOptions.selectQuery` property. `SQL Server instances:` You must specify one database to be exported, and the `fileType` must be `BAK`.
      "A String",
    ],
    "fileType": "A String", # The file type for the specified uri.
    "kind": "A String", # This is always `sql#exportContext`.
    "offload": True or False, # Option for export offload.
    "sqlExportOptions": { # Options for exporting data as SQL statements.
      "mysqlExportOptions": { # Options for exporting from MySQL.
        "masterData": 42, # Option to include SQL statement required to set up replication. If set to `1`, the dump file includes a CHANGE MASTER TO statement with the binary log coordinates, and --set-gtid-purged is set to ON. If set to `2`, the CHANGE MASTER TO statement is written as a SQL comment and has no effect. If set to any value other than `1`, --set-gtid-purged is set to OFF.
      },
      "parallel": True or False, # Optional. Whether or not the export should be parallel.
      "postgresExportOptions": { # Options for exporting from a Cloud SQL for PostgreSQL instance.
        "clean": True or False, # Optional. Use this option to include DROP SQL statements. These statements are used to delete database objects before running the import operation.
        "ifExists": True or False, # Optional. Option to include an IF EXISTS SQL statement with each DROP statement produced by clean.
      },
      "schemaOnly": True or False, # Export only schemas.
      "tables": [ # Tables to export, or that were exported, from the specified database. If you specify tables, specify one and only one database. For PostgreSQL instances, you can specify only one table.
        "A String",
      ],
      "threads": 42, # Optional. The number of threads to use for parallel export.
    },
    "uri": "A String", # The path to the file in Google Cloud Storage where the export will be stored. The URI is in the form `gs://bucketName/fileName`. If the file already exists, the request succeeds, but the operation fails. If `fileType` is `SQL` and the filename ends with .gz, the contents are compressed.
  },
  "importContext": { # Database instance import context. # The context for import operation, if applicable.
    "bakImportOptions": { # Import parameters specific to SQL Server .BAK files
      "bakType": "A String", # Type of the bak content, FULL or DIFF.
      "encryptionOptions": {
        "certPath": "A String", # Path to the Certificate (.cer) in Cloud Storage, in the form `gs://bucketName/fileName`. The instance must have write permissions to the bucket and read access to the file.
        "pvkPassword": "A String", # The password that encrypts the private key.
        "pvkPath": "A String", # Path to the Certificate Private Key (.pvk) in Cloud Storage, in the form `gs://bucketName/fileName`. The instance must have write permissions to the bucket and read access to the file.
      },
      "noRecovery": True or False, # Whether or not the import restores the database with the NORECOVERY option. Applies only to Cloud SQL for SQL Server.
      "recoveryOnly": True or False, # Whether or not the import request only brings the database online without downloading BAK content. Only one of `no_recovery` and `recovery_only` can be true; otherwise, an error is returned. Applies only to Cloud SQL for SQL Server.
      "stopAt": "A String", # Optional. The timestamp when the import should stop. This timestamp is in the [RFC 3339](https://tools.ietf.org/html/rfc3339) format (for example, `2023-10-01T16:19:00.094`). This field is equivalent to the STOPAT keyword and applies to Cloud SQL for SQL Server only.
      "stopAtMark": "A String", # Optional. The marked transaction where the import should stop. This field is equivalent to the STOPATMARK keyword and applies to Cloud SQL for SQL Server only.
      "striped": True or False, # Whether or not the backup set being restored is striped. Applies only to Cloud SQL for SQL Server.
    },
    "csvImportOptions": { # Options for importing data as CSV.
      "columns": [ # The columns to which CSV data is imported. If not specified, all columns of the database table are loaded with CSV data.
        "A String",
      ],
      "escapeCharacter": "A String", # Specifies the character that should appear before a data character that needs to be escaped.
      "fieldsTerminatedBy": "A String", # Specifies the character that separates columns within each row (line) of the file.
      "linesTerminatedBy": "A String", # This is used to separate lines. If a line does not contain all fields, the rest of the columns are set to their default values.
      "quoteCharacter": "A String", # Specifies the quoting character to be used when a data value is quoted.
      "table": "A String", # The table to which CSV data is imported.
    },
    "database": "A String", # The target database for the import. If `fileType` is `SQL`, this field is required only if the import file does not specify a database, and is overridden by any database specification in the import file. If `fileType` is `CSV`, one database must be specified.
    "fileType": "A String", # The file type for the specified uri. * `SQL`: The file contains SQL statements. * `CSV`: The file contains CSV data. * `BAK`: The file contains backup data for a SQL Server instance.
    "importUser": "A String", # The PostgreSQL user for this import operation. PostgreSQL instances only.
    "kind": "A String", # This is always `sql#importContext`.
    "sqlImportOptions": { # Optional. Options for importing data from SQL statements.
      "parallel": True or False, # Optional. Whether or not the import should be parallel.
      "postgresImportOptions": { # Optional. Options for importing from a Cloud SQL for PostgreSQL instance.
        "clean": True or False, # Optional. The --clean flag for the pg_restore utility. This flag applies only if you enabled Cloud SQL to import files in parallel.
        "ifExists": True or False, # Optional. The --if-exists flag for the pg_restore utility. This flag applies only if you enabled Cloud SQL to import files in parallel.
      },
      "threads": 42, # Optional. The number of threads to use for parallel import.
    },
    "uri": "A String", # Path to the import file in Cloud Storage, in the form `gs://bucketName/fileName`. Compressed gzip files (.gz) are supported when `fileType` is `SQL`. The instance must have write permissions to the bucket and read access to the file.
  },
  "insertTime": "A String", # The time this operation was enqueued in UTC timezone in [RFC 3339](https://tools.ietf.org/html/rfc3339) format, for example `2012-11-15T16:19:00.094Z`.
  "kind": "A String", # This is always `sql#operation`.
  "name": "A String", # An identifier that uniquely identifies the operation. You can use this identifier to retrieve the Operations resource that has information about the operation.
  "operationType": "A String", # The type of the operation. Valid values are: * `CREATE` * `DELETE` * `UPDATE` * `RESTART` * `IMPORT` * `EXPORT` * `BACKUP_VOLUME` * `RESTORE_VOLUME` * `CREATE_USER` * `DELETE_USER` * `CREATE_DATABASE` * `DELETE_DATABASE`
  "selfLink": "A String", # The URI of this resource.
  "startTime": "A String", # The time this operation actually started in UTC timezone in [RFC 3339](https://tools.ietf.org/html/rfc3339) format, for example `2012-11-15T16:19:00.094Z`.
  "status": "A String", # The status of an operation.
  "targetId": "A String", # Name of the database instance related to this operation.
  "targetLink": "A String",
  "targetProject": "A String", # The project ID of the target instance related to this operation.
  "user": "A String", # The email address of the user who initiated this operation.
}
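
`delete`, like most methods here, returns a long-running Operation rather than a final result, so callers typically poll until the `status` field settles. A hedged sketch of such a loop, where `fetch` stands in for a call such as `service.operations().get(project=..., operation=op_name).execute()`; that call and the `DONE` status value are assumptions based on the `status` field documented above:

```python
import time

def wait_for_operation(fetch, interval=1.0, timeout=60.0):
    """Poll an Operation until its status reaches DONE, then return it.

    `fetch` is any zero-argument callable returning the Operation dict,
    e.g. a lambda wrapping an operations().get(...).execute() call.
    """
    deadline = time.monotonic() + timeout
    while True:
        op = fetch()
        if op.get("status") == "DONE":
            return op
        if time.monotonic() >= deadline:
            raise TimeoutError(f"operation {op.get('name')} did not finish in time")
        time.sleep(interval)

# Simulated sequence of poll results, for demonstration only.
states = iter([{"status": "PENDING"}, {"status": "RUNNING"}, {"status": "DONE", "name": "op-1"}])
result = wait_for_operation(lambda: next(states), interval=0.01)
print(result["status"])  # DONE
```

The interval and timeout values are arbitrary defaults; tune them to the operation type.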
demote(project, instance, body=None, x__xgafv=None)
Demotes an existing standalone instance to be a Cloud SQL read replica for an external database server.

Args:
  project: string, Required. The project ID of the project that contains the instance. (required)
  instance: string, Required. The name of the Cloud SQL instance. (required)
  body: object, The request body.
    The object takes the form of:

{ # This request is used to demote an existing standalone instance to be a Cloud SQL read replica for an external database server.
  "demoteContext": { # This context is used to demote an existing standalone instance to be a Cloud SQL read replica for an external database server. # Required. This context is used to demote an existing standalone instance to be a Cloud SQL read replica for an external database server.
    "kind": "A String", # This is always `sql#demoteContext`.
    "sourceRepresentativeInstanceName": "A String", # Required. The name of the instance which acts as an on-premises primary instance in the replication setup.
  },
}


  x__xgafv: string, V1 error format.
    Allowed values
      1 - v1 error format
      2 - v2 error format

Returns:
  An object of the form:

    { # An Operation resource. For successful operations that return an Operation resource, only the fields relevant to the operation are populated in the resource.
  "acquireSsrsLeaseContext": { # Acquire SSRS lease context. # The context for acquire SSRS lease operation, if applicable.
    "duration": "A String", # Lease duration needed for the SSRS setup.
    "reportDatabase": "A String", # The report database to be used for the SSRS setup.
    "serviceLogin": "A String", # The username to be used as the service login to connect to the report database for SSRS setup.
    "setupLogin": "A String", # The username to be used as the setup login to connect to the database server for SSRS setup.
  },
  "apiWarning": { # An Admin API warning message. # An Admin API warning message.
    "code": "A String", # Code to uniquely identify the warning type.
    "message": "A String", # The warning message.
    "region": "A String", # The region name for REGION_UNREACHABLE warning.
  },
  "backupContext": { # Backup context. # The context for backup operation, if applicable.
    "backupId": "A String", # The identifier of the backup.
    "kind": "A String", # This is always `sql#backupContext`.
  },
  "endTime": "A String", # The time this operation finished in UTC timezone in [RFC 3339](https://tools.ietf.org/html/rfc3339) format, for example `2012-11-15T16:19:00.094Z`.
  "error": { # Database instance operation errors list wrapper. # If errors occurred during processing of this operation, this field will be populated.
    "errors": [ # The list of errors encountered while processing this operation.
      { # Database instance operation error.
        "code": "A String", # Identifies the specific error that occurred.
        "kind": "A String", # This is always `sql#operationError`.
        "message": "A String", # Additional information about the error encountered.
      },
    ],
    "kind": "A String", # This is always `sql#operationErrors`.
  },
  "exportContext": { # Database instance export context. # The context for export operation, if applicable.
    "bakExportOptions": { # Options for exporting BAK files (SQL Server-only)
      "bakType": "A String", # The type of BAK file that this export produces: `FULL` or `DIFF`. SQL Server only.
      "copyOnly": True or False, # Deprecated: `copy_only` is deprecated. Use `differential_base` instead.
      "differentialBase": True or False, # Whether or not the backup can be used as a differential base. A copy-only backup cannot serve as a differential base.
      "exportLogEndTime": "A String", # Optional. The end timestamp up to which transaction log entries are included in the export operation, in [RFC 3339](https://tools.ietf.org/html/rfc3339) format (for example, `2023-10-01T16:19:00.094`) in UTC. When omitted, all available logs up to the current time are included. Applies only to Cloud SQL for SQL Server.
      "exportLogStartTime": "A String", # Optional. The start timestamp from which transaction log entries are included in the export operation, in [RFC 3339](https://tools.ietf.org/html/rfc3339) format (for example, `2023-10-01T16:19:00.094`) in UTC. When omitted, all available logs from the beginning of the retention period are included. Applies only to Cloud SQL for SQL Server.
      "stripeCount": 42, # Option for specifying how many stripes to use for the export. If blank, and the value of the striped field is true, the number of stripes is automatically chosen.
      "striped": True or False, # Whether or not the export should be striped.
    },
    "csvExportOptions": { # Options for exporting data as CSV. `MySQL` and `PostgreSQL` instances only.
      "escapeCharacter": "A String", # Specifies the character that should appear before a data character that needs to be escaped.
      "fieldsTerminatedBy": "A String", # Specifies the character that separates columns within each row (line) of the file.
      "linesTerminatedBy": "A String", # This is used to separate lines. If a line does not contain all fields, the rest of the columns are set to their default values.
      "quoteCharacter": "A String", # Specifies the quoting character to be used when a data value is quoted.
      "selectQuery": "A String", # The select query used to extract the data.
    },
    "databases": [ # Databases to be exported. `MySQL instances:` If `fileType` is `SQL` and no database is specified, all databases are exported, except for the `mysql` system database. If `fileType` is `CSV`, you can specify one database, either by using this property or by using the `csvExportOptions.selectQuery` property, which takes precedence over this property. `PostgreSQL instances:` You must specify one database to be exported. If `fileType` is `CSV`, this database must match the one specified in the `csvExportOptions.selectQuery` property. `SQL Server instances:` You must specify one database to be exported, and the `fileType` must be `BAK`.
      "A String",
    ],
    "fileType": "A String", # The file type for the specified uri.
    "kind": "A String", # This is always `sql#exportContext`.
    "offload": True or False, # Option for export offload.
    "sqlExportOptions": { # Options for exporting data as SQL statements.
      "mysqlExportOptions": { # Options for exporting from MySQL.
        "masterData": 42, # Option to include SQL statement required to set up replication. If set to `1`, the dump file includes a CHANGE MASTER TO statement with the binary log coordinates, and --set-gtid-purged is set to ON. If set to `2`, the CHANGE MASTER TO statement is written as a SQL comment and has no effect. If set to any value other than `1`, --set-gtid-purged is set to OFF.
      },
      "parallel": True or False, # Optional. Whether or not the export should be parallel.
      "postgresExportOptions": { # Options for exporting from a Cloud SQL for PostgreSQL instance.
        "clean": True or False, # Optional. Use this option to include DROP SQL statements. These statements are used to delete database objects before running the import operation.
        "ifExists": True or False, # Optional. Option to include an IF EXISTS SQL statement with each DROP statement produced by clean.
      },
      "schemaOnly": True or False, # Export only schemas.
      "tables": [ # Tables to export, or that were exported, from the specified database. If you specify tables, specify one and only one database. For PostgreSQL instances, you can specify only one table.
        "A String",
      ],
      "threads": 42, # Optional. The number of threads to use for parallel export.
    },
    "uri": "A String", # The path to the file in Google Cloud Storage where the export will be stored. The URI is in the form `gs://bucketName/fileName`. If the file already exists, the request succeeds, but the operation fails. If `fileType` is `SQL` and the filename ends with .gz, the contents are compressed.
  },
  "importContext": { # Database instance import context. # The context for import operation, if applicable.
    "bakImportOptions": { # Import parameters specific to SQL Server .BAK files
      "bakType": "A String", # Type of the bak content, FULL or DIFF.
      "encryptionOptions": {
        "certPath": "A String", # Path to the Certificate (.cer) in Cloud Storage, in the form `gs://bucketName/fileName`. The instance must have write permissions to the bucket and read access to the file.
        "pvkPassword": "A String", # The password that encrypts the private key.
        "pvkPath": "A String", # Path to the Certificate Private Key (.pvk) in Cloud Storage, in the form `gs://bucketName/fileName`. The instance must have write permissions to the bucket and read access to the file.
      },
      "noRecovery": True or False, # Whether or not the import restores the database with the NORECOVERY option. Applies only to Cloud SQL for SQL Server.
      "recoveryOnly": True or False, # Whether or not the import request only brings the database online without downloading BAK content. Only one of `no_recovery` and `recovery_only` can be true; otherwise, an error is returned. Applies only to Cloud SQL for SQL Server.
      "stopAt": "A String", # Optional. The timestamp when the import should stop. This timestamp is in the [RFC 3339](https://tools.ietf.org/html/rfc3339) format (for example, `2023-10-01T16:19:00.094`). This field is equivalent to the STOPAT keyword and applies to Cloud SQL for SQL Server only.
      "stopAtMark": "A String", # Optional. The marked transaction where the import should stop. This field is equivalent to the STOPATMARK keyword and applies to Cloud SQL for SQL Server only.
      "striped": True or False, # Whether or not the backup set being restored is striped. Applies only to Cloud SQL for SQL Server.
    },
    "csvImportOptions": { # Options for importing data as CSV.
      "columns": [ # The columns to which CSV data is imported. If not specified, all columns of the database table are loaded with CSV data.
        "A String",
      ],
      "escapeCharacter": "A String", # Specifies the character that should appear before a data character that needs to be escaped.
      "fieldsTerminatedBy": "A String", # Specifies the character that separates columns within each row (line) of the file.
      "linesTerminatedBy": "A String", # This is used to separate lines. If a line does not contain all fields, the rest of the columns are set to their default values.
      "quoteCharacter": "A String", # Specifies the quoting character to be used when a data value is quoted.
      "table": "A String", # The table to which CSV data is imported.
    },
    "database": "A String", # The target database for the import. If `fileType` is `SQL`, this field is required only if the import file does not specify a database, and is overridden by any database specification in the import file. If `fileType` is `CSV`, one database must be specified.
    "fileType": "A String", # The file type for the specified uri. * `SQL`: The file contains SQL statements. * `CSV`: The file contains CSV data. * `BAK`: The file contains backup data for a SQL Server instance.
    "importUser": "A String", # The PostgreSQL user for this import operation. PostgreSQL instances only.
    "kind": "A String", # This is always `sql#importContext`.
    "sqlImportOptions": { # Optional. Options for importing data from SQL statements.
      "parallel": True or False, # Optional. Whether or not the import should be parallel.
      "postgresImportOptions": { # Optional. Options for importing from a Cloud SQL for PostgreSQL instance.
        "clean": True or False, # Optional. The --clean flag for the pg_restore utility. This flag applies only if you enabled Cloud SQL to import files in parallel.
        "ifExists": True or False, # Optional. The --if-exists flag for the pg_restore utility. This flag applies only if you enabled Cloud SQL to import files in parallel.
      },
      "threads": 42, # Optional. The number of threads to use for parallel import.
    },
    "uri": "A String", # Path to the import file in Cloud Storage, in the form `gs://bucketName/fileName`. Compressed gzip files (.gz) are supported when `fileType` is `SQL`. The instance must have write permissions to the bucket and read access to the file.
  },
  "insertTime": "A String", # The time this operation was enqueued in UTC timezone in [RFC 3339](https://tools.ietf.org/html/rfc3339) format, for example `2012-11-15T16:19:00.094Z`.
  "kind": "A String", # This is always `sql#operation`.
  "name": "A String", # An identifier that uniquely identifies the operation. You can use this identifier to retrieve the Operations resource that has information about the operation.
  "operationType": "A String", # The type of the operation. Valid values are: * `CREATE` * `DELETE` * `UPDATE` * `RESTART` * `IMPORT` * `EXPORT` * `BACKUP_VOLUME` * `RESTORE_VOLUME` * `CREATE_USER` * `DELETE_USER` * `CREATE_DATABASE` * `DELETE_DATABASE`
  "selfLink": "A String", # The URI of this resource.
  "startTime": "A String", # The time this operation actually started in UTC timezone in [RFC 3339](https://tools.ietf.org/html/rfc3339) format, for example `2012-11-15T16:19:00.094Z`.
  "status": "A String", # The status of an operation.
  "targetId": "A String", # Name of the database instance related to this operation.
  "targetLink": "A String",
  "targetProject": "A String", # The project ID of the target instance related to this operation.
  "user": "A String", # The email address of the user who initiated this operation.
}
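
The `demote` request body above is small enough to assemble directly as a plain dict and pass as the `body=` argument when building the request. A sketch, with a placeholder instance name:

```python
def build_demote_body(source_representative_instance_name):
    """Build the request body for instances.demote, per the schema above."""
    return {
        "demoteContext": {
            "kind": "sql#demoteContext",
            "sourceRepresentativeInstanceName": source_representative_instance_name,
        }
    }

body = build_demote_body("on-prem-primary")  # placeholder instance name
print(body["demoteContext"]["kind"])  # sql#demoteContext
```

With the google-api-python-client, this dict would be passed as `body=body` to `service.instances().demote(project=..., instance=..., body=body)`.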
demoteMaster(project, instance, body=None, x__xgafv=None)
Demotes the stand-alone instance to be a Cloud SQL read replica for an external database server.

Args:
  project: string, ID of the project that contains the instance. (required)
  instance: string, Cloud SQL instance name. (required)
  body: object, The request body.
    The object takes the form of:

{ # Database demote primary instance request.
  "demoteMasterContext": { # Database instance demote primary instance context. # Contains details about the demoteMaster operation.
    "kind": "A String", # This is always `sql#demoteMasterContext`.
    "masterInstanceName": "A String", # The name of the instance which will act as on-premises primary instance in the replication setup.
    "replicaConfiguration": { # Read-replica configuration for connecting to the on-premises primary instance. # Configuration specific to read-replicas replicating from the on-premises primary instance.
      "kind": "A String", # This is always `sql#demoteMasterConfiguration`.
      "mysqlReplicaConfiguration": { # Read-replica configuration specific to MySQL databases. # MySQL specific configuration when replicating from a MySQL on-premises primary instance. Replication configuration information such as the username, password, certificates, and keys are not stored in the instance metadata. The configuration information is used only to set up the replication connection and is stored by MySQL in a file named `master.info` in the data directory.
        "caCertificate": "A String", # PEM representation of the trusted CA's x509 certificate.
        "clientCertificate": "A String", # PEM representation of the replica's x509 certificate.
        "clientKey": "A String", # PEM representation of the replica's private key. The corresponding public key is encoded in the client's certificate. The format of the replica's private key can be either PKCS #1 or PKCS #8.
        "kind": "A String", # This is always `sql#demoteMasterMysqlReplicaConfiguration`.
        "password": "A String", # The password for the replication connection.
        "username": "A String", # The username for the replication connection.
      },
    },
    "skipReplicationSetup": True or False, # Flag to skip replication setup on the instance.
    "verifyGtidConsistency": True or False, # Verify the GTID consistency for demote operation. Default value: `True`. Setting this flag to `false` enables you to bypass the GTID consistency check between on-premises primary instance and Cloud SQL instance during the demotion operation but also exposes you to the risk of future replication failures. Change the value only if you know the reason for the GTID divergence and are confident that doing so will not cause any replication issues.
  },
}

  x__xgafv: string, V1 error format.
    Allowed values
      1 - v1 error format
      2 - v2 error format

Returns:
  An object of the form:

    { # An Operation resource. For successful operations that return an Operation resource, only the fields relevant to the operation are populated in the resource.
  "acquireSsrsLeaseContext": { # Acquire SSRS lease context. # The context for acquire SSRS lease operation, if applicable.
    "duration": "A String", # Lease duration needed for the SSRS setup.
    "reportDatabase": "A String", # The report database to be used for the SSRS setup.
    "serviceLogin": "A String", # The username to be used as the service login to connect to the report database for SSRS setup.
    "setupLogin": "A String", # The username to be used as the setup login to connect to the database server for SSRS setup.
  },
  "apiWarning": { # An Admin API warning message. # An Admin API warning message.
    "code": "A String", # Code to uniquely identify the warning type.
    "message": "A String", # The warning message.
    "region": "A String", # The region name for REGION_UNREACHABLE warning.
  },
  "backupContext": { # Backup context. # The context for backup operation, if applicable.
    "backupId": "A String", # The identifier of the backup.
    "kind": "A String", # This is always `sql#backupContext`.
  },
  "endTime": "A String", # The time this operation finished in UTC timezone in [RFC 3339](https://tools.ietf.org/html/rfc3339) format, for example `2012-11-15T16:19:00.094Z`.
  "error": { # Database instance operation errors list wrapper. # If errors occurred during processing of this operation, this field will be populated.
    "errors": [ # The list of errors encountered while processing this operation.
      { # Database instance operation error.
        "code": "A String", # Identifies the specific error that occurred.
        "kind": "A String", # This is always `sql#operationError`.
        "message": "A String", # Additional information about the error encountered.
      },
    ],
    "kind": "A String", # This is always `sql#operationErrors`.
  },
  "exportContext": { # Database instance export context. # The context for export operation, if applicable.
    "bakExportOptions": { # Options for exporting BAK files (SQL Server-only)
      "bakType": "A String", # Type of this bak file will be export, FULL or DIFF, SQL Server only
      "copyOnly": True or False, # Deprecated: copy_only is deprecated. Use differential_base instead
      "differentialBase": True or False, # Whether or not the backup can be used as a differential base copy_only backup can not be served as differential base
      "exportLogEndTime": "A String", # Optional. The end timestamp when transaction log will be included in the export operation. [RFC 3339](https://tools.ietf.org/html/rfc3339) format (for example, `2023-10-01T16:19:00.094`) in UTC. When omitted, all available logs until current time will be included. Only applied to Cloud SQL for SQL Server.
      "exportLogStartTime": "A String", # Optional. The begin timestamp when transaction log will be included in the export operation. [RFC 3339](https://tools.ietf.org/html/rfc3339) format (for example, `2023-10-01T16:19:00.094`) in UTC. When omitted, all available logs from the beginning of retention period will be included. Only applied to Cloud SQL for SQL Server.
      "stripeCount": 42, # Option for specifying how many stripes to use for the export. If blank, and the value of the striped field is true, the number of stripes is automatically chosen.
      "striped": True or False, # Whether or not the export should be striped.
    },
    "csvExportOptions": { # Options for exporting data as CSV. `MySQL` and `PostgreSQL` instances only.
      "escapeCharacter": "A String", # Specifies the character that should appear before a data character that needs to be escaped.
      "fieldsTerminatedBy": "A String", # Specifies the character that separates columns within each row (line) of the file.
      "linesTerminatedBy": "A String", # This is used to separate lines. If a line does not contain all fields, the rest of the columns are set to their default values.
      "quoteCharacter": "A String", # Specifies the quoting character to be used when a data value is quoted.
      "selectQuery": "A String", # The select query used to extract the data.
    },
    "databases": [ # Databases to be exported. `MySQL instances:` If `fileType` is `SQL` and no database is specified, all databases are exported, except for the `mysql` system database. If `fileType` is `CSV`, you can specify one database, either by using this property or by using the `csvExportOptions.selectQuery` property, which takes precedence over this property. `PostgreSQL instances:` You must specify one database to be exported. If `fileType` is `CSV`, this database must match the one specified in the `csvExportOptions.selectQuery` property. `SQL Server instances:` You must specify one database to be exported, and the `fileType` must be `BAK`.
      "A String",
    ],
    "fileType": "A String", # The file type for the specified uri.
    "kind": "A String", # This is always `sql#exportContext`.
    "offload": True or False, # Option for export offload.
    "sqlExportOptions": { # Options for exporting data as SQL statements.
      "mysqlExportOptions": { # Options for exporting from MySQL.
        "masterData": 42, # Option to include SQL statement required to set up replication. If set to `1`, the dump file includes a CHANGE MASTER TO statement with the binary log coordinates, and --set-gtid-purged is set to ON. If set to `2`, the CHANGE MASTER TO statement is written as a SQL comment and has no effect. If set to any value other than `1`, --set-gtid-purged is set to OFF.
      },
      "parallel": True or False, # Optional. Whether or not the export should be parallel.
      "postgresExportOptions": { # Options for exporting from a Cloud SQL for PostgreSQL instance.
        "clean": True or False, # Optional. Use this option to include DROP SQL statements. These statements are used to delete database objects before running the import operation.
        "ifExists": True or False, # Optional. Option to include an IF EXISTS SQL statement with each DROP statement produced by clean.
      },
      "schemaOnly": True or False, # Export only schemas.
      "tables": [ # Tables to export, or that were exported, from the specified database. If you specify tables, specify one and only one database. For PostgreSQL instances, you can specify only one table.
        "A String",
      ],
      "threads": 42, # Optional. The number of threads to use for parallel export.
    },
    "uri": "A String", # The path to the file in Google Cloud Storage where the export will be stored. The URI is in the form `gs://bucketName/fileName`. If the file already exists, the request succeeds, but the operation fails. If `fileType` is `SQL` and the filename ends with .gz, the contents are compressed.
  },
  "importContext": { # Database instance import context. # The context for import operation, if applicable.
    "bakImportOptions": { # Import parameters specific to SQL Server .BAK files
      "bakType": "A String", # Type of the bak content, FULL or DIFF.
      "encryptionOptions": {
        "certPath": "A String", # Path to the Certificate (.cer) in Cloud Storage, in the form `gs://bucketName/fileName`. The instance must have write permissions to the bucket and read access to the file.
        "pvkPassword": "A String", # Password that encrypts the private key
        "pvkPath": "A String", # Path to the Certificate Private Key (.pvk) in Cloud Storage, in the form `gs://bucketName/fileName`. The instance must have write permissions to the bucket and read access to the file.
      },
      "noRecovery": True or False, # Whether or not the backup importing will restore database with NORECOVERY option Applies only to Cloud SQL for SQL Server.
      "recoveryOnly": True or False, # Whether or not the backup importing request will just bring database online without downloading Bak content only one of "no_recovery" and "recovery_only" can be true otherwise error will return. Applies only to Cloud SQL for SQL Server.
      "stopAt": "A String", # Optional. The timestamp when the import should stop. This timestamp is in the [RFC 3339](https://tools.ietf.org/html/rfc3339) format (for example, `2023-10-01T16:19:00.094`). This field is equivalent to the STOPAT keyword and applies to Cloud SQL for SQL Server only.
      "stopAtMark": "A String", # Optional. The marked transaction where the import should stop. This field is equivalent to the STOPATMARK keyword and applies to Cloud SQL for SQL Server only.
      "striped": True or False, # Whether or not the backup set being restored is striped. Applies only to Cloud SQL for SQL Server.
    },
    "csvImportOptions": { # Options for importing data as CSV.
      "columns": [ # The columns to which CSV data is imported. If not specified, all columns of the database table are loaded with CSV data.
        "A String",
      ],
      "escapeCharacter": "A String", # Specifies the character that should appear before a data character that needs to be escaped.
      "fieldsTerminatedBy": "A String", # Specifies the character that separates columns within each row (line) of the file.
      "linesTerminatedBy": "A String", # This is used to separate lines. If a line does not contain all fields, the rest of the columns are set to their default values.
      "quoteCharacter": "A String", # Specifies the quoting character to be used when a data value is quoted.
      "table": "A String", # The table to which CSV data is imported.
    },
    "database": "A String", # The target database for the import. If `fileType` is `SQL`, this field is required only if the import file does not specify a database, and is overridden by any database specification in the import file. If `fileType` is `CSV`, one database must be specified.
    "fileType": "A String", # The file type for the specified uri. * `SQL`: The file contains SQL statements. * `CSV`: The file contains CSV data. * `BAK`: The file contains backup data for a SQL Server instance.
    "importUser": "A String", # The PostgreSQL user for this import operation. PostgreSQL instances only.
    "kind": "A String", # This is always `sql#importContext`.
    "sqlImportOptions": { # Optional. Options for importing data from SQL statements.
      "parallel": True or False, # Optional. Whether or not the import should be parallel.
      "postgresImportOptions": { # Optional. Options for importing from a Cloud SQL for PostgreSQL instance.
        "clean": True or False, # Optional. The --clean flag for the pg_restore utility. This flag applies only if you enabled Cloud SQL to import files in parallel.
        "ifExists": True or False, # Optional. The --if-exists flag for the pg_restore utility. This flag applies only if you enabled Cloud SQL to import files in parallel.
      },
      "threads": 42, # Optional. The number of threads to use for parallel import.
    },
    "uri": "A String", # Path to the import file in Cloud Storage, in the form `gs://bucketName/fileName`. Compressed gzip files (.gz) are supported when `fileType` is `SQL`. The instance must have write permissions to the bucket and read access to the file.
  },
  "insertTime": "A String", # The time this operation was enqueued in UTC timezone in [RFC 3339](https://tools.ietf.org/html/rfc3339) format, for example `2012-11-15T16:19:00.094Z`.
  "kind": "A String", # This is always `sql#operation`.
  "name": "A String", # An identifier that uniquely identifies the operation. You can use this identifier to retrieve the Operations resource that has information about the operation.
  "operationType": "A String", # The type of the operation. Valid values are: * `CREATE` * `DELETE` * `UPDATE` * `RESTART` * `IMPORT` * `EXPORT` * `BACKUP_VOLUME` * `RESTORE_VOLUME` * `CREATE_USER` * `DELETE_USER` * `CREATE_DATABASE` * `DELETE_DATABASE`
  "selfLink": "A String", # The URI of this resource.
  "startTime": "A String", # The time this operation actually started in UTC timezone in [RFC 3339](https://tools.ietf.org/html/rfc3339) format, for example `2012-11-15T16:19:00.094Z`.
  "status": "A String", # The status of an operation.
  "targetId": "A String", # Name of the database instance related to this operation.
  "targetLink": "A String",
  "targetProject": "A String", # The project ID of the target instance related to this operation.
  "user": "A String", # The email address of the user who initiated this operation.
}
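
The methods on this resource return an Operation resource like the one above. A minimal sketch of polling such an operation to completion via the `operations().get` method (assuming `service` is a client built with `googleapiclient.discovery.build("sqladmin", "v1")`):

```python
import time


def wait_for_operation(service, project, operation_name, poll_seconds=5):
    """Poll an Operation until its status is DONE; raise if it failed.

    `service` is assumed to be a Cloud SQL Admin API client, e.g.
    built with googleapiclient.discovery.build("sqladmin", "v1").
    """
    while True:
        op = service.operations().get(
            project=project, operation=operation_name
        ).execute()
        if op.get("status") == "DONE":
            # Failed operations carry an `error.errors` list (see schema above).
            if "error" in op:
                raise RuntimeError(op["error"]["errors"])
            return op
        time.sleep(poll_seconds)
```

The helper duck-types against the client, so the same loop works for any of the long-running operations returned by this resource.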
export(project, instance, body=None, x__xgafv=None)
Exports data from a Cloud SQL instance to a Cloud Storage bucket as a SQL dump or CSV file.

Args:
  project: string, Project ID of the project that contains the instance to be exported. (required)
  instance: string, The Cloud SQL instance ID. This doesn't include the project ID. (required)
  body: object, The request body.
    The object takes the form of:

{ # Database instance export request.
  "exportContext": { # Database instance export context. # Contains details about the export operation.
    "bakExportOptions": { # Options for exporting BAK files (SQL Server-only)
      "bakType": "A String", # Type of this bak file will be export, FULL or DIFF, SQL Server only
      "copyOnly": True or False, # Deprecated: copy_only is deprecated. Use differential_base instead
      "differentialBase": True or False, # Whether or not the backup can be used as a differential base copy_only backup can not be served as differential base
      "exportLogEndTime": "A String", # Optional. The end timestamp when transaction log will be included in the export operation. [RFC 3339](https://tools.ietf.org/html/rfc3339) format (for example, `2023-10-01T16:19:00.094`) in UTC. When omitted, all available logs until current time will be included. Only applied to Cloud SQL for SQL Server.
      "exportLogStartTime": "A String", # Optional. The begin timestamp when transaction log will be included in the export operation. [RFC 3339](https://tools.ietf.org/html/rfc3339) format (for example, `2023-10-01T16:19:00.094`) in UTC. When omitted, all available logs from the beginning of retention period will be included. Only applied to Cloud SQL for SQL Server.
      "stripeCount": 42, # Option for specifying how many stripes to use for the export. If blank, and the value of the striped field is true, the number of stripes is automatically chosen.
      "striped": True or False, # Whether or not the export should be striped.
    },
    "csvExportOptions": { # Options for exporting data as CSV. `MySQL` and `PostgreSQL` instances only.
      "escapeCharacter": "A String", # Specifies the character that should appear before a data character that needs to be escaped.
      "fieldsTerminatedBy": "A String", # Specifies the character that separates columns within each row (line) of the file.
      "linesTerminatedBy": "A String", # This is used to separate lines. If a line does not contain all fields, the rest of the columns are set to their default values.
      "quoteCharacter": "A String", # Specifies the quoting character to be used when a data value is quoted.
      "selectQuery": "A String", # The select query used to extract the data.
    },
    "databases": [ # Databases to be exported. `MySQL instances:` If `fileType` is `SQL` and no database is specified, all databases are exported, except for the `mysql` system database. If `fileType` is `CSV`, you can specify one database, either by using this property or by using the `csvExportOptions.selectQuery` property, which takes precedence over this property. `PostgreSQL instances:` You must specify one database to be exported. If `fileType` is `CSV`, this database must match the one specified in the `csvExportOptions.selectQuery` property. `SQL Server instances:` You must specify one database to be exported, and the `fileType` must be `BAK`.
      "A String",
    ],
    "fileType": "A String", # The file type for the specified uri.
    "kind": "A String", # This is always `sql#exportContext`.
    "offload": True or False, # Option for export offload.
    "sqlExportOptions": { # Options for exporting data as SQL statements.
      "mysqlExportOptions": { # Options for exporting from MySQL.
        "masterData": 42, # Option to include SQL statement required to set up replication. If set to `1`, the dump file includes a CHANGE MASTER TO statement with the binary log coordinates, and --set-gtid-purged is set to ON. If set to `2`, the CHANGE MASTER TO statement is written as a SQL comment and has no effect. If set to any value other than `1`, --set-gtid-purged is set to OFF.
      },
      "parallel": True or False, # Optional. Whether or not the export should be parallel.
      "postgresExportOptions": { # Options for exporting from a Cloud SQL for PostgreSQL instance.
        "clean": True or False, # Optional. Use this option to include DROP SQL statements. These statements are used to delete database objects before running the import operation.
        "ifExists": True or False, # Optional. Option to include an IF EXISTS SQL statement with each DROP statement produced by clean.
      },
      "schemaOnly": True or False, # Export only schemas.
      "tables": [ # Tables to export, or that were exported, from the specified database. If you specify tables, specify one and only one database. For PostgreSQL instances, you can specify only one table.
        "A String",
      ],
      "threads": 42, # Optional. The number of threads to use for parallel export.
    },
    "uri": "A String", # The path to the file in Google Cloud Storage where the export will be stored. The URI is in the form `gs://bucketName/fileName`. If the file already exists, the request succeeds, but the operation fails. If `fileType` is `SQL` and the filename ends with .gz, the contents are compressed.
  },
}

  x__xgafv: string, V1 error format.
    Allowed values
      1 - v1 error format
      2 - v2 error format

Returns:
  An object of the form:

    { # An Operation resource. For successful operations that return an Operation resource, only the fields relevant to the operation are populated in the resource.
  "acquireSsrsLeaseContext": { # Acquire SSRS lease context. # The context for acquire SSRS lease operation, if applicable.
    "duration": "A String", # Lease duration needed for the SSRS setup.
    "reportDatabase": "A String", # The report database to be used for the SSRS setup.
    "serviceLogin": "A String", # The username to be used as the service login to connect to the report database for SSRS setup.
    "setupLogin": "A String", # The username to be used as the setup login to connect to the database server for SSRS setup.
  },
  "apiWarning": { # An Admin API warning message. # An Admin API warning message.
    "code": "A String", # Code to uniquely identify the warning type.
    "message": "A String", # The warning message.
    "region": "A String", # The region name for REGION_UNREACHABLE warning.
  },
  "backupContext": { # Backup context. # The context for backup operation, if applicable.
    "backupId": "A String", # The identifier of the backup.
    "kind": "A String", # This is always `sql#backupContext`.
  },
  "endTime": "A String", # The time this operation finished in UTC timezone in [RFC 3339](https://tools.ietf.org/html/rfc3339) format, for example `2012-11-15T16:19:00.094Z`.
  "error": { # Database instance operation errors list wrapper. # If errors occurred during processing of this operation, this field will be populated.
    "errors": [ # The list of errors encountered while processing this operation.
      { # Database instance operation error.
        "code": "A String", # Identifies the specific error that occurred.
        "kind": "A String", # This is always `sql#operationError`.
        "message": "A String", # Additional information about the error encountered.
      },
    ],
    "kind": "A String", # This is always `sql#operationErrors`.
  },
  "exportContext": { # Database instance export context. # The context for export operation, if applicable.
    "bakExportOptions": { # Options for exporting BAK files (SQL Server-only)
      "bakType": "A String", # Type of this bak file will be export, FULL or DIFF, SQL Server only
      "copyOnly": True or False, # Deprecated: copy_only is deprecated. Use differential_base instead
      "differentialBase": True or False, # Whether or not the backup can be used as a differential base copy_only backup can not be served as differential base
      "exportLogEndTime": "A String", # Optional. The end timestamp when transaction log will be included in the export operation. [RFC 3339](https://tools.ietf.org/html/rfc3339) format (for example, `2023-10-01T16:19:00.094`) in UTC. When omitted, all available logs until current time will be included. Only applied to Cloud SQL for SQL Server.
      "exportLogStartTime": "A String", # Optional. The begin timestamp when transaction log will be included in the export operation. [RFC 3339](https://tools.ietf.org/html/rfc3339) format (for example, `2023-10-01T16:19:00.094`) in UTC. When omitted, all available logs from the beginning of retention period will be included. Only applied to Cloud SQL for SQL Server.
      "stripeCount": 42, # Option for specifying how many stripes to use for the export. If blank, and the value of the striped field is true, the number of stripes is automatically chosen.
      "striped": True or False, # Whether or not the export should be striped.
    },
    "csvExportOptions": { # Options for exporting data as CSV. `MySQL` and `PostgreSQL` instances only.
      "escapeCharacter": "A String", # Specifies the character that should appear before a data character that needs to be escaped.
      "fieldsTerminatedBy": "A String", # Specifies the character that separates columns within each row (line) of the file.
      "linesTerminatedBy": "A String", # This is used to separate lines. If a line does not contain all fields, the rest of the columns are set to their default values.
      "quoteCharacter": "A String", # Specifies the quoting character to be used when a data value is quoted.
      "selectQuery": "A String", # The select query used to extract the data.
    },
    "databases": [ # Databases to be exported. `MySQL instances:` If `fileType` is `SQL` and no database is specified, all databases are exported, except for the `mysql` system database. If `fileType` is `CSV`, you can specify one database, either by using this property or by using the `csvExportOptions.selectQuery` property, which takes precedence over this property. `PostgreSQL instances:` You must specify one database to be exported. If `fileType` is `CSV`, this database must match the one specified in the `csvExportOptions.selectQuery` property. `SQL Server instances:` You must specify one database to be exported, and the `fileType` must be `BAK`.
      "A String",
    ],
    "fileType": "A String", # The file type for the specified uri.
    "kind": "A String", # This is always `sql#exportContext`.
    "offload": True or False, # Option for export offload.
    "sqlExportOptions": { # Options for exporting data as SQL statements.
      "mysqlExportOptions": { # Options for exporting from MySQL.
        "masterData": 42, # Option to include SQL statement required to set up replication. If set to `1`, the dump file includes a CHANGE MASTER TO statement with the binary log coordinates, and --set-gtid-purged is set to ON. If set to `2`, the CHANGE MASTER TO statement is written as a SQL comment and has no effect. If set to any value other than `1`, --set-gtid-purged is set to OFF.
      },
      "parallel": True or False, # Optional. Whether or not the export should be parallel.
      "postgresExportOptions": { # Options for exporting from a Cloud SQL for PostgreSQL instance.
        "clean": True or False, # Optional. Use this option to include DROP SQL statements. These statements are used to delete database objects before running the import operation.
        "ifExists": True or False, # Optional. Option to include an IF EXISTS SQL statement with each DROP statement produced by clean.
      },
      "schemaOnly": True or False, # Export only schemas.
      "tables": [ # Tables to export, or that were exported, from the specified database. If you specify tables, specify one and only one database. For PostgreSQL instances, you can specify only one table.
        "A String",
      ],
      "threads": 42, # Optional. The number of threads to use for parallel export.
    },
    "uri": "A String", # The path to the file in Google Cloud Storage where the export will be stored. The URI is in the form `gs://bucketName/fileName`. If the file already exists, the request succeeds, but the operation fails. If `fileType` is `SQL` and the filename ends with .gz, the contents are compressed.
  },
  "importContext": { # Database instance import context. # The context for import operation, if applicable.
    "bakImportOptions": { # Import parameters specific to SQL Server .BAK files
      "bakType": "A String", # Type of the bak content, FULL or DIFF.
      "encryptionOptions": {
        "certPath": "A String", # Path to the Certificate (.cer) in Cloud Storage, in the form `gs://bucketName/fileName`. The instance must have write permissions to the bucket and read access to the file.
        "pvkPassword": "A String", # Password that encrypts the private key
        "pvkPath": "A String", # Path to the Certificate Private Key (.pvk) in Cloud Storage, in the form `gs://bucketName/fileName`. The instance must have write permissions to the bucket and read access to the file.
      },
      "noRecovery": True or False, # Whether or not the backup importing will restore database with NORECOVERY option Applies only to Cloud SQL for SQL Server.
      "recoveryOnly": True or False, # Whether or not the backup importing request will just bring database online without downloading Bak content only one of "no_recovery" and "recovery_only" can be true otherwise error will return. Applies only to Cloud SQL for SQL Server.
      "stopAt": "A String", # Optional. The timestamp when the import should stop. This timestamp is in the [RFC 3339](https://tools.ietf.org/html/rfc3339) format (for example, `2023-10-01T16:19:00.094`). This field is equivalent to the STOPAT keyword and applies to Cloud SQL for SQL Server only.
      "stopAtMark": "A String", # Optional. The marked transaction where the import should stop. This field is equivalent to the STOPATMARK keyword and applies to Cloud SQL for SQL Server only.
      "striped": True or False, # Whether or not the backup set being restored is striped. Applies only to Cloud SQL for SQL Server.
    },
    "csvImportOptions": { # Options for importing data as CSV.
      "columns": [ # The columns to which CSV data is imported. If not specified, all columns of the database table are loaded with CSV data.
        "A String",
      ],
      "escapeCharacter": "A String", # Specifies the character that should appear before a data character that needs to be escaped.
      "fieldsTerminatedBy": "A String", # Specifies the character that separates columns within each row (line) of the file.
      "linesTerminatedBy": "A String", # This is used to separate lines. If a line does not contain all fields, the rest of the columns are set to their default values.
      "quoteCharacter": "A String", # Specifies the quoting character to be used when a data value is quoted.
      "table": "A String", # The table to which CSV data is imported.
    },
    "database": "A String", # The target database for the import. If `fileType` is `SQL`, this field is required only if the import file does not specify a database, and is overridden by any database specification in the import file. If `fileType` is `CSV`, one database must be specified.
    "fileType": "A String", # The file type for the specified uri. * `SQL`: The file contains SQL statements. * `CSV`: The file contains CSV data. * `BAK`: The file contains backup data for a SQL Server instance.
    "importUser": "A String", # The PostgreSQL user for this import operation. PostgreSQL instances only.
    "kind": "A String", # This is always `sql#importContext`.
    "sqlImportOptions": { # Optional. Options for importing data from SQL statements.
      "parallel": True or False, # Optional. Whether or not the import should be parallel.
      "postgresImportOptions": { # Optional. Options for importing from a Cloud SQL for PostgreSQL instance.
        "clean": True or False, # Optional. The --clean flag for the pg_restore utility. This flag applies only if you enabled Cloud SQL to import files in parallel.
        "ifExists": True or False, # Optional. The --if-exists flag for the pg_restore utility. This flag applies only if you enabled Cloud SQL to import files in parallel.
      },
      "threads": 42, # Optional. The number of threads to use for parallel import.
    },
    "uri": "A String", # Path to the import file in Cloud Storage, in the form `gs://bucketName/fileName`. Compressed gzip files (.gz) are supported when `fileType` is `SQL`. The instance must have write permissions to the bucket and read access to the file.
  },
  "insertTime": "A String", # The time this operation was enqueued in UTC timezone in [RFC 3339](https://tools.ietf.org/html/rfc3339) format, for example `2012-11-15T16:19:00.094Z`.
  "kind": "A String", # This is always `sql#operation`.
  "name": "A String", # An identifier that uniquely identifies the operation. You can use this identifier to retrieve the Operations resource that has information about the operation.
  "operationType": "A String", # The type of the operation. Valid values are: * `CREATE` * `DELETE` * `UPDATE` * `RESTART` * `IMPORT` * `EXPORT` * `BACKUP_VOLUME` * `RESTORE_VOLUME` * `CREATE_USER` * `DELETE_USER` * `CREATE_DATABASE` * `DELETE_DATABASE`
  "selfLink": "A String", # The URI of this resource.
  "startTime": "A String", # The time this operation actually started in UTC timezone in [RFC 3339](https://tools.ietf.org/html/rfc3339) format, for example `2012-11-15T16:19:00.094Z`.
  "status": "A String", # The status of an operation.
  "targetId": "A String", # Name of the database instance related to this operation.
  "targetLink": "A String",
  "targetProject": "A String", # The project ID of the target instance related to this operation.
  "user": "A String", # The email address of the user who initiated this operation.
}
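
Putting the export schema above to work: a hedged sketch of starting a SQL-dump export. The project, instance, and bucket names are placeholders, and `service` is assumed to be a client built with `googleapiclient.discovery.build("sqladmin", "v1")`:

```python
def export_sql_dump(service, project, instance, gcs_uri, database):
    """Start an export of one database to Cloud Storage as a SQL dump.

    Returns the Operation resource; poll it for completion. A `.gz`
    suffix on `gcs_uri` makes Cloud SQL compress the dump.
    """
    body = {
        "exportContext": {
            "fileType": "SQL",
            "uri": gcs_uri,  # e.g. gs://example-bucket/dump.sql.gz (placeholder)
            "databases": [database],
        }
    }
    return service.instances().export(
        project=project, instance=instance, body=body
    ).execute()
```

Per the schema, if the target file already exists the request itself succeeds but the resulting operation fails, so checking the returned Operation's status matters.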
failover(project, instance, body=None, x__xgafv=None)
Initiates a manual failover of a high availability (HA) primary instance to a standby instance, which becomes the primary instance. Users are then rerouted to the new primary. For more information, see the [Overview of high availability](https://cloud.google.com/sql/docs/mysql/high-availability) page in the Cloud SQL documentation. If using Legacy HA (MySQL only), this causes the instance to failover to its failover replica instance.

Args:
  project: string, ID of the project that contains the read replica. (required)
  instance: string, Cloud SQL instance ID. This does not include the project ID. (required)
  body: object, The request body.
    The object takes the form of:

{ # Instance failover request.
  "failoverContext": { # Database instance failover context. # Failover Context.
    "kind": "A String", # This is always `sql#failoverContext`.
    "settingsVersion": "A String", # The current settings version of this instance. Request will be rejected if this version doesn't match the current settings version.
  },
}
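
The `settingsVersion` in the request body above must match the instance's current settings version or the request is rejected. A sketch of fetching it with `instances().get` and then requesting the failover (project and instance names are placeholders; `service` is assumed to be a built `sqladmin` client):

```python
def failover_instance(service, project, instance):
    """Trigger a manual HA failover, supplying the current settingsVersion."""
    # Read the instance first: the failover request is rejected if
    # settingsVersion doesn't match the instance's current settings.
    inst = service.instances().get(project=project, instance=instance).execute()
    body = {
        "failoverContext": {
            "settingsVersion": inst["settings"]["settingsVersion"],
        }
    }
    return service.instances().failover(
        project=project, instance=instance, body=body
    ).execute()
```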

  x__xgafv: string, V1 error format.
    Allowed values
      1 - v1 error format
      2 - v2 error format

Returns:
  An object of the form:

    { # An Operation resource. For successful operations that return an Operation resource, only the fields relevant to the operation are populated in the resource.
  "acquireSsrsLeaseContext": { # Acquire SSRS lease context. # The context for acquire SSRS lease operation, if applicable.
    "duration": "A String", # Lease duration needed for the SSRS setup.
    "reportDatabase": "A String", # The report database to be used for the SSRS setup.
    "serviceLogin": "A String", # The username to be used as the service login to connect to the report database for SSRS setup.
    "setupLogin": "A String", # The username to be used as the setup login to connect to the database server for SSRS setup.
  },
  "apiWarning": { # An Admin API warning message. # An Admin API warning message.
    "code": "A String", # Code to uniquely identify the warning type.
    "message": "A String", # The warning message.
    "region": "A String", # The region name for REGION_UNREACHABLE warning.
  },
  "backupContext": { # Backup context. # The context for backup operation, if applicable.
    "backupId": "A String", # The identifier of the backup.
    "kind": "A String", # This is always `sql#backupContext`.
  },
  "endTime": "A String", # The time this operation finished in UTC timezone in [RFC 3339](https://tools.ietf.org/html/rfc3339) format, for example `2012-11-15T16:19:00.094Z`.
  "error": { # Database instance operation errors list wrapper. # If errors occurred during processing of this operation, this field will be populated.
    "errors": [ # The list of errors encountered while processing this operation.
      { # Database instance operation error.
        "code": "A String", # Identifies the specific error that occurred.
        "kind": "A String", # This is always `sql#operationError`.
        "message": "A String", # Additional information about the error encountered.
      },
    ],
    "kind": "A String", # This is always `sql#operationErrors`.
  },
  "exportContext": { # Database instance export context. # The context for export operation, if applicable.
    "bakExportOptions": { # Options for exporting BAK files (SQL Server-only)
      "bakType": "A String", # Type of the BAK file to export: FULL or DIFF. SQL Server only.
      "copyOnly": True or False, # Deprecated: copy_only is deprecated. Use differential_base instead.
      "differentialBase": True or False, # Whether or not the backup can be used as a differential base. A copy-only backup cannot serve as a differential base.
      "exportLogEndTime": "A String", # Optional. The end timestamp when transaction log will be included in the export operation. [RFC 3339](https://tools.ietf.org/html/rfc3339) format (for example, `2023-10-01T16:19:00.094`) in UTC. When omitted, all available logs until current time will be included. Only applied to Cloud SQL for SQL Server.
      "exportLogStartTime": "A String", # Optional. The begin timestamp when transaction log will be included in the export operation. [RFC 3339](https://tools.ietf.org/html/rfc3339) format (for example, `2023-10-01T16:19:00.094`) in UTC. When omitted, all available logs from the beginning of retention period will be included. Only applied to Cloud SQL for SQL Server.
      "stripeCount": 42, # Option for specifying how many stripes to use for the export. If blank, and the value of the striped field is true, the number of stripes is automatically chosen.
      "striped": True or False, # Whether or not the export should be striped.
    },
    "csvExportOptions": { # Options for exporting data as CSV. `MySQL` and `PostgreSQL` instances only.
      "escapeCharacter": "A String", # Specifies the character that should appear before a data character that needs to be escaped.
      "fieldsTerminatedBy": "A String", # Specifies the character that separates columns within each row (line) of the file.
      "linesTerminatedBy": "A String", # This is used to separate lines. If a line does not contain all fields, the rest of the columns are set to their default values.
      "quoteCharacter": "A String", # Specifies the quoting character to be used when a data value is quoted.
      "selectQuery": "A String", # The select query used to extract the data.
    },
    "databases": [ # Databases to be exported. `MySQL instances:` If `fileType` is `SQL` and no database is specified, all databases are exported, except for the `mysql` system database. If `fileType` is `CSV`, you can specify one database, either by using this property or by using the `csvExportOptions.selectQuery` property, which takes precedence over this property. `PostgreSQL instances:` You must specify one database to be exported. If `fileType` is `CSV`, this database must match the one specified in the `csvExportOptions.selectQuery` property. `SQL Server instances:` You must specify one database to be exported, and the `fileType` must be `BAK`.
      "A String",
    ],
    "fileType": "A String", # The file type for the specified uri.
    "kind": "A String", # This is always `sql#exportContext`.
    "offload": True or False, # Option for export offload.
    "sqlExportOptions": { # Options for exporting data as SQL statements.
      "mysqlExportOptions": { # Options for exporting from MySQL.
        "masterData": 42, # Option to include SQL statement required to set up replication. If set to `1`, the dump file includes a CHANGE MASTER TO statement with the binary log coordinates, and --set-gtid-purged is set to ON. If set to `2`, the CHANGE MASTER TO statement is written as a SQL comment and has no effect. If set to any value other than `1`, --set-gtid-purged is set to OFF.
      },
      "parallel": True or False, # Optional. Whether or not the export should be parallel.
      "postgresExportOptions": { # Options for exporting from a Cloud SQL for PostgreSQL instance.
        "clean": True or False, # Optional. Use this option to include DROP SQL statements. These statements are used to delete database objects before running the import operation.
        "ifExists": True or False, # Optional. Option to include an IF EXISTS SQL statement with each DROP statement produced by clean.
      },
      "schemaOnly": True or False, # Export only schemas.
      "tables": [ # Tables to export, or that were exported, from the specified database. If you specify tables, specify one and only one database. For PostgreSQL instances, you can specify only one table.
        "A String",
      ],
      "threads": 42, # Optional. The number of threads to use for parallel export.
    },
    "uri": "A String", # The path to the file in Google Cloud Storage where the export will be stored. The URI is in the form `gs://bucketName/fileName`. If the file already exists, the request succeeds, but the operation fails. If `fileType` is `SQL` and the filename ends with .gz, the contents are compressed.
  },
  "importContext": { # Database instance import context. # The context for import operation, if applicable.
    "bakImportOptions": { # Import parameters specific to SQL Server .BAK files
      "bakType": "A String", # Type of the bak content, FULL or DIFF.
      "encryptionOptions": {
        "certPath": "A String", # Path to the Certificate (.cer) in Cloud Storage, in the form `gs://bucketName/fileName`. The instance must have write permissions to the bucket and read access to the file.
        "pvkPassword": "A String", # Password that encrypts the private key
        "pvkPath": "A String", # Path to the Certificate Private Key (.pvk) in Cloud Storage, in the form `gs://bucketName/fileName`. The instance must have write permissions to the bucket and read access to the file.
      },
      "noRecovery": True or False, # Whether or not the import restores the database with the NORECOVERY option. Applies only to Cloud SQL for SQL Server.
      "recoveryOnly": True or False, # Whether or not the import request only brings the database online, without downloading BAK content. Only one of `no_recovery` and `recovery_only` can be true; otherwise, an error is returned. Applies only to Cloud SQL for SQL Server.
      "stopAt": "A String", # Optional. The timestamp when the import should stop. This timestamp is in the [RFC 3339](https://tools.ietf.org/html/rfc3339) format (for example, `2023-10-01T16:19:00.094`). This field is equivalent to the STOPAT keyword and applies to Cloud SQL for SQL Server only.
      "stopAtMark": "A String", # Optional. The marked transaction where the import should stop. This field is equivalent to the STOPATMARK keyword and applies to Cloud SQL for SQL Server only.
      "striped": True or False, # Whether or not the backup set being restored is striped. Applies only to Cloud SQL for SQL Server.
    },
    "csvImportOptions": { # Options for importing data as CSV.
      "columns": [ # The columns to which CSV data is imported. If not specified, all columns of the database table are loaded with CSV data.
        "A String",
      ],
      "escapeCharacter": "A String", # Specifies the character that should appear before a data character that needs to be escaped.
      "fieldsTerminatedBy": "A String", # Specifies the character that separates columns within each row (line) of the file.
      "linesTerminatedBy": "A String", # This is used to separate lines. If a line does not contain all fields, the rest of the columns are set to their default values.
      "quoteCharacter": "A String", # Specifies the quoting character to be used when a data value is quoted.
      "table": "A String", # The table to which CSV data is imported.
    },
    "database": "A String", # The target database for the import. If `fileType` is `SQL`, this field is required only if the import file does not specify a database, and is overridden by any database specification in the import file. If `fileType` is `CSV`, one database must be specified.
    "fileType": "A String", # The file type for the specified uri. * `SQL`: The file contains SQL statements. * `CSV`: The file contains CSV data. * `BAK`: The file contains backup data for a SQL Server instance.
    "importUser": "A String", # The PostgreSQL user for this import operation. PostgreSQL instances only.
    "kind": "A String", # This is always `sql#importContext`.
    "sqlImportOptions": { # Optional. Options for importing data from SQL statements.
      "parallel": True or False, # Optional. Whether or not the import should be parallel.
      "postgresImportOptions": { # Optional. Options for importing from a Cloud SQL for PostgreSQL instance.
        "clean": True or False, # Optional. The --clean flag for the pg_restore utility. This flag applies only if you enabled Cloud SQL to import files in parallel.
        "ifExists": True or False, # Optional. The --if-exists flag for the pg_restore utility. This flag applies only if you enabled Cloud SQL to import files in parallel.
      },
      "threads": 42, # Optional. The number of threads to use for parallel import.
    },
    "uri": "A String", # Path to the import file in Cloud Storage, in the form `gs://bucketName/fileName`. Compressed gzip files (.gz) are supported when `fileType` is `SQL`. The instance must have write permissions to the bucket and read access to the file.
  },
  "insertTime": "A String", # The time this operation was enqueued in UTC timezone in [RFC 3339](https://tools.ietf.org/html/rfc3339) format, for example `2012-11-15T16:19:00.094Z`.
  "kind": "A String", # This is always `sql#operation`.
  "name": "A String", # An identifier that uniquely identifies the operation. You can use this identifier to retrieve the Operations resource that has information about the operation.
  "operationType": "A String", # The type of the operation. Valid values are: * `CREATE` * `DELETE` * `UPDATE` * `RESTART` * `IMPORT` * `EXPORT` * `BACKUP_VOLUME` * `RESTORE_VOLUME` * `CREATE_USER` * `DELETE_USER` * `CREATE_DATABASE` * `DELETE_DATABASE`
  "selfLink": "A String", # The URI of this resource.
  "startTime": "A String", # The time this operation actually started in UTC timezone in [RFC 3339](https://tools.ietf.org/html/rfc3339) format, for example `2012-11-15T16:19:00.094Z`.
  "status": "A String", # The status of an operation.
  "targetId": "A String", # Name of the database instance related to this operation.
  "targetLink": "A String",
  "targetProject": "A String", # The project ID of the target instance related to this operation.
  "user": "A String", # The email address of the user who initiated this operation.
}
get(project, instance, x__xgafv=None)
Retrieves a resource containing information about a Cloud SQL instance.

Args:
  project: string, Project ID of the project that contains the instance. (required)
  instance: string, Database instance ID. This does not include the project ID. (required)
  x__xgafv: string, V1 error format.
    Allowed values
      1 - v1 error format
      2 - v2 error format

Returns:
  An object of the form:

    { # A Cloud SQL instance resource.
  "availableMaintenanceVersions": [ # Output only. A list of all maintenance versions applicable to the instance.
    "A String",
  ],
  "backendType": "A String", # The backend type. `SECOND_GEN`: Cloud SQL database instance. `EXTERNAL`: A database server that is not managed by Google. This property is read-only; use the `tier` property in the `settings` object to determine the database type.
  "connectionName": "A String", # Connection name of the Cloud SQL instance used in connection strings.
  "createTime": "A String", # Output only. The time when the instance was created in [RFC 3339](https://tools.ietf.org/html/rfc3339) format, for example `2012-11-15T16:19:00.094Z`.
  "currentDiskSize": "A String", # The current disk usage of the instance in bytes. This property has been deprecated. Use the "cloudsql.googleapis.com/database/disk/bytes_used" metric in Cloud Monitoring API instead. Please see [this announcement](https://groups.google.com/d/msg/google-cloud-sql-announce/I_7-F9EBhT0/BtvFtdFeAgAJ) for details.
  "databaseInstalledVersion": "A String", # Output only. Stores the current database version running on the instance including minor version such as `MYSQL_8_0_18`.
  "databaseVersion": "A String", # The database engine type and version. The `databaseVersion` field cannot be changed after instance creation.
  "diskEncryptionConfiguration": { # Disk encryption configuration for an instance. # Disk encryption configuration specific to an instance.
    "kind": "A String", # This is always `sql#diskEncryptionConfiguration`.
    "kmsKeyName": "A String", # Resource name of KMS key for disk encryption
  },
  "diskEncryptionStatus": { # Disk encryption status for an instance. # Disk encryption status specific to an instance.
    "kind": "A String", # This is always `sql#diskEncryptionStatus`.
    "kmsKeyVersionName": "A String", # KMS key version used to encrypt the Cloud SQL instance resource
  },
  "dnsName": "A String", # Output only. The DNS name of the instance.
  "etag": "A String", # This field is deprecated and will be removed from a future version of the API. Use the `settings.settingsVersion` field instead.
  "failoverReplica": { # The name and status of the failover replica.
    "available": True or False, # The availability status of the failover replica. A false status indicates that the failover replica is out of sync. The primary instance can only failover to the failover replica when the status is true.
    "name": "A String", # The name of the failover replica. If specified at instance creation, a failover replica is created for the instance. The name doesn't include the project ID.
  },
  "gceZone": "A String", # The Compute Engine zone that the instance is currently serving from. This value could be different from the zone that was specified when the instance was created if the instance has failed over to its secondary zone. WARNING: Changing this might restart the instance.
  "geminiConfig": { # Gemini instance configuration. # Gemini instance configuration.
    "activeQueryEnabled": True or False, # Output only. Whether the active query is enabled.
    "entitled": True or False, # Output only. Whether Gemini is enabled.
    "flagRecommenderEnabled": True or False, # Output only. Whether the flag recommender is enabled.
    "googleVacuumMgmtEnabled": True or False, # Output only. Whether the vacuum management is enabled.
    "indexAdvisorEnabled": True or False, # Output only. Whether the index advisor is enabled.
    "oomSessionCancelEnabled": True or False, # Output only. Whether canceling the out-of-memory (OOM) session is enabled.
  },
  "instanceType": "A String", # The instance type.
  "ipAddresses": [ # The assigned IP addresses for the instance.
    { # Database instance IP mapping
      "ipAddress": "A String", # The IP address assigned.
      "timeToRetire": "A String", # The due time for this IP to be retired in [RFC 3339](https://tools.ietf.org/html/rfc3339) format, for example `2012-11-15T16:19:00.094Z`. This field is only available when the IP is scheduled to be retired.
      "type": "A String", # The type of this IP address. A `PRIMARY` address is a public address that can accept incoming connections. A `PRIVATE` address is a private address that can accept incoming connections. An `OUTGOING` address is the source address of connections originating from the instance, if supported.
    },
  ],
  "ipv6Address": "A String", # The IPv6 address assigned to the instance. (Deprecated) This property was applicable only to First Generation instances.
  "kind": "A String", # This is always `sql#instance`.
  "maintenanceVersion": "A String", # The current software version on the instance.
  "masterInstanceName": "A String", # The name of the instance which will act as primary in the replication setup.
  "maxDiskSize": "A String", # The maximum disk size of the instance in bytes.
  "name": "A String", # Name of the Cloud SQL instance. This does not include the project ID.
  "onPremisesConfiguration": { # On-premises instance configuration. # Configuration specific to on-premises instances.
    "caCertificate": "A String", # PEM representation of the trusted CA's x509 certificate.
    "clientCertificate": "A String", # PEM representation of the replica's x509 certificate.
    "clientKey": "A String", # PEM representation of the replica's private key. The corresponding public key is encoded in the client's certificate.
    "dumpFilePath": "A String", # The dump file to create the Cloud SQL replica.
    "hostPort": "A String", # The host and port of the on-premises instance in host:port format
    "kind": "A String", # This is always `sql#onPremisesConfiguration`.
    "password": "A String", # The password for connecting to on-premises instance.
    "selectedObjects": [ # Optional. A list of objects that the user selects for replication from an external source instance.
      { # A list of objects that the user selects for replication from an external source instance.
        "database": "A String", # Required. The name of the database to migrate.
      },
    ],
    "sourceInstance": { # Reference to another Cloud SQL instance. # The reference to Cloud SQL instance if the source is Cloud SQL.
      "name": "A String", # The name of the Cloud SQL instance being referenced. This does not include the project ID.
      "project": "A String", # The project ID of the Cloud SQL instance being referenced. By default, this is the same project ID as the instance that references it.
      "region": "A String", # The region of the Cloud SQL instance being referenced.
    },
    "sslOption": "A String", # Optional. SslOption for replica connection to the on-premises source.
    "username": "A String", # The username for connecting to on-premises instance.
  },
  "outOfDiskReport": { # This message wraps up the information written by the out-of-disk detection job. # The report generated by the proactive database wellness job for out-of-disk (OOD) issues. This report is written and read by the proactive database wellness job.
    "sqlMinRecommendedIncreaseSizeGb": 42, # The minimum recommended disk size increase, in gigabytes. This field is consumed by the frontend.
    "sqlOutOfDiskState": "A String", # The out-of-disk state generated by the proactive database wellness job.
  },
  "primaryDnsName": "A String", # Output only. DEPRECATED: please use write_endpoint instead.
  "project": "A String", # The project ID of the project containing the Cloud SQL instance. The Google apps domain is prefixed if applicable.
  "pscServiceAttachmentLink": "A String", # Output only. The link to service attachment of PSC instance.
  "region": "A String", # The geographical region of the Cloud SQL instance. It can be one of the [regions](https://cloud.google.com/sql/docs/mysql/locations#location-r) where Cloud SQL operates: For example, `asia-east1`, `europe-west1`, and `us-central1`. The default value is `us-central1`.
  "replicaConfiguration": { # Read-replica configuration for connecting to the primary instance. # Configuration specific to failover replicas and read replicas.
    "cascadableReplica": True or False, # Optional. Specifies if a SQL Server replica is a cascadable replica. A cascadable replica is a SQL Server cross region replica that supports replica(s) under it.
    "failoverTarget": True or False, # Specifies if the replica is the failover target. If the field is set to `true` the replica will be designated as a failover replica. In case the primary instance fails, the replica instance will be promoted as the new primary instance. Only one replica can be specified as failover target, and the replica has to be in different zone with the primary instance.
    "kind": "A String", # This is always `sql#replicaConfiguration`.
    "mysqlReplicaConfiguration": { # Read-replica configuration specific to MySQL databases. # MySQL specific configuration when replicating from a MySQL on-premises primary instance. Replication configuration information such as the username, password, certificates, and keys are not stored in the instance metadata. The configuration information is used only to set up the replication connection and is stored by MySQL in a file named `master.info` in the data directory.
      "caCertificate": "A String", # PEM representation of the trusted CA's x509 certificate.
      "clientCertificate": "A String", # PEM representation of the replica's x509 certificate.
      "clientKey": "A String", # PEM representation of the replica's private key. The corresponding public key is encoded in the client's certificate.
      "connectRetryInterval": 42, # Seconds to wait between connect retries. MySQL's default is 60 seconds.
      "dumpFilePath": "A String", # Path to a SQL dump file in Google Cloud Storage from which the replica instance is to be created. The URI is in the form gs://bucketName/fileName. Compressed gzip files (.gz) are also supported. Dumps have the binlog coordinates from which replication begins. This can be accomplished by setting --master-data to 1 when using mysqldump.
      "kind": "A String", # This is always `sql#mysqlReplicaConfiguration`.
      "masterHeartbeatPeriod": "A String", # Interval in milliseconds between replication heartbeats.
      "password": "A String", # The password for the replication connection.
      "sslCipher": "A String", # A list of permissible ciphers to use for SSL encryption.
      "username": "A String", # The username for the replication connection.
      "verifyServerCertificate": True or False, # Whether or not to check the primary instance's Common Name value in the certificate that it sends during the SSL handshake.
    },
  },
  "replicaNames": [ # The replicas of the instance.
    "A String",
  ],
  "replicationCluster": { # A primary instance and disaster recovery (DR) replica pair. A DR replica is a cross-region replica that you designate for failover in the event that the primary instance has regional failure. Only applicable to MySQL. # A primary instance and disaster recovery (DR) replica pair. A DR replica is a cross-region replica that you designate for failover in the event that the primary instance experiences regional failure. Only applicable to MySQL.
    "drReplica": True or False, # Output only. Read-only field that indicates whether the replica is a DR replica. This field is not set if the instance is a primary instance.
    "failoverDrReplicaName": "A String", # Optional. If the instance is a primary instance, then this field identifies the disaster recovery (DR) replica. A DR replica is an optional configuration for Enterprise Plus edition instances. If the instance is a read replica, then the field is not set. Set this field to a replica name to designate a DR replica for a primary instance. Remove the replica name to remove the DR replica designation.
    "psaWriteEndpoint": "A String", # Output only. If set, this instance has a private service access (PSA) DNS endpoint that points to the primary instance of the cluster. If this instance is the primary, the DNS endpoint points to this instance. After a switchover or replica failover, this DNS endpoint points to the promoted instance. This is a read-only field, returned to the user as information. This field can exist even if a standalone instance does not yet have a replica, or had a DR replica that was deleted.
  },
  "rootPassword": "A String", # Initial root password. Use only on creation. You must set root passwords before you can connect to PostgreSQL instances.
  "satisfiesPzi": True or False, # Output only. This status indicates whether the instance satisfies PZI. The status is reserved for future use.
  "satisfiesPzs": True or False, # This status indicates whether the instance satisfies PZS. The status is reserved for future use.
  "scheduledMaintenance": { # Any scheduled maintenance for this instance. # The start time of any upcoming scheduled maintenance for this instance.
    "canDefer": True or False, # If the scheduled maintenance can be deferred.
    "canReschedule": True or False, # If the scheduled maintenance can be rescheduled.
    "scheduleDeadlineTime": "A String", # Maintenance cannot be rescheduled to start beyond this deadline.
    "startTime": "A String", # The start time of any upcoming scheduled maintenance for this instance.
  },
  "secondaryGceZone": "A String", # The Compute Engine zone that the failover instance is currently serving from for a regional instance. This value could be different from the zone that was specified when the instance was created if the instance has failed over to its secondary/failover zone.
  "selfLink": "A String", # The URI of this resource.
  "serverCaCert": { # SslCerts Resource # SSL configuration.
    "cert": "A String", # PEM representation.
    "certSerialNumber": "A String", # Serial number, as extracted from the certificate.
    "commonName": "A String", # User supplied name. Constrained to [a-zA-Z.-_ ]+.
    "createTime": "A String", # The time when the certificate was created in [RFC 3339](https://tools.ietf.org/html/rfc3339) format, for example `2012-11-15T16:19:00.094Z`.
    "expirationTime": "A String", # The time when the certificate expires in [RFC 3339](https://tools.ietf.org/html/rfc3339) format, for example `2012-11-15T16:19:00.094Z`.
    "instance": "A String", # Name of the database instance.
    "kind": "A String", # This is always `sql#sslCert`.
    "selfLink": "A String", # The URI of this resource.
    "sha1Fingerprint": "A String", # Sha1 Fingerprint.
  },
  "serviceAccountEmailAddress": "A String", # The service account email address assigned to the instance. This property is read-only.
  "settings": { # Database instance settings. # The user settings.
    "activationPolicy": "A String", # The activation policy specifies when the instance is activated; it is applicable only when the instance state is RUNNABLE. Valid values: * `ALWAYS`: The instance is on, and remains so even in the absence of connection requests. * `NEVER`: The instance is off; it is not activated, even if a connection request arrives.
    "activeDirectoryConfig": { # Active Directory configuration, relevant only for Cloud SQL for SQL Server. # Active Directory configuration, relevant only for Cloud SQL for SQL Server.
      "domain": "A String", # The name of the domain (e.g., mydomain.com).
      "kind": "A String", # This is always sql#activeDirectoryConfig.
    },
    "advancedMachineFeatures": { # Specifies options for controlling advanced machine features. # Specifies advanced machine configuration for the instances relevant only for SQL Server.
      "threadsPerCore": 42, # The number of threads per physical core.
    },
    "authorizedGaeApplications": [ # The App Engine app IDs that can access this instance. (Deprecated) Applied to First Generation instances only.
      "A String",
    ],
    "availabilityType": "A String", # Availability type. Potential values: * `ZONAL`: The instance serves data from only one zone. Outages in that zone affect data accessibility. * `REGIONAL`: The instance can serve data from more than one zone in a region (it is highly available). For more information, see [Overview of the High Availability Configuration](https://cloud.google.com/sql/docs/mysql/high-availability).
    "backupConfiguration": { # Database instance backup configuration. # The daily backup configuration for the instance.
      "backupRetentionSettings": { # We currently only support backup retention by specifying the number of backups we will retain. # Backup retention settings.
        "retainedBackups": 42, # Depending on the value of retention_unit, this is used to determine if a backup needs to be deleted. If retention_unit is 'COUNT', we will retain this many backups.
        "retentionUnit": "A String", # The unit that 'retained_backups' represents.
      },
      "binaryLogEnabled": True or False, # (MySQL only) Whether binary log is enabled. If backup configuration is disabled, binarylog must be disabled as well.
      "enabled": True or False, # Whether this configuration is enabled.
      "kind": "A String", # This is always `sql#backupConfiguration`.
      "location": "A String", # Location of the backup
      "pointInTimeRecoveryEnabled": True or False, # Whether point in time recovery is enabled.
      "replicationLogArchivingEnabled": True or False, # Reserved for future use.
      "startTime": "A String", # Start time for the daily backup configuration in UTC timezone in the 24 hour format - `HH:MM`.
      "transactionLogRetentionDays": 42, # The number of days of transaction logs we retain for point in time restore, from 1-7.
      "transactionalLogStorageState": "A String", # Output only. This value contains the storage location of transactional logs for the database for point-in-time recovery.
    },
    "collation": "A String", # The name of server Instance collation.
    "connectorEnforcement": "A String", # Specifies if connections must use Cloud SQL connectors. Option values include the following: `NOT_REQUIRED` (Cloud SQL instances can be connected without Cloud SQL Connectors) and `REQUIRED` (Only allow connections that use Cloud SQL Connectors) Note that using REQUIRED disables all existing authorized networks. If this field is not specified when creating a new instance, NOT_REQUIRED is used. If this field is not specified when patching or updating an existing instance, it is left unchanged in the instance.
    "crashSafeReplicationEnabled": True or False, # Configuration specific to read replica instances. Indicates whether database flags for crash-safe replication are enabled. This property was only applicable to First Generation instances.
    "dataCacheConfig": { # Data cache configurations. # Configuration for data cache.
      "dataCacheEnabled": True or False, # Whether data cache is enabled for the instance.
    },
    "dataDiskSizeGb": "A String", # The size of data disk, in GB. The data disk size minimum is 10GB.
    "dataDiskType": "A String", # The type of data disk: `PD_SSD` (default) or `PD_HDD`. Not used for First Generation instances.
    "databaseFlags": [ # The database flags passed to the instance at startup.
      { # Database flags for Cloud SQL instances.
        "name": "A String", # The name of the flag. These flags are passed at instance startup, so include both server options and system variables. Flags are specified with underscores, not hyphens. For more information, see [Configuring Database Flags](https://cloud.google.com/sql/docs/mysql/flags) in the Cloud SQL documentation.
        "value": "A String", # The value of the flag. Boolean flags are set to `on` for true and `off` for false. This field must be omitted if the flag doesn't take a value.
      },
    ],
    "databaseReplicationEnabled": True or False, # Configuration specific to read replica instances. Indicates whether replication is enabled or not. WARNING: Changing this restarts the instance.
    "deletionProtectionEnabled": True or False, # Configuration to protect against accidental instance deletion.
    "denyMaintenancePeriods": [ # Deny maintenance periods
      { # Deny Maintenance Periods. This specifies a date range during which all CSA rollouts are denied.
        "endDate": "A String", # "deny maintenance period" end date. If the year of the end date is empty, the year of the start date also must be empty. In this case, it means the deny maintenance period recurs every year. The date is in the format yyyy-mm-dd (for example, 2020-11-01) or mm-dd (for example, 11-01).
        "startDate": "A String", # "deny maintenance period" start date. If the year of the start date is empty, the year of the end date also must be empty. In this case, it means the deny maintenance period recurs every year. The date is in the format yyyy-mm-dd (for example, 2020-11-01) or mm-dd (for example, 11-01).
        "time": "A String", # Time in UTC when the "deny maintenance period" starts on start_date and ends on end_date. The time is in the format HH:mm:SS (for example, 00:00:00).
      },
    ],
    "edition": "A String", # Optional. The edition of the instance.
    "enableDataplexIntegration": True or False, # Optional. By default, Cloud SQL instances have schema extraction disabled for Dataplex. When this parameter is set to true, schema extraction for Dataplex on Cloud SQL instances is activated.
    "enableGoogleMlIntegration": True or False, # Optional. When this parameter is set to true, Cloud SQL instances can connect to Vertex AI to pass requests for real-time predictions and insights to the AI. The default value is false. This applies only to Cloud SQL for PostgreSQL instances.
    "insightsConfig": { # Insights configuration. This specifies whether the Cloud SQL Insights feature is enabled, plus optional configuration. # Insights configuration, for now relevant only for Postgres.
      "queryInsightsEnabled": True or False, # Whether Query Insights feature is enabled.
      "queryPlansPerMinute": 42, # Number of query execution plans captured by Insights per minute for all queries combined. Default is 5.
      "queryStringLength": 42, # Maximum query length stored in bytes. Default value: 1024 bytes. Range: 256-4500 bytes. Queries longer than this value are truncated to this length. When unset, the default value is used. Changing the query length restarts the database.
      "recordApplicationTags": True or False, # Whether Query Insights will record application tags from query when enabled.
      "recordClientAddress": True or False, # Whether Query Insights will record client address when enabled.
    },
    "ipConfiguration": { # IP Management configuration. # The settings for IP Management. This allows you to enable or disable the instance IP address and manage which external networks can connect to the instance. The IPv4 address cannot be disabled for Second Generation instances.
      "allocatedIpRange": "A String", # The name of the allocated IP range for the private IP Cloud SQL instance. For example: "google-managed-services-default". If set, the instance IP will be created in the allocated range. The range name must comply with [RFC 1035](https://tools.ietf.org/html/rfc1035). Specifically, the name must be 1-63 characters long and match the regular expression `[a-z]([-a-z0-9]*[a-z0-9])?`.
      "authorizedNetworks": [ # The list of external networks that are allowed to connect to the instance using the IP. In 'CIDR' notation, also known as 'slash' notation (for example: `157.197.200.0/24`).
        { # An entry for an Access Control list.
          "expirationTime": "A String", # The time when this access control entry expires in [RFC 3339](https://tools.ietf.org/html/rfc3339) format, for example `2012-11-15T16:19:00.094Z`.
          "kind": "A String", # This is always `sql#aclEntry`.
          "name": "A String", # Optional. A label to identify this entry.
          "value": "A String", # The allowlisted value for the access control list.
        },
      ],
      "enablePrivatePathForGoogleCloudServices": True or False, # Controls connectivity to private IP instances from Google services, such as BigQuery.
      "ipv4Enabled": True or False, # Whether the instance is assigned a public IP address or not.
      "privateNetwork": "A String", # The resource link for the VPC network from which the Cloud SQL instance is accessible for private IP. For example, `/projects/myProject/global/networks/default`. This setting can be updated, but it cannot be removed after it is set.
      "pscConfig": { # PSC settings for a Cloud SQL instance. # PSC settings for this instance.
        "allowedConsumerProjects": [ # Optional. The list of consumer projects that are allow-listed for PSC connections to this instance. This instance can be connected to with PSC from any network in these projects. Each consumer project in this list may be represented by a project number (numeric) or by a project id (alphanumeric).
          "A String",
        ],
        "pscAutoConnections": [ # Optional. The list of settings for requested Private Service Connect consumer endpoints that can be used to connect to this Cloud SQL instance.
          { # Settings for an automatically-setup Private Service Connect consumer endpoint that is used to connect to a Cloud SQL instance.
            "consumerNetwork": "A String", # The consumer network of this consumer endpoint. This must be a resource path that includes both the host project and the network name. For example, `projects/project1/global/networks/network1`. The consumer host project of this network might be different from the consumer service project.
            "consumerNetworkStatus": "A String", # The connection policy status of the consumer network.
            "consumerProject": "A String", # Optional. The project ID of the consumer service project of this consumer endpoint. Only applicable if consumer_network is a shared VPC network.
            "ipAddress": "A String", # The IP address of the consumer endpoint.
            "status": "A String", # The connection status of the consumer endpoint.
          },
        ],
        "pscEnabled": True or False, # Whether PSC connectivity is enabled for this instance.
      },
      "requireSsl": True or False, # Use `ssl_mode` instead. Whether SSL/TLS connections over IP are enforced. If set to false, then allow both non-SSL/non-TLS and SSL/TLS connections. For SSL/TLS connections, the client certificate won't be verified. If set to true, then only allow connections encrypted with SSL/TLS and with valid client certificates. If you want to enforce SSL/TLS without enforcing the requirement for valid client certificates, then use the `ssl_mode` flag instead of the legacy `require_ssl` flag.
      "serverCaMode": "A String", # Specify what type of CA is used for the server certificate.
      "sslMode": "A String", # Specify how SSL/TLS is enforced in database connections. If you must use the `require_ssl` flag for backward compatibility, then only the following value pairs are valid: For PostgreSQL and MySQL: * `ssl_mode=ALLOW_UNENCRYPTED_AND_ENCRYPTED` and `require_ssl=false` * `ssl_mode=ENCRYPTED_ONLY` and `require_ssl=false` * `ssl_mode=TRUSTED_CLIENT_CERTIFICATE_REQUIRED` and `require_ssl=true` For SQL Server: * `ssl_mode=ALLOW_UNENCRYPTED_AND_ENCRYPTED` and `require_ssl=false` * `ssl_mode=ENCRYPTED_ONLY` and `require_ssl=true` The value of `ssl_mode` has priority over the value of `require_ssl`. For example, for the pair `ssl_mode=ENCRYPTED_ONLY` and `require_ssl=false`, `ssl_mode=ENCRYPTED_ONLY` means accept only SSL connections, while `require_ssl=false` means accept both non-SSL and SSL connections. In this case, MySQL and PostgreSQL databases respect `ssl_mode` and accept only SSL connections.
    },
    "kind": "A String", # This is always `sql#settings`.
    "locationPreference": { # Preferred location. This specifies where a Cloud SQL instance is located. Note that if the preferred location is not available, the instance will be located as close as possible within the region. Only one location may be specified. # The location preference settings. This allows the instance to be located as near as possible to either an App Engine app or Compute Engine zone for better performance. App Engine co-location was only applicable to First Generation instances.
      "followGaeApplication": "A String", # The App Engine application to follow; it must be in the same region as the Cloud SQL instance. WARNING: Changing this might restart the instance.
      "kind": "A String", # This is always `sql#locationPreference`.
      "secondaryZone": "A String", # The preferred Compute Engine zone for the secondary/failover (for example: us-central1-a, us-central1-b, etc.). To disable this field, set it to 'no_secondary_zone'.
      "zone": "A String", # The preferred Compute Engine zone (for example: us-central1-a, us-central1-b, etc.). WARNING: Changing this might restart the instance.
    },
    "maintenanceWindow": { # Maintenance window. This specifies when a Cloud SQL instance is restarted for system maintenance purposes. # The maintenance window for this instance. This specifies when the instance can be restarted for maintenance purposes.
      "day": 42, # Day of week - `MONDAY`, `TUESDAY`, `WEDNESDAY`, `THURSDAY`, `FRIDAY`, `SATURDAY`, or `SUNDAY`. Specify in the UTC time zone. Returned in output as an integer, 1 to 7, where `1` equals Monday.
      "hour": 42, # Hour of day - 0 to 23. Specify in the UTC time zone.
      "kind": "A String", # This is always `sql#maintenanceWindow`.
      "updateTrack": "A String", # Maintenance timing settings: `canary`, `stable`, or `week5`. For more information, see [About maintenance on Cloud SQL instances](https://cloud.google.com/sql/docs/mysql/maintenance).
    },
    "passwordValidationPolicy": { # Database instance local user password validation policy # The local user password validation policy of the instance.
      "complexity": "A String", # The complexity of the password.
      "disallowCompromisedCredentials": True or False, # This field is deprecated and will be removed in a future version of the API.
      "disallowUsernameSubstring": True or False, # Disallow username as a part of the password.
      "enablePasswordPolicy": True or False, # Whether the password policy is enabled or not.
      "minLength": 42, # Minimum number of characters allowed.
      "passwordChangeInterval": "A String", # Minimum interval after which the password can be changed. This flag is only supported for PostgreSQL.
      "reuseInterval": 42, # Number of previous passwords that cannot be reused.
    },
    "pricingPlan": "A String", # The pricing plan for this instance. This can be either `PER_USE` or `PACKAGE`. Only `PER_USE` is supported for Second Generation instances.
    "replicationType": "A String", # The type of replication this instance uses. This can be either `ASYNCHRONOUS` or `SYNCHRONOUS`. (Deprecated) This property was only applicable to First Generation instances.
    "settingsVersion": "A String", # The version of instance settings. This is a required field for the update method to ensure that concurrent updates are handled properly. During an update, use the most recent settingsVersion value for this instance and do not try to update this value.
    "sqlServerAuditConfig": { # SQL Server specific audit configuration. # SQL Server specific audit configuration.
      "bucket": "A String", # The name of the destination bucket (e.g., gs://mybucket).
      "kind": "A String", # This is always `sql#sqlServerAuditConfig`.
      "retentionInterval": "A String", # How long to keep generated audit files.
      "uploadInterval": "A String", # How often to upload generated audit files.
    },
    "storageAutoResize": True or False, # Configuration to increase storage size automatically. The default value is true.
    "storageAutoResizeLimit": "A String", # The maximum size to which storage capacity can be automatically increased. The default value is 0, which specifies that there is no limit.
    "tier": "A String", # The tier (or machine type) for this instance, for example `db-custom-1-3840`. WARNING: Changing this restarts the instance.
    "timeZone": "A String", # Server timezone, relevant only for Cloud SQL for SQL Server.
    "userLabels": { # User-provided labels, represented as a dictionary where each label is a single key value pair.
      "a_key": "A String",
    },
  },
  "sqlNetworkArchitecture": "A String", # The SQL network architecture for the instance.
  "state": "A String", # The current serving state of the Cloud SQL instance.
  "suspensionReason": [ # If the instance state is SUSPENDED, the reason for the suspension.
    "A String",
  ],
  "switchTransactionLogsToCloudStorageEnabled": True or False, # Input only. Whether Cloud SQL is enabled to switch storing point-in-time recovery log files from a data disk to Cloud Storage.
  "upgradableDatabaseVersions": [ # Output only. All database versions that are available for upgrade.
    { # An available database version. It can be a major or a minor version.
      "displayName": "A String", # The database version's display name.
      "majorVersion": "A String", # The version's major version name.
      "name": "A String", # The database version name. For MySQL 8.0, this string provides the database major and minor version.
    },
  ],
  "writeEndpoint": "A String", # Output only. The DNS name of the primary instance in a replication group.
}
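The `ssl_mode`/`require_ssl` pairing rules described in `ipConfiguration` above can be checked with a small helper. This is an illustrative sketch only; `is_valid_ssl_pair` and the engine keys are assumptions for this example, not part of the client library:

```python
# Hypothetical helper: encodes the ssl_mode / require_ssl pairs that the
# ipConfiguration documentation lists as valid for backward compatibility.
_VALID_PAIRS = {
    "MYSQL": {
        ("ALLOW_UNENCRYPTED_AND_ENCRYPTED", False),
        ("ENCRYPTED_ONLY", False),
        ("TRUSTED_CLIENT_CERTIFICATE_REQUIRED", True),
    },
    "SQLSERVER": {
        ("ALLOW_UNENCRYPTED_AND_ENCRYPTED", False),
        ("ENCRYPTED_ONLY", True),
    },
}
# PostgreSQL shares the MySQL pairs.
_VALID_PAIRS["POSTGRES"] = _VALID_PAIRS["MYSQL"]

def is_valid_ssl_pair(engine: str, ssl_mode: str, require_ssl: bool) -> bool:
    """Returns True if (ssl_mode, require_ssl) is a documented valid pair."""
    return (ssl_mode, require_ssl) in _VALID_PAIRS[engine]
```

Remember that `ssl_mode` always takes priority over the legacy `require_ssl` flag, so new configurations should set `ssl_mode` alone.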
import_(project, instance, body=None, x__xgafv=None)
Imports data into a Cloud SQL instance from a SQL dump or CSV file in Cloud Storage.

Args:
  project: string, Project ID of the project that contains the instance. (required)
  instance: string, Cloud SQL instance ID. This does not include the project ID. (required)
  body: object, The request body.
    The object takes the form of:

{ # Database instance import request.
  "importContext": { # Database instance import context. # Contains details about the import operation.
    "bakImportOptions": { # Import parameters specific to SQL Server .BAK files
      "bakType": "A String", # Type of the bak content, FULL or DIFF.
      "encryptionOptions": {
        "certPath": "A String", # Path to the Certificate (.cer) in Cloud Storage, in the form `gs://bucketName/fileName`. The instance must have write permissions to the bucket and read access to the file.
        "pvkPassword": "A String", # Password that encrypts the private key.
        "pvkPath": "A String", # Path to the Certificate Private Key (.pvk) in Cloud Storage, in the form `gs://bucketName/fileName`. The instance must have write permissions to the bucket and read access to the file.
      },
      "noRecovery": True or False, # Whether or not the backup import restores the database with the NORECOVERY option. Applies only to Cloud SQL for SQL Server.
      "recoveryOnly": True or False, # Whether or not the import request only brings the database online without downloading BAK content. Only one of "no_recovery" and "recovery_only" can be true; otherwise, an error is returned. Applies only to Cloud SQL for SQL Server.
      "stopAt": "A String", # Optional. The timestamp when the import should stop. This timestamp is in the [RFC 3339](https://tools.ietf.org/html/rfc3339) format (for example, `2023-10-01T16:19:00.094`). This field is equivalent to the STOPAT keyword and applies to Cloud SQL for SQL Server only.
      "stopAtMark": "A String", # Optional. The marked transaction where the import should stop. This field is equivalent to the STOPATMARK keyword and applies to Cloud SQL for SQL Server only.
      "striped": True or False, # Whether or not the backup set being restored is striped. Applies only to Cloud SQL for SQL Server.
    },
    "csvImportOptions": { # Options for importing data as CSV.
      "columns": [ # The columns to which CSV data is imported. If not specified, all columns of the database table are loaded with CSV data.
        "A String",
      ],
      "escapeCharacter": "A String", # Specifies the character that should appear before a data character that needs to be escaped.
      "fieldsTerminatedBy": "A String", # Specifies the character that separates columns within each row (line) of the file.
      "linesTerminatedBy": "A String", # This is used to separate lines. If a line does not contain all fields, the rest of the columns are set to their default values.
      "quoteCharacter": "A String", # Specifies the quoting character to be used when a data value is quoted.
      "table": "A String", # The table to which CSV data is imported.
    },
    "database": "A String", # The target database for the import. If `fileType` is `SQL`, this field is required only if the import file does not specify a database, and is overridden by any database specification in the import file. If `fileType` is `CSV`, one database must be specified.
    "fileType": "A String", # The file type for the specified uri. * `SQL`: The file contains SQL statements. * `CSV`: The file contains CSV data. * `BAK`: The file contains backup data for a SQL Server instance.
    "importUser": "A String", # The PostgreSQL user for this import operation. PostgreSQL instances only.
    "kind": "A String", # This is always `sql#importContext`.
    "sqlImportOptions": { # Optional. Options for importing data from SQL statements.
      "parallel": True or False, # Optional. Whether or not the import should be parallel.
      "postgresImportOptions": { # Optional. Options for importing from a Cloud SQL for PostgreSQL instance.
        "clean": True or False, # Optional. The --clean flag for the pg_restore utility. This flag applies only if you enabled Cloud SQL to import files in parallel.
        "ifExists": True or False, # Optional. The --if-exists flag for the pg_restore utility. This flag applies only if you enabled Cloud SQL to import files in parallel.
      },
      "threads": 42, # Optional. The number of threads to use for parallel import.
    },
    "uri": "A String", # Path to the import file in Cloud Storage, in the form `gs://bucketName/fileName`. Compressed gzip files (.gz) are supported when `fileType` is `SQL`. The instance must have write permissions to the bucket and read access to the file.
  },
}

  x__xgafv: string, V1 error format.
    Allowed values
      1 - v1 error format
      2 - v2 error format

Returns:
  An object of the form:

    { # An Operation resource. For successful operations that return an Operation resource, only the fields relevant to the operation are populated in the resource.
  "acquireSsrsLeaseContext": { # Acquire SSRS lease context. # The context for acquire SSRS lease operation, if applicable.
    "duration": "A String", # Lease duration needed for the SSRS setup.
    "reportDatabase": "A String", # The report database to be used for the SSRS setup.
    "serviceLogin": "A String", # The username to be used as the service login to connect to the report database for SSRS setup.
    "setupLogin": "A String", # The username to be used as the setup login to connect to the database server for SSRS setup.
  },
  "apiWarning": { # An Admin API warning message. # An Admin API warning message.
    "code": "A String", # Code to uniquely identify the warning type.
    "message": "A String", # The warning message.
    "region": "A String", # The region name for REGION_UNREACHABLE warning.
  },
  "backupContext": { # Backup context. # The context for backup operation, if applicable.
    "backupId": "A String", # The identifier of the backup.
    "kind": "A String", # This is always `sql#backupContext`.
  },
  "endTime": "A String", # The time this operation finished in UTC timezone in [RFC 3339](https://tools.ietf.org/html/rfc3339) format, for example `2012-11-15T16:19:00.094Z`.
  "error": { # Database instance operation errors list wrapper. # If errors occurred during processing of this operation, this field will be populated.
    "errors": [ # The list of errors encountered while processing this operation.
      { # Database instance operation error.
        "code": "A String", # Identifies the specific error that occurred.
        "kind": "A String", # This is always `sql#operationError`.
        "message": "A String", # Additional information about the error encountered.
      },
    ],
    "kind": "A String", # This is always `sql#operationErrors`.
  },
  "exportContext": { # Database instance export context. # The context for export operation, if applicable.
    "bakExportOptions": { # Options for exporting BAK files (SQL Server-only)
      "bakType": "A String", # The type of BAK file to be exported: FULL or DIFF. SQL Server only.
      "copyOnly": True or False, # Deprecated: copy_only is deprecated. Use differential_base instead.
      "differentialBase": True or False, # Whether or not the backup can be used as a differential base. A copy_only backup cannot serve as a differential base.
      "exportLogEndTime": "A String", # Optional. The end timestamp up to which the transaction log is included in the export operation, in [RFC 3339](https://tools.ietf.org/html/rfc3339) format (for example, `2023-10-01T16:19:00.094`) in UTC. When omitted, all available logs until the current time are included. Applies only to Cloud SQL for SQL Server.
      "exportLogStartTime": "A String", # Optional. The begin timestamp from which the transaction log is included in the export operation, in [RFC 3339](https://tools.ietf.org/html/rfc3339) format (for example, `2023-10-01T16:19:00.094`) in UTC. When omitted, all available logs from the beginning of the retention period are included. Applies only to Cloud SQL for SQL Server.
      "stripeCount": 42, # Option for specifying how many stripes to use for the export. If blank, and the value of the striped field is true, the number of stripes is automatically chosen.
      "striped": True or False, # Whether or not the export should be striped.
    },
    "csvExportOptions": { # Options for exporting data as CSV. `MySQL` and `PostgreSQL` instances only.
      "escapeCharacter": "A String", # Specifies the character that should appear before a data character that needs to be escaped.
      "fieldsTerminatedBy": "A String", # Specifies the character that separates columns within each row (line) of the file.
      "linesTerminatedBy": "A String", # This is used to separate lines. If a line does not contain all fields, the rest of the columns are set to their default values.
      "quoteCharacter": "A String", # Specifies the quoting character to be used when a data value is quoted.
      "selectQuery": "A String", # The select query used to extract the data.
    },
    "databases": [ # Databases to be exported. `MySQL instances:` If `fileType` is `SQL` and no database is specified, all databases are exported, except for the `mysql` system database. If `fileType` is `CSV`, you can specify one database, either by using this property or by using the `csvExportOptions.selectQuery` property, which takes precedence over this property. `PostgreSQL instances:` You must specify one database to be exported. If `fileType` is `CSV`, this database must match the one specified in the `csvExportOptions.selectQuery` property. `SQL Server instances:` You must specify one database to be exported, and the `fileType` must be `BAK`.
      "A String",
    ],
    "fileType": "A String", # The file type for the specified uri.
    "kind": "A String", # This is always `sql#exportContext`.
    "offload": True or False, # Option for export offload.
    "sqlExportOptions": { # Options for exporting data as SQL statements.
      "mysqlExportOptions": { # Options for exporting from MySQL.
        "masterData": 42, # Option to include SQL statement required to set up replication. If set to `1`, the dump file includes a CHANGE MASTER TO statement with the binary log coordinates, and --set-gtid-purged is set to ON. If set to `2`, the CHANGE MASTER TO statement is written as a SQL comment and has no effect. If set to any value other than `1`, --set-gtid-purged is set to OFF.
      },
      "parallel": True or False, # Optional. Whether or not the export should be parallel.
      "postgresExportOptions": { # Options for exporting from a Cloud SQL for PostgreSQL instance.
        "clean": True or False, # Optional. Use this option to include DROP SQL statements. These statements are used to delete database objects before running the import operation.
        "ifExists": True or False, # Optional. Option to include an IF EXISTS SQL statement with each DROP statement produced by clean.
      },
      "schemaOnly": True or False, # Export only schemas.
      "tables": [ # Tables to export, or that were exported, from the specified database. If you specify tables, specify one and only one database. For PostgreSQL instances, you can specify only one table.
        "A String",
      ],
      "threads": 42, # Optional. The number of threads to use for parallel export.
    },
    "uri": "A String", # The path to the file in Google Cloud Storage where the export will be stored. The URI is in the form `gs://bucketName/fileName`. If the file already exists, the request succeeds, but the operation fails. If `fileType` is `SQL` and the filename ends with .gz, the contents are compressed.
  },
  "importContext": { # Database instance import context. # The context for import operation, if applicable.
    "bakImportOptions": { # Import parameters specific to SQL Server .BAK files
      "bakType": "A String", # Type of the bak content, FULL or DIFF.
      "encryptionOptions": {
        "certPath": "A String", # Path to the Certificate (.cer) in Cloud Storage, in the form `gs://bucketName/fileName`. The instance must have write permissions to the bucket and read access to the file.
        "pvkPassword": "A String", # Password that encrypts the private key.
        "pvkPath": "A String", # Path to the Certificate Private Key (.pvk) in Cloud Storage, in the form `gs://bucketName/fileName`. The instance must have write permissions to the bucket and read access to the file.
      },
      "noRecovery": True or False, # Whether or not the backup import restores the database with the NORECOVERY option. Applies only to Cloud SQL for SQL Server.
      "recoveryOnly": True or False, # Whether or not the import request only brings the database online without downloading BAK content. Only one of "no_recovery" and "recovery_only" can be true; otherwise, an error is returned. Applies only to Cloud SQL for SQL Server.
      "stopAt": "A String", # Optional. The timestamp when the import should stop. This timestamp is in the [RFC 3339](https://tools.ietf.org/html/rfc3339) format (for example, `2023-10-01T16:19:00.094`). This field is equivalent to the STOPAT keyword and applies to Cloud SQL for SQL Server only.
      "stopAtMark": "A String", # Optional. The marked transaction where the import should stop. This field is equivalent to the STOPATMARK keyword and applies to Cloud SQL for SQL Server only.
      "striped": True or False, # Whether or not the backup set being restored is striped. Applies only to Cloud SQL for SQL Server.
    },
    "csvImportOptions": { # Options for importing data as CSV.
      "columns": [ # The columns to which CSV data is imported. If not specified, all columns of the database table are loaded with CSV data.
        "A String",
      ],
      "escapeCharacter": "A String", # Specifies the character that should appear before a data character that needs to be escaped.
      "fieldsTerminatedBy": "A String", # Specifies the character that separates columns within each row (line) of the file.
      "linesTerminatedBy": "A String", # This is used to separate lines. If a line does not contain all fields, the rest of the columns are set to their default values.
      "quoteCharacter": "A String", # Specifies the quoting character to be used when a data value is quoted.
      "table": "A String", # The table to which CSV data is imported.
    },
    "database": "A String", # The target database for the import. If `fileType` is `SQL`, this field is required only if the import file does not specify a database, and is overridden by any database specification in the import file. If `fileType` is `CSV`, one database must be specified.
    "fileType": "A String", # The file type for the specified uri. * `SQL`: The file contains SQL statements. * `CSV`: The file contains CSV data. * `BAK`: The file contains backup data for a SQL Server instance.
    "importUser": "A String", # The PostgreSQL user for this import operation. PostgreSQL instances only.
    "kind": "A String", # This is always `sql#importContext`.
    "sqlImportOptions": { # Optional. Options for importing data from SQL statements.
      "parallel": True or False, # Optional. Whether or not the import should be parallel.
      "postgresImportOptions": { # Optional. Options for importing from a Cloud SQL for PostgreSQL instance.
        "clean": True or False, # Optional. The --clean flag for the pg_restore utility. This flag applies only if you enabled Cloud SQL to import files in parallel.
        "ifExists": True or False, # Optional. The --if-exists flag for the pg_restore utility. This flag applies only if you enabled Cloud SQL to import files in parallel.
      },
      "threads": 42, # Optional. The number of threads to use for parallel import.
    },
    "uri": "A String", # Path to the import file in Cloud Storage, in the form `gs://bucketName/fileName`. Compressed gzip files (.gz) are supported when `fileType` is `SQL`. The instance must have write permissions to the bucket and read access to the file.
  },
  "insertTime": "A String", # The time this operation was enqueued in UTC timezone in [RFC 3339](https://tools.ietf.org/html/rfc3339) format, for example `2012-11-15T16:19:00.094Z`.
  "kind": "A String", # This is always `sql#operation`.
  "name": "A String", # An identifier that uniquely identifies the operation. You can use this identifier to retrieve the Operations resource that has information about the operation.
  "operationType": "A String", # The type of the operation. Valid values are: * `CREATE` * `DELETE` * `UPDATE` * `RESTART` * `IMPORT` * `EXPORT` * `BACKUP_VOLUME` * `RESTORE_VOLUME` * `CREATE_USER` * `DELETE_USER` * `CREATE_DATABASE` * `DELETE_DATABASE`
  "selfLink": "A String", # The URI of this resource.
  "startTime": "A String", # The time this operation actually started in UTC timezone in [RFC 3339](https://tools.ietf.org/html/rfc3339) format, for example `2012-11-15T16:19:00.094Z`.
  "status": "A String", # The status of an operation.
  "targetId": "A String", # Name of the database instance related to this operation.
  "targetLink": "A String",
  "targetProject": "A String", # The project ID of the target instance related to this operation.
  "user": "A String", # The email address of the user who initiated this operation.
}
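As a usage sketch for `import_`, the request body above can be assembled as a plain dict. The helper name, bucket, database, and instance identifiers below are hypothetical; the actual call (commented out) requires `google-api-python-client` and credentials with the `sqladmin` scope:

```python
def build_csv_import_body(bucket: str, obj: str, database: str, table: str) -> dict:
    """Builds a request body for instances().import_() loading a CSV file.

    A minimal sketch; field names follow the ImportContext schema above.
    """
    return {
        "importContext": {
            "fileType": "CSV",
            "uri": f"gs://{bucket}/{obj}",  # instance needs read access to the file
            "database": database,           # required when fileType is CSV
            "csvImportOptions": {"table": table},
        }
    }

body = build_csv_import_body("my-bucket", "users.csv", "appdb", "users")

# The call itself needs credentials, so it is shown but not executed here:
# from googleapiclient import discovery  # pip install google-api-python-client
# service = discovery.build("sqladmin", "v1")
# operation = service.instances().import_(
#     project="my-project", instance="my-instance", body=body).execute()
```

The returned Operation is asynchronous; poll it via the `operations` resource until `status` is `DONE`.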
insert(project, body=None, x__xgafv=None)
Creates a new Cloud SQL instance.

Args:
  project: string, Project ID of the project to which the newly created Cloud SQL instances should belong. (required)
  body: object, The request body.
    The object takes the form of:

{ # A Cloud SQL instance resource.
  "availableMaintenanceVersions": [ # Output only. A list of all maintenance versions applicable to the instance.
    "A String",
  ],
  "backendType": "A String", # The backend type. `SECOND_GEN`: Cloud SQL database instance. `EXTERNAL`: A database server that is not managed by Google. This property is read-only; use the `tier` property in the `settings` object to determine the database type.
  "connectionName": "A String", # Connection name of the Cloud SQL instance used in connection strings.
  "createTime": "A String", # Output only. The time when the instance was created in [RFC 3339](https://tools.ietf.org/html/rfc3339) format, for example `2012-11-15T16:19:00.094Z`.
  "currentDiskSize": "A String", # The current disk usage of the instance in bytes. This property has been deprecated. Use the "cloudsql.googleapis.com/database/disk/bytes_used" metric in Cloud Monitoring API instead. Please see [this announcement](https://groups.google.com/d/msg/google-cloud-sql-announce/I_7-F9EBhT0/BtvFtdFeAgAJ) for details.
  "databaseInstalledVersion": "A String", # Output only. Stores the current database version running on the instance including minor version such as `MYSQL_8_0_18`.
  "databaseVersion": "A String", # The database engine type and version. The `databaseVersion` field cannot be changed after instance creation.
  "diskEncryptionConfiguration": { # Disk encryption configuration for an instance. # Disk encryption configuration specific to an instance.
    "kind": "A String", # This is always `sql#diskEncryptionConfiguration`.
    "kmsKeyName": "A String", # Resource name of KMS key for disk encryption
  },
  "diskEncryptionStatus": { # Disk encryption status for an instance. # Disk encryption status specific to an instance.
    "kind": "A String", # This is always `sql#diskEncryptionStatus`.
    "kmsKeyVersionName": "A String", # KMS key version used to encrypt the Cloud SQL instance resource
  },
  "dnsName": "A String", # Output only. The DNS name of the instance.
  "etag": "A String", # This field is deprecated and will be removed from a future version of the API. Use the `settings.settingsVersion` field instead.
  "failoverReplica": { # The name and status of the failover replica.
    "available": True or False, # The availability status of the failover replica. A false status indicates that the failover replica is out of sync. The primary instance can only failover to the failover replica when the status is true.
    "name": "A String", # The name of the failover replica. If specified at instance creation, a failover replica is created for the instance. The name doesn't include the project ID.
  },
  "gceZone": "A String", # The Compute Engine zone that the instance is currently serving from. This value could be different from the zone that was specified when the instance was created if the instance has failed over to its secondary zone. WARNING: Changing this might restart the instance.
  "geminiConfig": { # Gemini instance configuration. # Gemini instance configuration.
    "activeQueryEnabled": True or False, # Output only. Whether the active query is enabled.
    "entitled": True or False, # Output only. Whether Gemini is enabled.
    "flagRecommenderEnabled": True or False, # Output only. Whether the flag recommender is enabled.
    "googleVacuumMgmtEnabled": True or False, # Output only. Whether the vacuum management is enabled.
    "indexAdvisorEnabled": True or False, # Output only. Whether the index advisor is enabled.
    "oomSessionCancelEnabled": True or False, # Output only. Whether canceling the out-of-memory (OOM) session is enabled.
  },
  "instanceType": "A String", # The instance type.
  "ipAddresses": [ # The assigned IP addresses for the instance.
    { # Database instance IP mapping
      "ipAddress": "A String", # The IP address assigned.
      "timeToRetire": "A String", # The due time for this IP to be retired in [RFC 3339](https://tools.ietf.org/html/rfc3339) format, for example `2012-11-15T16:19:00.094Z`. This field is only available when the IP is scheduled to be retired.
      "type": "A String", # The type of this IP address. A `PRIMARY` address is a public address that can accept incoming connections. A `PRIVATE` address is a private address that can accept incoming connections. An `OUTGOING` address is the source address of connections originating from the instance, if supported.
    },
  ],
  "ipv6Address": "A String", # The IPv6 address assigned to the instance. (Deprecated) This property was applicable only to First Generation instances.
  "kind": "A String", # This is always `sql#instance`.
  "maintenanceVersion": "A String", # The current software version on the instance.
  "masterInstanceName": "A String", # The name of the instance which will act as primary in the replication setup.
  "maxDiskSize": "A String", # The maximum disk size of the instance in bytes.
  "name": "A String", # Name of the Cloud SQL instance. This does not include the project ID.
  "onPremisesConfiguration": { # On-premises instance configuration. # Configuration specific to on-premises instances.
    "caCertificate": "A String", # PEM representation of the trusted CA's x509 certificate.
    "clientCertificate": "A String", # PEM representation of the replica's x509 certificate.
    "clientKey": "A String", # PEM representation of the replica's private key. The corresponding public key is encoded in the client's certificate.
    "dumpFilePath": "A String", # The dump file to create the Cloud SQL replica.
    "hostPort": "A String", # The host and port of the on-premises instance in host:port format
    "kind": "A String", # This is always `sql#onPremisesConfiguration`.
    "password": "A String", # The password for connecting to on-premises instance.
    "selectedObjects": [ # Optional. A list of objects that the user selects for replication from an external source instance.
      { # A list of objects that the user selects for replication from an external source instance.
        "database": "A String", # Required. The name of the database to migrate.
      },
    ],
    "sourceInstance": { # Reference to another Cloud SQL instance. # The reference to Cloud SQL instance if the source is Cloud SQL.
      "name": "A String", # The name of the Cloud SQL instance being referenced. This does not include the project ID.
      "project": "A String", # The project ID of the Cloud SQL instance being referenced. The default is the same project ID as the instance that references it.
      "region": "A String", # The region of the Cloud SQL instance being referenced.
    },
    "sslOption": "A String", # Optional. SslOption for replica connection to the on-premises source.
    "username": "A String", # The username for connecting to on-premises instance.
  },
  "outOfDiskReport": { # This message wraps the information written by the out-of-disk detection job. # The report generated by the proactive database wellness job for OutOfDisk issues. Writers: the proactive database wellness job for OOD. Readers: the proactive database wellness job.
    "sqlMinRecommendedIncreaseSizeGb": 42, # The minimum recommended increase size, in gigabytes. This field is consumed by the frontend. Writers: the proactive database wellness job for OOD.
    "sqlOutOfDiskState": "A String", # The state generated by the proactive database wellness job for OutOfDisk issues. Writers: the proactive database wellness job for OOD. Readers: the proactive database wellness job.
  },
  "primaryDnsName": "A String", # Output only. DEPRECATED: please use write_endpoint instead.
  "project": "A String", # The project ID of the project containing the Cloud SQL instance. The Google apps domain is prefixed if applicable.
  "pscServiceAttachmentLink": "A String", # Output only. The link to service attachment of PSC instance.
  "region": "A String", # The geographical region of the Cloud SQL instance. It can be one of the [regions](https://cloud.google.com/sql/docs/mysql/locations#location-r) where Cloud SQL operates: For example, `asia-east1`, `europe-west1`, and `us-central1`. The default value is `us-central1`.
  "replicaConfiguration": { # Read-replica configuration for connecting to the primary instance. # Configuration specific to failover replicas and read replicas.
    "cascadableReplica": True or False, # Optional. Specifies if a SQL Server replica is a cascadable replica. A cascadable replica is a SQL Server cross region replica that supports replica(s) under it.
    "failoverTarget": True or False, # Specifies if the replica is the failover target. If the field is set to `true`, the replica is designated as a failover replica. If the primary instance fails, the replica instance is promoted as the new primary instance. Only one replica can be specified as the failover target, and the replica must be in a different zone from the primary instance.
    "kind": "A String", # This is always `sql#replicaConfiguration`.
    "mysqlReplicaConfiguration": { # Read-replica configuration specific to MySQL databases. # MySQL specific configuration when replicating from a MySQL on-premises primary instance. Replication configuration information such as the username, password, certificates, and keys are not stored in the instance metadata. The configuration information is used only to set up the replication connection and is stored by MySQL in a file named `master.info` in the data directory.
      "caCertificate": "A String", # PEM representation of the trusted CA's x509 certificate.
      "clientCertificate": "A String", # PEM representation of the replica's x509 certificate.
      "clientKey": "A String", # PEM representation of the replica's private key. The corresponding public key is encoded in the client's certificate.
      "connectRetryInterval": 42, # Seconds to wait between connect retries. MySQL's default is 60 seconds.
      "dumpFilePath": "A String", # Path to a SQL dump file in Google Cloud Storage from which the replica instance is to be created. The URI is in the form gs://bucketName/fileName. Compressed gzip files (.gz) are also supported. Dumps must include the binlog coordinates from which replication begins, which can be accomplished by setting --master-data to 1 when using mysqldump.
      "kind": "A String", # This is always `sql#mysqlReplicaConfiguration`.
      "masterHeartbeatPeriod": "A String", # Interval in milliseconds between replication heartbeats.
      "password": "A String", # The password for the replication connection.
      "sslCipher": "A String", # A list of permissible ciphers to use for SSL encryption.
      "username": "A String", # The username for the replication connection.
      "verifyServerCertificate": True or False, # Whether or not to check the primary instance's Common Name value in the certificate that it sends during the SSL handshake.
    },
  },
  "replicaNames": [ # The replicas of the instance.
    "A String",
  ],
  "replicationCluster": { # A primary instance and disaster recovery (DR) replica pair. A DR replica is a cross-region replica that you designate for failover in the event that the primary instance has regional failure. Only applicable to MySQL. # A primary instance and disaster recovery (DR) replica pair. A DR replica is a cross-region replica that you designate for failover in the event that the primary instance experiences regional failure. Only applicable to MySQL.
    "drReplica": True or False, # Output only. Read-only field that indicates whether the replica is a DR replica. This field is not set if the instance is a primary instance.
    "failoverDrReplicaName": "A String", # Optional. If the instance is a primary instance, then this field identifies the disaster recovery (DR) replica. A DR replica is an optional configuration for Enterprise Plus edition instances. If the instance is a read replica, then the field is not set. Set this field to a replica name to designate a DR replica for a primary instance. Remove the replica name to remove the DR replica designation.
    "psaWriteEndpoint": "A String", # Output only. If set, this instance has a private service access (PSA) DNS endpoint that points to the primary instance of the cluster. If this instance is the primary, the DNS endpoint points to this instance. After a switchover or replica failover, this DNS endpoint points to the promoted instance. This is a read-only field, returned to the user for information. This field can exist even if a standalone instance does not yet have a replica, or had a DR replica that was deleted.
  },
  "rootPassword": "A String", # Initial root password. Use only on creation. You must set root passwords before you can connect to PostgreSQL instances.
  "satisfiesPzi": True or False, # Output only. This status indicates whether the instance satisfies PZI. The status is reserved for future use.
  "satisfiesPzs": True or False, # This status indicates whether the instance satisfies PZS. The status is reserved for future use.
  "scheduledMaintenance": { # Any scheduled maintenance for this instance. # The start time of any upcoming scheduled maintenance for this instance.
    "canDefer": True or False, # If the scheduled maintenance can be deferred.
    "canReschedule": True or False, # If the scheduled maintenance can be rescheduled.
    "scheduleDeadlineTime": "A String", # Maintenance cannot be rescheduled to start beyond this deadline.
    "startTime": "A String", # The start time of any upcoming scheduled maintenance for this instance.
  },
  "secondaryGceZone": "A String", # The Compute Engine zone that the failover instance is currently serving from for a regional instance. This value could be different from the zone that was specified when the instance was created if the instance has failed over to its secondary/failover zone.
  "selfLink": "A String", # The URI of this resource.
  "serverCaCert": { # SslCerts Resource # SSL configuration.
    "cert": "A String", # PEM representation.
    "certSerialNumber": "A String", # Serial number, as extracted from the certificate.
    "commonName": "A String", # User supplied name. Constrained to [a-zA-Z.-_ ]+.
    "createTime": "A String", # The time when the certificate was created in [RFC 3339](https://tools.ietf.org/html/rfc3339) format, for example `2012-11-15T16:19:00.094Z`.
    "expirationTime": "A String", # The time when the certificate expires in [RFC 3339](https://tools.ietf.org/html/rfc3339) format, for example `2012-11-15T16:19:00.094Z`.
    "instance": "A String", # Name of the database instance.
    "kind": "A String", # This is always `sql#sslCert`.
    "selfLink": "A String", # The URI of this resource.
    "sha1Fingerprint": "A String", # Sha1 Fingerprint.
  },
  "serviceAccountEmailAddress": "A String", # The service account email address assigned to the instance. This property is read-only.
  "settings": { # Database instance settings. # The user settings.
    "activationPolicy": "A String", # The activation policy specifies when the instance is activated; it is applicable only when the instance state is RUNNABLE. Valid values: * `ALWAYS`: The instance is on, and remains so even in the absence of connection requests. * `NEVER`: The instance is off; it is not activated, even if a connection request arrives.
    "activeDirectoryConfig": { # Active Directory configuration, relevant only for Cloud SQL for SQL Server. # Active Directory configuration, relevant only for Cloud SQL for SQL Server.
      "domain": "A String", # The name of the domain (e.g., mydomain.com).
      "kind": "A String", # This is always sql#activeDirectoryConfig.
    },
    "advancedMachineFeatures": { # Specifies options for controlling advanced machine features. # Specifies advanced machine configuration for the instances relevant only for SQL Server.
      "threadsPerCore": 42, # The number of threads per physical core.
    },
    "authorizedGaeApplications": [ # The App Engine app IDs that can access this instance. (Deprecated) Applied to First Generation instances only.
      "A String",
    ],
    "availabilityType": "A String", # Availability type. Potential values: * `ZONAL`: The instance serves data from only one zone. Outages in that zone affect data accessibility. * `REGIONAL`: The instance can serve data from more than one zone in a region (it is highly available). For more information, see [Overview of the High Availability Configuration](https://cloud.google.com/sql/docs/mysql/high-availability).
    "backupConfiguration": { # Database instance backup configuration. # The daily backup configuration for the instance.
      "backupRetentionSettings": { # We currently only support backup retention by specifying the number of backups we will retain. # Backup retention settings.
        "retainedBackups": 42, # Depending on the value of retention_unit, this is used to determine if a backup needs to be deleted. If retention_unit is 'COUNT', we will retain this many backups.
        "retentionUnit": "A String", # The unit that 'retained_backups' represents.
      },
      "binaryLogEnabled": True or False, # (MySQL only) Whether the binary log is enabled. If the backup configuration is disabled, the binary log must be disabled as well.
      "enabled": True or False, # Whether this configuration is enabled.
      "kind": "A String", # This is always `sql#backupConfiguration`.
      "location": "A String", # Location of the backup
      "pointInTimeRecoveryEnabled": True or False, # Whether point in time recovery is enabled.
      "replicationLogArchivingEnabled": True or False, # Reserved for future use.
      "startTime": "A String", # Start time for the daily backup configuration in UTC timezone in the 24 hour format - `HH:MM`.
      "transactionLogRetentionDays": 42, # The number of days of transaction logs we retain for point in time restore, from 1-7.
      "transactionalLogStorageState": "A String", # Output only. This value contains the storage location of transactional logs for the database for point-in-time recovery.
    },
    "collation": "A String", # The name of server Instance collation.
    "connectorEnforcement": "A String", # Specifies if connections must use Cloud SQL connectors. Option values include the following: `NOT_REQUIRED` (Cloud SQL instances can be connected without Cloud SQL Connectors) and `REQUIRED` (only allow connections that use Cloud SQL Connectors). Note that using REQUIRED disables all existing authorized networks. If this field is not specified when creating a new instance, NOT_REQUIRED is used. If this field is not specified when patching or updating an existing instance, it is left unchanged in the instance.
    "crashSafeReplicationEnabled": True or False, # Configuration specific to read replica instances. Indicates whether database flags for crash-safe replication are enabled. This property was only applicable to First Generation instances.
    "dataCacheConfig": { # Data cache configurations. # Configuration for data cache.
      "dataCacheEnabled": True or False, # Whether data cache is enabled for the instance.
    },
    "dataDiskSizeGb": "A String", # The size of the data disk, in GB. The minimum data disk size is 10 GB.
    "dataDiskType": "A String", # The type of data disk: `PD_SSD` (default) or `PD_HDD`. Not used for First Generation instances.
    "databaseFlags": [ # The database flags passed to the instance at startup.
      { # Database flags for Cloud SQL instances.
        "name": "A String", # The name of the flag. These flags are passed at instance startup, so include both server options and system variables. Flags are specified with underscores, not hyphens. For more information, see [Configuring Database Flags](https://cloud.google.com/sql/docs/mysql/flags) in the Cloud SQL documentation.
        "value": "A String", # The value of the flag. Boolean flags are set to `on` for true and `off` for false. This field must be omitted if the flag doesn't take a value.
      },
    ],
    "databaseReplicationEnabled": True or False, # Configuration specific to read replica instances. Indicates whether replication is enabled or not. WARNING: Changing this restarts the instance.
    "deletionProtectionEnabled": True or False, # Configuration to protect against accidental instance deletion.
    "denyMaintenancePeriods": [ # Deny maintenance periods
      { # Deny maintenance period. This specifies a date range during which all CSA rollouts are denied.
        "endDate": "A String", # "Deny maintenance period" end date. If the year of the end date is empty, the year of the start date also must be empty. In this case, the deny maintenance period recurs every year. The date is in format yyyy-mm-dd, for example 2020-11-01, or mm-dd, for example 11-01.
        "startDate": "A String", # "Deny maintenance period" start date. If the year of the start date is empty, the year of the end date also must be empty. In this case, the deny maintenance period recurs every year. The date is in format yyyy-mm-dd, for example 2020-11-01, or mm-dd, for example 11-01.
        "time": "A String", # Time in UTC when the "deny maintenance period" starts on start_date and ends on end_date. The time is in format HH:mm:SS, for example 00:00:00.
      },
    ],
    "edition": "A String", # Optional. The edition of the instance.
    "enableDataplexIntegration": True or False, # Optional. By default, Cloud SQL instances have schema extraction disabled for Dataplex. When this parameter is set to true, schema extraction for Dataplex on Cloud SQL instances is activated.
    "enableGoogleMlIntegration": True or False, # Optional. When this parameter is set to true, Cloud SQL instances can connect to Vertex AI to pass requests for real-time predictions and insights to the AI. The default value is false. This applies only to Cloud SQL for PostgreSQL instances.
    "insightsConfig": { # Insights configuration. This specifies when Cloud SQL Insights feature is enabled and optional configuration. # Insights configuration, for now relevant only for Postgres.
      "queryInsightsEnabled": True or False, # Whether Query Insights feature is enabled.
      "queryPlansPerMinute": 42, # Number of query execution plans captured by Insights per minute for all queries combined. Default is 5.
      "queryStringLength": 42, # Maximum query length stored, in bytes. Default value: 1024 bytes. Range: 256-4500 bytes. Queries longer than this field value are truncated to this value. When unset, query length is the default value. Changing the query length restarts the database.
      "recordApplicationTags": True or False, # Whether Query Insights will record application tags from query when enabled.
      "recordClientAddress": True or False, # Whether Query Insights will record client address when enabled.
    },
    "ipConfiguration": { # IP Management configuration. # The settings for IP Management. This allows you to enable or disable the instance IP address and manage which external networks can connect to the instance. The IPv4 address cannot be disabled for Second Generation instances.
      "allocatedIpRange": "A String", # The name of the allocated IP range for the private IP Cloud SQL instance. For example: "google-managed-services-default". If set, the instance IP address is created in the allocated range. The range name must comply with [RFC 1035](https://tools.ietf.org/html/rfc1035). Specifically, the name must be 1-63 characters long and match the regular expression `[a-z]([-a-z0-9]*[a-z0-9])?.`
      "authorizedNetworks": [ # The list of external networks that are allowed to connect to the instance using the IP. In 'CIDR' notation, also known as 'slash' notation (for example: `157.197.200.0/24`).
        { # An entry for an Access Control list.
          "expirationTime": "A String", # The time when this access control entry expires in [RFC 3339](https://tools.ietf.org/html/rfc3339) format, for example `2012-11-15T16:19:00.094Z`.
          "kind": "A String", # This is always `sql#aclEntry`.
          "name": "A String", # Optional. A label to identify this entry.
          "value": "A String", # The allowlisted value for the access control list.
        },
      ],
      "enablePrivatePathForGoogleCloudServices": True or False, # Controls connectivity to private IP instances from Google services, such as BigQuery.
      "ipv4Enabled": True or False, # Whether the instance is assigned a public IP address or not.
      "privateNetwork": "A String", # The resource link for the VPC network from which the Cloud SQL instance is accessible for private IP. For example, `/projects/myProject/global/networks/default`. This setting can be updated, but it cannot be removed after it is set.
      "pscConfig": { # PSC settings for a Cloud SQL instance. # PSC settings for this instance.
        "allowedConsumerProjects": [ # Optional. The list of consumer projects that are allow-listed for PSC connections to this instance. This instance can be connected to with PSC from any network in these projects. Each consumer project in this list may be represented by a project number (numeric) or by a project id (alphanumeric).
          "A String",
        ],
        "pscAutoConnections": [ # Optional. The list of settings for requested Private Service Connect consumer endpoints that can be used to connect to this Cloud SQL instance.
          { # Settings for an automatically-setup Private Service Connect consumer endpoint that is used to connect to a Cloud SQL instance.
            "consumerNetwork": "A String", # The consumer network of this consumer endpoint. This must be a resource path that includes both the host project and the network name. For example, `projects/project1/global/networks/network1`. The consumer host project of this network might be different from the consumer service project.
            "consumerNetworkStatus": "A String", # The connection policy status of the consumer network.
            "consumerProject": "A String", # Optional. The project ID of the consumer service project of this consumer endpoint. Only applicable if consumer_network is a shared VPC network.
            "ipAddress": "A String", # The IP address of the consumer endpoint.
            "status": "A String", # The connection status of the consumer endpoint.
          },
        ],
        "pscEnabled": True or False, # Whether PSC connectivity is enabled for this instance.
      },
      "requireSsl": True or False, # Use `ssl_mode` instead. Whether SSL/TLS connections over IP are enforced. If set to false, then allow both non-SSL/non-TLS and SSL/TLS connections. For SSL/TLS connections, the client certificate won't be verified. If set to true, then only allow connections encrypted with SSL/TLS and with valid client certificates. If you want to enforce SSL/TLS without enforcing the requirement for valid client certificates, then use the `ssl_mode` flag instead of the legacy `require_ssl` flag.
      "serverCaMode": "A String", # Specify what type of CA is used for the server certificate.
      "sslMode": "A String", # Specify how SSL/TLS is enforced in database connections. If you must use the `require_ssl` flag for backward compatibility, then only the following value pairs are valid: For PostgreSQL and MySQL: * `ssl_mode=ALLOW_UNENCRYPTED_AND_ENCRYPTED` and `require_ssl=false` * `ssl_mode=ENCRYPTED_ONLY` and `require_ssl=false` * `ssl_mode=TRUSTED_CLIENT_CERTIFICATE_REQUIRED` and `require_ssl=true` For SQL Server: * `ssl_mode=ALLOW_UNENCRYPTED_AND_ENCRYPTED` and `require_ssl=false` * `ssl_mode=ENCRYPTED_ONLY` and `require_ssl=true` The value of `ssl_mode` has priority over the value of `require_ssl`. For example, for the pair `ssl_mode=ENCRYPTED_ONLY` and `require_ssl=false`, `ssl_mode=ENCRYPTED_ONLY` means accept only SSL connections, while `require_ssl=false` means accept both non-SSL and SSL connections. In this case, MySQL and PostgreSQL databases respect `ssl_mode` and accept only SSL connections.
    },
    "kind": "A String", # This is always `sql#settings`.
    "locationPreference": { # Preferred location. This specifies where a Cloud SQL instance is located. Note that if the preferred location is not available, the instance will be located as close as possible within the region. Only one location may be specified. # The location preference settings. This allows the instance to be located as near as possible to either an App Engine app or Compute Engine zone for better performance. App Engine co-location was only applicable to First Generation instances.
      "followGaeApplication": "A String", # The App Engine application to follow, it must be in the same region as the Cloud SQL instance. WARNING: Changing this might restart the instance.
      "kind": "A String", # This is always `sql#locationPreference`.
      "secondaryZone": "A String", # The preferred Compute Engine zone for the secondary/failover (for example: us-central1-a, us-central1-b, etc.). To disable this field, set it to 'no_secondary_zone'.
      "zone": "A String", # The preferred Compute Engine zone (for example: us-central1-a, us-central1-b, etc.). WARNING: Changing this might restart the instance.
    },
    "maintenanceWindow": { # Maintenance window. This specifies when a Cloud SQL instance is restarted for system maintenance purposes. # The maintenance window for this instance. This specifies when the instance can be restarted for maintenance purposes.
      "day": 42, # Day of week - `MONDAY`, `TUESDAY`, `WEDNESDAY`, `THURSDAY`, `FRIDAY`, `SATURDAY`, or `SUNDAY`. Specify in the UTC time zone. Returned in output as an integer, 1 to 7, where `1` equals Monday.
      "hour": 42, # Hour of day - 0 to 23. Specify in the UTC time zone.
      "kind": "A String", # This is always `sql#maintenanceWindow`.
      "updateTrack": "A String", # Maintenance timing settings: `canary`, `stable`, or `week5`. For more information, see [About maintenance on Cloud SQL instances](https://cloud.google.com/sql/docs/mysql/maintenance).
    },
    "passwordValidationPolicy": { # Database instance local user password validation policy # The local user password validation policy of the instance.
      "complexity": "A String", # The complexity of the password.
      "disallowCompromisedCredentials": True or False, # This field is deprecated and will be removed in a future version of the API.
      "disallowUsernameSubstring": True or False, # Disallow username as a part of the password.
      "enablePasswordPolicy": True or False, # Whether the password policy is enabled or not.
      "minLength": 42, # Minimum number of characters allowed.
      "passwordChangeInterval": "A String", # Minimum interval after which the password can be changed. This flag is only supported for PostgreSQL.
      "reuseInterval": 42, # Number of previous passwords that cannot be reused.
    },
    "pricingPlan": "A String", # The pricing plan for this instance. This can be either `PER_USE` or `PACKAGE`. Only `PER_USE` is supported for Second Generation instances.
    "replicationType": "A String", # The type of replication this instance uses. This can be either `ASYNCHRONOUS` or `SYNCHRONOUS`. (Deprecated) This property was only applicable to First Generation instances.
    "settingsVersion": "A String", # The version of instance settings. This is a required field for update method to make sure concurrent updates are handled properly. During update, use the most recent settingsVersion value for this instance and do not try to update this value.
    "sqlServerAuditConfig": { # SQL Server specific audit configuration. # SQL Server specific audit configuration.
      "bucket": "A String", # The name of the destination bucket (e.g., gs://mybucket).
      "kind": "A String", # This is always `sql#sqlServerAuditConfig`.
      "retentionInterval": "A String", # How long to keep generated audit files.
      "uploadInterval": "A String", # How often to upload generated audit files.
    },
    "storageAutoResize": True or False, # Configuration to increase storage size automatically. The default value is true.
    "storageAutoResizeLimit": "A String", # The maximum size to which storage capacity can be automatically increased. The default value is 0, which specifies that there is no limit.
    "tier": "A String", # The tier (or machine type) for this instance, for example `db-custom-1-3840`. WARNING: Changing this restarts the instance.
    "timeZone": "A String", # Server timezone, relevant only for Cloud SQL for SQL Server.
    "userLabels": { # User-provided labels, represented as a dictionary where each label is a single key value pair.
      "a_key": "A String",
    },
  },
  "sqlNetworkArchitecture": "A String", # The SQL network architecture for the instance.
  "state": "A String", # The current serving state of the Cloud SQL instance.
  "suspensionReason": [ # If the instance state is SUSPENDED, the reason for the suspension.
    "A String",
  ],
  "switchTransactionLogsToCloudStorageEnabled": True or False, # Input only. Whether to switch storing point-in-time recovery log files from a data disk to Cloud Storage.
  "upgradableDatabaseVersions": [ # Output only. All database versions that are available for upgrade.
    { # An available database version. It can be a major or a minor version.
      "displayName": "A String", # The database version's display name.
      "majorVersion": "A String", # The version's major version name.
      "name": "A String", # The database version name. For MySQL 8.0, this string provides the database major and minor version.
    },
  ],
  "writeEndpoint": "A String", # Output only. The dns name of the primary instance in a replication group.
}

  x__xgafv: string, V1 error format.
    Allowed values
      1 - v1 error format
      2 - v2 error format

Returns:
  An object of the form:

    { # An Operation resource. For successful operations that return an Operation resource, only the fields relevant to the operation are populated in the resource.
  "acquireSsrsLeaseContext": { # Acquire SSRS lease context. # The context for acquire SSRS lease operation, if applicable.
    "duration": "A String", # Lease duration needed for the SSRS setup.
    "reportDatabase": "A String", # The report database to be used for the SSRS setup.
    "serviceLogin": "A String", # The username to be used as the service login to connect to the report database for SSRS setup.
    "setupLogin": "A String", # The username to be used as the setup login to connect to the database server for SSRS setup.
  },
  "apiWarning": { # An Admin API warning message. # An Admin API warning message.
    "code": "A String", # Code to uniquely identify the warning type.
    "message": "A String", # The warning message.
    "region": "A String", # The region name for REGION_UNREACHABLE warning.
  },
  "backupContext": { # Backup context. # The context for backup operation, if applicable.
    "backupId": "A String", # The identifier of the backup.
    "kind": "A String", # This is always `sql#backupContext`.
  },
  "endTime": "A String", # The time this operation finished in UTC timezone in [RFC 3339](https://tools.ietf.org/html/rfc3339) format, for example `2012-11-15T16:19:00.094Z`.
  "error": { # Database instance operation errors list wrapper. # If errors occurred during processing of this operation, this field will be populated.
    "errors": [ # The list of errors encountered while processing this operation.
      { # Database instance operation error.
        "code": "A String", # Identifies the specific error that occurred.
        "kind": "A String", # This is always `sql#operationError`.
        "message": "A String", # Additional information about the error encountered.
      },
    ],
    "kind": "A String", # This is always `sql#operationErrors`.
  },
  "exportContext": { # Database instance export context. # The context for export operation, if applicable.
    "bakExportOptions": { # Options for exporting BAK files (SQL Server-only)
      "bakType": "A String", # Type of this bak file will be export, FULL or DIFF, SQL Server only
      "copyOnly": True or False, # Deprecated: copy_only is deprecated. Use differential_base instead
      "differentialBase": True or False, # Whether or not the backup can be used as a differential base copy_only backup can not be served as differential base
      "exportLogEndTime": "A String", # Optional. The end timestamp when transaction log will be included in the export operation. [RFC 3339](https://tools.ietf.org/html/rfc3339) format (for example, `2023-10-01T16:19:00.094`) in UTC. When omitted, all available logs until current time will be included. Only applied to Cloud SQL for SQL Server.
      "exportLogStartTime": "A String", # Optional. The begin timestamp when transaction log will be included in the export operation. [RFC 3339](https://tools.ietf.org/html/rfc3339) format (for example, `2023-10-01T16:19:00.094`) in UTC. When omitted, all available logs from the beginning of retention period will be included. Only applied to Cloud SQL for SQL Server.
      "stripeCount": 42, # Option for specifying how many stripes to use for the export. If blank, and the value of the striped field is true, the number of stripes is automatically chosen.
      "striped": True or False, # Whether or not the export should be striped.
    },
    "csvExportOptions": { # Options for exporting data as CSV. `MySQL` and `PostgreSQL` instances only.
      "escapeCharacter": "A String", # Specifies the character that should appear before a data character that needs to be escaped.
      "fieldsTerminatedBy": "A String", # Specifies the character that separates columns within each row (line) of the file.
      "linesTerminatedBy": "A String", # This is used to separate lines. If a line does not contain all fields, the rest of the columns are set to their default values.
      "quoteCharacter": "A String", # Specifies the quoting character to be used when a data value is quoted.
      "selectQuery": "A String", # The select query used to extract the data.
    },
    "databases": [ # Databases to be exported. `MySQL instances:` If `fileType` is `SQL` and no database is specified, all databases are exported, except for the `mysql` system database. If `fileType` is `CSV`, you can specify one database, either by using this property or by using the `csvExportOptions.selectQuery` property, which takes precedence over this property. `PostgreSQL instances:` You must specify one database to be exported. If `fileType` is `CSV`, this database must match the one specified in the `csvExportOptions.selectQuery` property. `SQL Server instances:` You must specify one database to be exported, and the `fileType` must be `BAK`.
      "A String",
    ],
    "fileType": "A String", # The file type for the specified uri.
    "kind": "A String", # This is always `sql#exportContext`.
    "offload": True or False, # Option for export offload.
    "sqlExportOptions": { # Options for exporting data as SQL statements.
      "mysqlExportOptions": { # Options for exporting from MySQL.
        "masterData": 42, # Option to include SQL statement required to set up replication. If set to `1`, the dump file includes a CHANGE MASTER TO statement with the binary log coordinates, and --set-gtid-purged is set to ON. If set to `2`, the CHANGE MASTER TO statement is written as a SQL comment and has no effect. If set to any value other than `1`, --set-gtid-purged is set to OFF.
      },
      "parallel": True or False, # Optional. Whether or not the export should be parallel.
      "postgresExportOptions": { # Options for exporting from a Cloud SQL for PostgreSQL instance.
        "clean": True or False, # Optional. Use this option to include DROP SQL statements. These statements are used to delete database objects before running the import operation.
        "ifExists": True or False, # Optional. Option to include an IF EXISTS SQL statement with each DROP statement produced by clean.
      },
      "schemaOnly": True or False, # Export only schemas.
      "tables": [ # Tables to export, or that were exported, from the specified database. If you specify tables, specify one and only one database. For PostgreSQL instances, you can specify only one table.
        "A String",
      ],
      "threads": 42, # Optional. The number of threads to use for parallel export.
    },
    "uri": "A String", # The path to the file in Google Cloud Storage where the export will be stored. The URI is in the form `gs://bucketName/fileName`. If the file already exists, the request succeeds, but the operation fails. If `fileType` is `SQL` and the filename ends with .gz, the contents are compressed.
  },
  "importContext": { # Database instance import context. # The context for import operation, if applicable.
    "bakImportOptions": { # Import parameters specific to SQL Server .BAK files
      "bakType": "A String", # Type of the bak content, FULL or DIFF.
      "encryptionOptions": {
        "certPath": "A String", # Path to the Certificate (.cer) in Cloud Storage, in the form `gs://bucketName/fileName`. The instance must have write permissions to the bucket and read access to the file.
        "pvkPassword": "A String", # Password that encrypts the private key
        "pvkPath": "A String", # Path to the Certificate Private Key (.pvk) in Cloud Storage, in the form `gs://bucketName/fileName`. The instance must have write permissions to the bucket and read access to the file.
      },
      "noRecovery": True or False, # Whether or not the backup importing will restore database with NORECOVERY option Applies only to Cloud SQL for SQL Server.
      "recoveryOnly": True or False, # Whether or not the backup importing request will just bring database online without downloading Bak content only one of "no_recovery" and "recovery_only" can be true otherwise error will return. Applies only to Cloud SQL for SQL Server.
      "stopAt": "A String", # Optional. The timestamp when the import should stop. This timestamp is in the [RFC 3339](https://tools.ietf.org/html/rfc3339) format (for example, `2023-10-01T16:19:00.094`). This field is equivalent to the STOPAT keyword and applies to Cloud SQL for SQL Server only.
      "stopAtMark": "A String", # Optional. The marked transaction where the import should stop. This field is equivalent to the STOPATMARK keyword and applies to Cloud SQL for SQL Server only.
      "striped": True or False, # Whether or not the backup set being restored is striped. Applies only to Cloud SQL for SQL Server.
    },
    "csvImportOptions": { # Options for importing data as CSV.
      "columns": [ # The columns to which CSV data is imported. If not specified, all columns of the database table are loaded with CSV data.
        "A String",
      ],
      "escapeCharacter": "A String", # Specifies the character that should appear before a data character that needs to be escaped.
      "fieldsTerminatedBy": "A String", # Specifies the character that separates columns within each row (line) of the file.
      "linesTerminatedBy": "A String", # This is used to separate lines. If a line does not contain all fields, the rest of the columns are set to their default values.
      "quoteCharacter": "A String", # Specifies the quoting character to be used when a data value is quoted.
      "table": "A String", # The table to which CSV data is imported.
    },
    "database": "A String", # The target database for the import. If `fileType` is `SQL`, this field is required only if the import file does not specify a database, and is overridden by any database specification in the import file. If `fileType` is `CSV`, one database must be specified.
    "fileType": "A String", # The file type for the specified uri. * `SQL`: The file contains SQL statements. * `CSV`: The file contains CSV data. * `BAK`: The file contains backup data for a SQL Server instance.
    "importUser": "A String", # The PostgreSQL user for this import operation. PostgreSQL instances only.
    "kind": "A String", # This is always `sql#importContext`.
    "sqlImportOptions": { # Optional. Options for importing data from SQL statements.
      "parallel": True or False, # Optional. Whether or not the import should be parallel.
      "postgresImportOptions": { # Optional. Options for importing from a Cloud SQL for PostgreSQL instance.
        "clean": True or False, # Optional. The --clean flag for the pg_restore utility. This flag applies only if you enabled Cloud SQL to import files in parallel.
        "ifExists": True or False, # Optional. The --if-exists flag for the pg_restore utility. This flag applies only if you enabled Cloud SQL to import files in parallel.
      },
      "threads": 42, # Optional. The number of threads to use for parallel import.
    },
    "uri": "A String", # Path to the import file in Cloud Storage, in the form `gs://bucketName/fileName`. Compressed gzip files (.gz) are supported when `fileType` is `SQL`. The instance must have write permissions to the bucket and read access to the file.
  },
  "insertTime": "A String", # The time this operation was enqueued in UTC timezone in [RFC 3339](https://tools.ietf.org/html/rfc3339) format, for example `2012-11-15T16:19:00.094Z`.
  "kind": "A String", # This is always `sql#operation`.
  "name": "A String", # An identifier that uniquely identifies the operation. You can use this identifier to retrieve the Operations resource that has information about the operation.
  "operationType": "A String", # The type of the operation. Valid values are: * `CREATE` * `DELETE` * `UPDATE` * `RESTART` * `IMPORT` * `EXPORT` * `BACKUP_VOLUME` * `RESTORE_VOLUME` * `CREATE_USER` * `DELETE_USER` * `CREATE_DATABASE` * `DELETE_DATABASE`
  "selfLink": "A String", # The URI of this resource.
  "startTime": "A String", # The time this operation actually started in UTC timezone in [RFC 3339](https://tools.ietf.org/html/rfc3339) format, for example `2012-11-15T16:19:00.094Z`.
  "status": "A String", # The status of an operation.
  "targetId": "A String", # Name of the database instance related to this operation.
  "targetLink": "A String",
  "targetProject": "A String", # The project ID of the target instance related to this operation.
  "user": "A String", # The email address of the user who initiated this operation.
}
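
Methods that modify an instance (such as `clone` above) return an Operation resource of the form just described, and the operation completes asynchronously. The helper below is a minimal sketch of polling such an operation until it finishes; `get_operation` is a hypothetical stand-in for a real call such as `service.operations().get(project=..., operation=op['name']).execute()`, and only the `status` and `error` fields from the resource above are assumed.

```python
# Sketch: poll an Operation resource until its status is DONE.
# `get_operation` is a hypothetical callable standing in for
# service.operations().get(...).execute() from google-api-python-client.
import time

def wait_for_operation(get_operation, poll_interval=1.0, max_polls=100):
    """Poll until the operation's `status` is DONE, then return it.

    Raises RuntimeError if the finished operation carries an `error` list,
    and TimeoutError if it does not finish within `max_polls` polls.
    """
    for _ in range(max_polls):
        op = get_operation()
        if op.get("status") == "DONE":
            errors = op.get("error", {}).get("errors")
            if errors:
                raise RuntimeError(f"Operation failed: {errors}")
            return op
        time.sleep(poll_interval)
    raise TimeoutError("operation did not complete")
```

In real code, `get_operation` would re-fetch the operation by the `name` field of the resource returned above.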
list(project, filter=None, maxResults=None, pageToken=None, x__xgafv=None)
Lists instances under a given project.

Args:
  project: string, Project ID of the project for which to list Cloud SQL instances. (required)
  filter: string, A filter expression that filters resources listed in the response. The expression is in the form of field:value. For example, 'instanceType:CLOUD_SQL_INSTANCE'. Fields can be nested as needed, per their JSON representation, such as 'settings.userLabels.auto_start:true'. Multiple filter queries are space-separated; for example, 'state:RUNNABLE instanceType:CLOUD_SQL_INSTANCE'. By default, each expression is an AND expression. However, you can include AND and OR expressions explicitly.
  maxResults: integer, The maximum number of instances to return. The service may return fewer than this value. If unspecified, at most 500 instances are returned. The maximum value is 1000; values above 1000 are coerced to 1000.
  pageToken: string, A previously-returned page token representing part of the larger set of results to view.
  x__xgafv: string, V1 error format.
    Allowed values
      1 - v1 error format
      2 - v2 error format
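
The `maxResults` and `pageToken` parameters above implement standard page-based iteration. As a hedged sketch of consuming them, assuming the response carries `items` and a `nextPageToken` field, with `list_page` as a hypothetical stand-in for `service.instances().list(project=..., pageToken=...).execute()`:

```python
# Sketch: iterate over all pages of an instances().list response.
# `list_page` is a hypothetical callable that accepts a pageToken keyword
# and returns one page of the list response shape documented below.
def iter_instances(list_page):
    """Yield every instance across all pages of a list response."""
    page_token = None
    while True:
        resp = list_page(pageToken=page_token)
        yield from resp.get("items", [])
        page_token = resp.get("nextPageToken")
        if not page_token:
            return
```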

Returns:
  An object of the form:

    { # Database instances list response.
  "items": [ # List of database instance resources.
    { # A Cloud SQL instance resource.
      "availableMaintenanceVersions": [ # Output only. List all maintenance versions applicable on the instance
        "A String",
      ],
      "backendType": "A String", # The backend type. `SECOND_GEN`: Cloud SQL database instance. `EXTERNAL`: A database server that is not managed by Google. This property is read-only; use the `tier` property in the `settings` object to determine the database type.
      "connectionName": "A String", # Connection name of the Cloud SQL instance used in connection strings.
      "createTime": "A String", # Output only. The time when the instance was created in [RFC 3339](https://tools.ietf.org/html/rfc3339) format, for example `2012-11-15T16:19:00.094Z`.
      "currentDiskSize": "A String", # The current disk usage of the instance in bytes. This property has been deprecated. Use the "cloudsql.googleapis.com/database/disk/bytes_used" metric in Cloud Monitoring API instead. Please see [this announcement](https://groups.google.com/d/msg/google-cloud-sql-announce/I_7-F9EBhT0/BtvFtdFeAgAJ) for details.
      "databaseInstalledVersion": "A String", # Output only. Stores the current database version running on the instance including minor version such as `MYSQL_8_0_18`.
      "databaseVersion": "A String", # The database engine type and version. The `databaseVersion` field cannot be changed after instance creation.
      "diskEncryptionConfiguration": { # Disk encryption configuration for an instance. # Disk encryption configuration specific to an instance.
        "kind": "A String", # This is always `sql#diskEncryptionConfiguration`.
        "kmsKeyName": "A String", # Resource name of KMS key for disk encryption
      },
      "diskEncryptionStatus": { # Disk encryption status for an instance. # Disk encryption status specific to an instance.
        "kind": "A String", # This is always `sql#diskEncryptionStatus`.
        "kmsKeyVersionName": "A String", # KMS key version used to encrypt the Cloud SQL instance resource
      },
      "dnsName": "A String", # Output only. The dns name of the instance.
      "etag": "A String", # This field is deprecated and will be removed from a future version of the API. Use the `settings.settingsVersion` field instead.
      "failoverReplica": { # The name and status of the failover replica.
        "available": True or False, # The availability status of the failover replica. A false status indicates that the failover replica is out of sync. The primary instance can only failover to the failover replica when the status is true.
        "name": "A String", # The name of the failover replica. If specified at instance creation, a failover replica is created for the instance. The name doesn't include the project ID.
      },
      "gceZone": "A String", # The Compute Engine zone that the instance is currently serving from. This value could be different from the zone that was specified when the instance was created if the instance has failed over to its secondary zone. WARNING: Changing this might restart the instance.
      "geminiConfig": { # Gemini instance configuration. # Gemini instance configuration.
        "activeQueryEnabled": True or False, # Output only. Whether the active query is enabled.
        "entitled": True or False, # Output only. Whether Gemini is enabled.
        "flagRecommenderEnabled": True or False, # Output only. Whether the flag recommender is enabled.
        "googleVacuumMgmtEnabled": True or False, # Output only. Whether the vacuum management is enabled.
        "indexAdvisorEnabled": True or False, # Output only. Whether the index advisor is enabled.
        "oomSessionCancelEnabled": True or False, # Output only. Whether canceling the out-of-memory (OOM) session is enabled.
      },
      "instanceType": "A String", # The instance type.
      "ipAddresses": [ # The assigned IP addresses for the instance.
        { # Database instance IP mapping
          "ipAddress": "A String", # The IP address assigned.
          "timeToRetire": "A String", # The due time for this IP to be retired in [RFC 3339](https://tools.ietf.org/html/rfc3339) format, for example `2012-11-15T16:19:00.094Z`. This field is only available when the IP is scheduled to be retired.
          "type": "A String", # The type of this IP address. A `PRIMARY` address is a public address that can accept incoming connections. A `PRIVATE` address is a private address that can accept incoming connections. An `OUTGOING` address is the source address of connections originating from the instance, if supported.
        },
      ],
      "ipv6Address": "A String", # The IPv6 address assigned to the instance. (Deprecated) This property was applicable only to First Generation instances.
      "kind": "A String", # This is always `sql#instance`.
      "maintenanceVersion": "A String", # The current software version on the instance.
      "masterInstanceName": "A String", # The name of the instance which will act as primary in the replication setup.
      "maxDiskSize": "A String", # The maximum disk size of the instance in bytes.
      "name": "A String", # Name of the Cloud SQL instance. This does not include the project ID.
      "onPremisesConfiguration": { # On-premises instance configuration. # Configuration specific to on-premises instances.
        "caCertificate": "A String", # PEM representation of the trusted CA's x509 certificate.
        "clientCertificate": "A String", # PEM representation of the replica's x509 certificate.
        "clientKey": "A String", # PEM representation of the replica's private key. The corresponsing public key is encoded in the client's certificate.
        "dumpFilePath": "A String", # The dump file to create the Cloud SQL replica.
        "hostPort": "A String", # The host and port of the on-premises instance in host:port format
        "kind": "A String", # This is always `sql#onPremisesConfiguration`.
        "password": "A String", # The password for connecting to on-premises instance.
        "selectedObjects": [ # Optional. A list of objects that the user selects for replication from an external source instance.
          { # A list of objects that the user selects for replication from an external source instance.
            "database": "A String", # Required. The name of the database to migrate.
          },
        ],
        "sourceInstance": { # Reference to another Cloud SQL instance. # The reference to Cloud SQL instance if the source is Cloud SQL.
          "name": "A String", # The name of the Cloud SQL instance being referenced. This does not include the project ID.
          "project": "A String", # The project ID of the Cloud SQL instance being referenced. The default is the same project ID as the instance references it.
          "region": "A String", # The region of the Cloud SQL instance being referenced.
        },
        "sslOption": "A String", # Optional. SslOption for replica connection to the on-premises source.
        "username": "A String", # The username for connecting to on-premises instance.
      },
      "outOfDiskReport": { # This message wraps up the information written by out-of-disk detection job. # This field represents the report generated by the proactive database wellness job for OutOfDisk issues. * Writers: * the proactive database wellness job for OOD. * Readers: * the proactive database wellness job
        "sqlMinRecommendedIncreaseSizeGb": 42, # The minimum recommended increase size in GigaBytes This field is consumed by the frontend * Writers: * the proactive database wellness job for OOD. * Readers:
        "sqlOutOfDiskState": "A String", # This field represents the state generated by the proactive database wellness job for OutOfDisk issues. * Writers: * the proactive database wellness job for OOD. * Readers: * the proactive database wellness job
      },
      "primaryDnsName": "A String", # Output only. DEPRECATED: please use write_endpoint instead.
      "project": "A String", # The project ID of the project containing the Cloud SQL instance. The Google apps domain is prefixed if applicable.
      "pscServiceAttachmentLink": "A String", # Output only. The link to service attachment of PSC instance.
      "region": "A String", # The geographical region of the Cloud SQL instance. It can be one of the [regions](https://cloud.google.com/sql/docs/mysql/locations#location-r) where Cloud SQL operates: For example, `asia-east1`, `europe-west1`, and `us-central1`. The default value is `us-central1`.
      "replicaConfiguration": { # Read-replica configuration for connecting to the primary instance. # Configuration specific to failover replicas and read replicas.
        "cascadableReplica": True or False, # Optional. Specifies if a SQL Server replica is a cascadable replica. A cascadable replica is a SQL Server cross region replica that supports replica(s) under it.
        "failoverTarget": True or False, # Specifies if the replica is the failover target. If the field is set to `true` the replica will be designated as a failover replica. In case the primary instance fails, the replica instance will be promoted as the new primary instance. Only one replica can be specified as failover target, and the replica has to be in different zone with the primary instance.
        "kind": "A String", # This is always `sql#replicaConfiguration`.
        "mysqlReplicaConfiguration": { # Read-replica configuration specific to MySQL databases. # MySQL specific configuration when replicating from a MySQL on-premises primary instance. Replication configuration information such as the username, password, certificates, and keys are not stored in the instance metadata. The configuration information is used only to set up the replication connection and is stored by MySQL in a file named `master.info` in the data directory.
          "caCertificate": "A String", # PEM representation of the trusted CA's x509 certificate.
          "clientCertificate": "A String", # PEM representation of the replica's x509 certificate.
          "clientKey": "A String", # PEM representation of the replica's private key. The corresponsing public key is encoded in the client's certificate.
          "connectRetryInterval": 42, # Seconds to wait between connect retries. MySQL's default is 60 seconds.
          "dumpFilePath": "A String", # Path to a SQL dump file in Google Cloud Storage from which the replica instance is to be created. The URI is in the form gs://bucketName/fileName. Compressed gzip files (.gz) are also supported. Dumps have the binlog co-ordinates from which replication begins. This can be accomplished by setting --master-data to 1 when using mysqldump.
          "kind": "A String", # This is always `sql#mysqlReplicaConfiguration`.
          "masterHeartbeatPeriod": "A String", # Interval in milliseconds between replication heartbeats.
          "password": "A String", # The password for the replication connection.
          "sslCipher": "A String", # A list of permissible ciphers to use for SSL encryption.
          "username": "A String", # The username for the replication connection.
          "verifyServerCertificate": True or False, # Whether or not to check the primary instance's Common Name value in the certificate that it sends during the SSL handshake.
        },
      },
      "replicaNames": [ # The replicas of the instance.
        "A String",
      ],
      "replicationCluster": { # A primary instance and disaster recovery (DR) replica pair. A DR replica is a cross-region replica that you designate for failover in the event that the primary instance has regional failure. Only applicable to MySQL. # A primary instance and disaster recovery (DR) replica pair. A DR replica is a cross-region replica that you designate for failover in the event that the primary instance experiences regional failure. Only applicable to MySQL.
        "drReplica": True or False, # Output only. Read-only field that indicates whether the replica is a DR replica. This field is not set if the instance is a primary instance.
        "failoverDrReplicaName": "A String", # Optional. If the instance is a primary instance, then this field identifies the disaster recovery (DR) replica. A DR replica is an optional configuration for Enterprise Plus edition instances. If the instance is a read replica, then the field is not set. Set this field to a replica name to designate a DR replica for a primary instance. Remove the replica name to remove the DR replica designation.
        "psaWriteEndpoint": "A String", # Output only. If set, it indicates this instance has a private service access (PSA) dns endpoint that is pointing to the primary instance of the cluster. If this instance is the primary, the dns should be pointing to this instance. After Switchover or Replica failover, this DNS endpoint points to the promoted instance. This is a read-only field, returned to the user as information. This field can exist even if a standalone instance does not yet have a replica, or had a DR replica that was deleted.
      },
      "rootPassword": "A String", # Initial root password. Use only on creation. You must set root passwords before you can connect to PostgreSQL instances.
      "satisfiesPzi": True or False, # Output only. This status indicates whether the instance satisfies PZI. The status is reserved for future use.
      "satisfiesPzs": True or False, # This status indicates whether the instance satisfies PZS. The status is reserved for future use.
      "scheduledMaintenance": { # Any scheduled maintenance for this instance. # The start time of any upcoming scheduled maintenance for this instance.
        "canDefer": True or False,
        "canReschedule": True or False, # If the scheduled maintenance can be rescheduled.
        "scheduleDeadlineTime": "A String", # Maintenance cannot be rescheduled to start beyond this deadline.
        "startTime": "A String", # The start time of any upcoming scheduled maintenance for this instance.
      },
      "secondaryGceZone": "A String", # The Compute Engine zone that the failover instance is currently serving from for a regional instance. This value could be different from the zone that was specified when the instance was created if the instance has failed over to its secondary/failover zone.
      "selfLink": "A String", # The URI of this resource.
      "serverCaCert": { # SslCerts Resource # SSL configuration.
        "cert": "A String", # PEM representation.
        "certSerialNumber": "A String", # Serial number, as extracted from the certificate.
        "commonName": "A String", # User supplied name. Constrained to [a-zA-Z.-_ ]+.
        "createTime": "A String", # The time when the certificate was created in [RFC 3339](https://tools.ietf.org/html/rfc3339) format, for example `2012-11-15T16:19:00.094Z`.
        "expirationTime": "A String", # The time when the certificate expires in [RFC 3339](https://tools.ietf.org/html/rfc3339) format, for example `2012-11-15T16:19:00.094Z`.
        "instance": "A String", # Name of the database instance.
        "kind": "A String", # This is always `sql#sslCert`.
        "selfLink": "A String", # The URI of this resource.
        "sha1Fingerprint": "A String", # Sha1 Fingerprint.
      },
      "serviceAccountEmailAddress": "A String", # The service account email address assigned to the instance. \This property is read-only.
      "settings": { # Database instance settings. # The user settings.
        "activationPolicy": "A String", # The activation policy specifies when the instance is activated; it is applicable only when the instance state is RUNNABLE. Valid values: * `ALWAYS`: The instance is on, and remains so even in the absence of connection requests. * `NEVER`: The instance is off; it is not activated, even if a connection request arrives.
        "activeDirectoryConfig": { # Active Directory configuration, relevant only for Cloud SQL for SQL Server. # Active Directory configuration, relevant only for Cloud SQL for SQL Server.
          "domain": "A String", # The name of the domain (e.g., mydomain.com).
          "kind": "A String", # This is always sql#activeDirectoryConfig.
        },
        "advancedMachineFeatures": { # Specifies options for controlling advanced machine features. # Specifies advanced machine configuration for the instances relevant only for SQL Server.
          "threadsPerCore": 42, # The number of threads per physical core.
        },
        "authorizedGaeApplications": [ # The App Engine app IDs that can access this instance. (Deprecated) Applied to First Generation instances only.
          "A String",
        ],
        "availabilityType": "A String", # Availability type. Potential values: * `ZONAL`: The instance serves data from only one zone. Outages in that zone affect data accessibility. * `REGIONAL`: The instance can serve data from more than one zone in a region (it is highly available)./ For more information, see [Overview of the High Availability Configuration](https://cloud.google.com/sql/docs/mysql/high-availability).
        "backupConfiguration": { # Database instance backup configuration. # The daily backup configuration for the instance.
          "backupRetentionSettings": { # We currently only support backup retention by specifying the number of backups we will retain. # Backup retention settings.
            "retainedBackups": 42, # Depending on the value of retention_unit, this is used to determine if a backup needs to be deleted. If retention_unit is 'COUNT', we will retain this many backups.
            "retentionUnit": "A String", # The unit that 'retained_backups' represents.
          },
          "binaryLogEnabled": True or False, # (MySQL only) Whether binary log is enabled. If backup configuration is disabled, binarylog must be disabled as well.
          "enabled": True or False, # Whether this configuration is enabled.
          "kind": "A String", # This is always `sql#backupConfiguration`.
          "location": "A String", # Location of the backup
          "pointInTimeRecoveryEnabled": True or False, # Whether point in time recovery is enabled.
          "replicationLogArchivingEnabled": True or False, # Reserved for future use.
          "startTime": "A String", # Start time for the daily backup configuration in UTC timezone in the 24 hour format - `HH:MM`.
          "transactionLogRetentionDays": 42, # The number of days of transaction logs we retain for point in time restore, from 1-7.
          "transactionalLogStorageState": "A String", # Output only. This value contains the storage location of transactional logs for the database for point-in-time recovery.
        },
        "collation": "A String", # The name of server Instance collation.
        "connectorEnforcement": "A String", # Specifies if connections must use Cloud SQL connectors. Option values include the following: `NOT_REQUIRED` (Cloud SQL instances can be connected without Cloud SQL Connectors) and `REQUIRED` (Only allow connections that use Cloud SQL Connectors) Note that using REQUIRED disables all existing authorized networks. If this field is not specified when creating a new instance, NOT_REQUIRED is used. If this field is not specified when patching or updating an existing instance, it is left unchanged in the instance.
        "crashSafeReplicationEnabled": True or False, # Configuration specific to read replica instances. Indicates whether database flags for crash-safe replication are enabled. This property was only applicable to First Generation instances.
        "dataCacheConfig": { # Data cache configurations. # Configuration for data cache.
          "dataCacheEnabled": True or False, # Whether data cache is enabled for the instance.
        },
        "dataDiskSizeGb": "A String", # The size of data disk, in GB. The data disk size minimum is 10GB.
        "dataDiskType": "A String", # The type of data disk: `PD_SSD` (default) or `PD_HDD`. Not used for First Generation instances.
        "databaseFlags": [ # The database flags passed to the instance at startup.
          { # Database flags for Cloud SQL instances.
            "name": "A String", # The name of the flag. These flags are passed at instance startup, so include both server options and system variables. Flags are specified with underscores, not hyphens. For more information, see [Configuring Database Flags](https://cloud.google.com/sql/docs/mysql/flags) in the Cloud SQL documentation.
            "value": "A String", # The value of the flag. Boolean flags are set to `on` for true and `off` for false. This field must be omitted if the flag doesn't take a value.
          },
        ],
        "databaseReplicationEnabled": True or False, # Configuration specific to read replica instances. Indicates whether replication is enabled or not. WARNING: Changing this restarts the instance.
        "deletionProtectionEnabled": True or False, # Configuration to protect against accidental instance deletion.
        "denyMaintenancePeriods": [ # Deny maintenance periods
          { # Deny Maintenance Periods. This specifies a date range during when all CSA rollout will be denied.
            "endDate": "A String", # "deny maintenance period" end date. If the year of the end date is empty, the year of the start date also must be empty. In this case, it means the deny maintenance period recurs every year. The date is in format yyyy-mm-dd i.e., 2020-11-01, or mm-dd, i.e., 11-01
            "startDate": "A String", # "deny maintenance period" start date. If the year of the start date is empty, the year of the end date also must be empty. In this case, it means the deny maintenance period recurs every year. The date is in format yyyy-mm-dd i.e., 2020-11-01, or mm-dd, i.e., 11-01
            "time": "A String", # Time in UTC when the "deny maintenance period" starts on start_date and ends on end_date. The time is in format: HH:mm:SS, i.e., 00:00:00
          },
        ],
        "edition": "A String", # Optional. The edition of the instance.
        "enableDataplexIntegration": True or False, # Optional. By default, Cloud SQL instances have schema extraction disabled for Dataplex. When this parameter is set to true, schema extraction for Dataplex on Cloud SQL instances is activated.
        "enableGoogleMlIntegration": True or False, # Optional. When this parameter is set to true, Cloud SQL instances can connect to Vertex AI to pass requests for real-time predictions and insights to the AI. The default value is false. This applies only to Cloud SQL for PostgreSQL instances.
        "insightsConfig": { # Insights configuration. This specifies when Cloud SQL Insights feature is enabled and optional configuration. # Insights configuration, for now relevant only for Postgres.
          "queryInsightsEnabled": True or False, # Whether Query Insights feature is enabled.
          "queryPlansPerMinute": 42, # Number of query execution plans captured by Insights per minute for all queries combined. Default is 5.
          "queryStringLength": 42, # Maximum query length stored in bytes. Default value: 1024 bytes. Range: 256-4500 bytes. Query length more than this field value will be truncated to this value. When unset, query length will be the default value. Changing query length will restart the database.
          "recordApplicationTags": True or False, # Whether Query Insights will record application tags from query when enabled.
          "recordClientAddress": True or False, # Whether Query Insights will record client address when enabled.
        },
        "ipConfiguration": { # IP Management configuration. # The settings for IP Management. This allows to enable or disable the instance IP and manage which external networks can connect to the instance. The IPv4 address cannot be disabled for Second Generation instances.
          "allocatedIpRange": "A String", # The name of the allocated ip range for the private ip Cloud SQL instance. For example: "google-managed-services-default". If set, the instance ip will be created in the allocated range. The range name must comply with [RFC 1035](https://tools.ietf.org/html/rfc1035). Specifically, the name must be 1-63 characters long and match the regular expression `[a-z]([-a-z0-9]*[a-z0-9])?.`
          "authorizedNetworks": [ # The list of external networks that are allowed to connect to the instance using the IP. In 'CIDR' notation, also known as 'slash' notation (for example: `157.197.200.0/24`).
            { # An entry for an Access Control list.
              "expirationTime": "A String", # The time when this access control entry expires in [RFC 3339](https://tools.ietf.org/html/rfc3339) format, for example `2012-11-15T16:19:00.094Z`.
              "kind": "A String", # This is always `sql#aclEntry`.
              "name": "A String", # Optional. A label to identify this entry.
              "value": "A String", # The allowlisted value for the access control list.
            },
          ],
          "enablePrivatePathForGoogleCloudServices": True or False, # Controls connectivity to private IP instances from Google services, such as BigQuery.
          "ipv4Enabled": True or False, # Whether the instance is assigned a public IP address or not.
          "privateNetwork": "A String", # The resource link for the VPC network from which the Cloud SQL instance is accessible for private IP. For example, `/projects/myProject/global/networks/default`. This setting can be updated, but it cannot be removed after it is set.
          "pscConfig": { # PSC settings for a Cloud SQL instance. # PSC settings for this instance.
            "allowedConsumerProjects": [ # Optional. The list of consumer projects that are allow-listed for PSC connections to this instance. This instance can be connected to with PSC from any network in these projects. Each consumer project in this list may be represented by a project number (numeric) or by a project id (alphanumeric).
              "A String",
            ],
            "pscAutoConnections": [ # Optional. The list of settings for requested Private Service Connect consumer endpoints that can be used to connect to this Cloud SQL instance.
              { # Settings for an automatically-setup Private Service Connect consumer endpoint that is used to connect to a Cloud SQL instance.
                "consumerNetwork": "A String", # The consumer network of this consumer endpoint. This must be a resource path that includes both the host project and the network name. For example, `projects/project1/global/networks/network1`. The consumer host project of this network might be different from the consumer service project.
                "consumerNetworkStatus": "A String", # The connection policy status of the consumer network.
                "consumerProject": "A String", # This is the project ID of consumer service project of this consumer endpoint. Optional. This is only applicable if consumer_network is a shared vpc network.
                "ipAddress": "A String", # The IP address of the consumer endpoint.
                "status": "A String", # The connection status of the consumer endpoint.
              },
            ],
            "pscEnabled": True or False, # Whether PSC connectivity is enabled for this instance.
          },
          "requireSsl": True or False, # Use `ssl_mode` instead. Whether SSL/TLS connections over IP are enforced. If set to false, then allow both non-SSL/non-TLS and SSL/TLS connections. For SSL/TLS connections, the client certificate won't be verified. If set to true, then only allow connections encrypted with SSL/TLS and with valid client certificates. If you want to enforce SSL/TLS without enforcing the requirement for valid client certificates, then use the `ssl_mode` flag instead of the legacy `require_ssl` flag.
          "serverCaMode": "A String", # Specify what type of CA is used for the server certificate.
          "sslMode": "A String", # Specify how SSL/TLS is enforced in database connections. If you must use the `require_ssl` flag for backward compatibility, then only the following value pairs are valid: For PostgreSQL and MySQL: * `ssl_mode=ALLOW_UNENCRYPTED_AND_ENCRYPTED` and `require_ssl=false` * `ssl_mode=ENCRYPTED_ONLY` and `require_ssl=false` * `ssl_mode=TRUSTED_CLIENT_CERTIFICATE_REQUIRED` and `require_ssl=true` For SQL Server: * `ssl_mode=ALLOW_UNENCRYPTED_AND_ENCRYPTED` and `require_ssl=false` * `ssl_mode=ENCRYPTED_ONLY` and `require_ssl=true` The value of `ssl_mode` has priority over the value of `require_ssl`. For example, for the pair `ssl_mode=ENCRYPTED_ONLY` and `require_ssl=false`, `ssl_mode=ENCRYPTED_ONLY` means accept only SSL connections, while `require_ssl=false` means accept both non-SSL and SSL connections. In this case, MySQL and PostgreSQL databases respect `ssl_mode` and accepts only SSL connections.
        },
        "kind": "A String", # This is always `sql#settings`.
        "locationPreference": { # Preferred location. This specifies where a Cloud SQL instance is located. Note that if the preferred location is not available, the instance will be located as close as possible within the region. Only one location may be specified. # The location preference settings. This allows the instance to be located as near as possible to either an App Engine app or Compute Engine zone for better performance. App Engine co-location was only applicable to First Generation instances.
          "followGaeApplication": "A String", # The App Engine application to follow, it must be in the same region as the Cloud SQL instance. WARNING: Changing this might restart the instance.
          "kind": "A String", # This is always `sql#locationPreference`.
          "secondaryZone": "A String", # The preferred Compute Engine zone for the secondary/failover (for example: us-central1-a, us-central1-b, etc.). To disable this field, set it to 'no_secondary_zone'.
          "zone": "A String", # The preferred Compute Engine zone (for example: us-central1-a, us-central1-b, etc.). WARNING: Changing this might restart the instance.
        },
        "maintenanceWindow": { # Maintenance window. This specifies when a Cloud SQL instance is restarted for system maintenance purposes. # The maintenance window for this instance. This specifies when the instance can be restarted for maintenance purposes.
          "day": 42, # Day of week - `MONDAY`, `TUESDAY`, `WEDNESDAY`, `THURSDAY`, `FRIDAY`, `SATURDAY`, or `SUNDAY`. Specify in the UTC time zone. Returned in output as an integer, 1 to 7, where `1` equals Monday.
          "hour": 42, # Hour of day - 0 to 23. Specify in the UTC time zone.
          "kind": "A String", # This is always `sql#maintenanceWindow`.
          "updateTrack": "A String", # Maintenance timing settings: `canary`, `stable`, or `week5`. For more information, see [About maintenance on Cloud SQL instances](https://cloud.google.com/sql/docs/mysql/maintenance).
        },
        "passwordValidationPolicy": { # Database instance local user password validation policy # The local user password validation policy of the instance.
          "complexity": "A String", # The complexity of the password.
          "disallowCompromisedCredentials": True or False, # This field is deprecated and will be removed in a future version of the API.
          "disallowUsernameSubstring": True or False, # Disallow username as a part of the password.
          "enablePasswordPolicy": True or False, # Whether the password policy is enabled or not.
          "minLength": 42, # Minimum number of characters allowed.
          "passwordChangeInterval": "A String", # Minimum interval after which the password can be changed. This flag is only supported for PostgreSQL.
          "reuseInterval": 42, # Number of previous passwords that cannot be reused.
        },
        "pricingPlan": "A String", # The pricing plan for this instance. This can be either `PER_USE` or `PACKAGE`. Only `PER_USE` is supported for Second Generation instances.
        "replicationType": "A String", # The type of replication this instance uses. This can be either `ASYNCHRONOUS` or `SYNCHRONOUS`. (Deprecated) This property was only applicable to First Generation instances.
        "settingsVersion": "A String", # The version of instance settings. This is a required field for update method to make sure concurrent updates are handled properly. During update, use the most recent settingsVersion value for this instance and do not try to update this value.
        "sqlServerAuditConfig": { # SQL Server specific audit configuration. # SQL Server specific audit configuration.
          "bucket": "A String", # The name of the destination bucket (e.g., gs://mybucket).
          "kind": "A String", # This is always sql#sqlServerAuditConfig
          "retentionInterval": "A String", # How long to keep generated audit files.
          "uploadInterval": "A String", # How often to upload generated audit files.
        },
        "storageAutoResize": True or False, # Configuration to increase storage size automatically. The default value is true.
        "storageAutoResizeLimit": "A String", # The maximum size to which storage capacity can be automatically increased. The default value is 0, which specifies that there is no limit.
        "tier": "A String", # The tier (or machine type) for this instance, for example `db-custom-1-3840`. WARNING: Changing this restarts the instance.
        "timeZone": "A String", # Server timezone, relevant only for Cloud SQL for SQL Server.
        "userLabels": { # User-provided labels, represented as a dictionary where each label is a single key value pair.
          "a_key": "A String",
        },
      },
      "sqlNetworkArchitecture": "A String", # The SQL network architecture for the instance.
      "state": "A String", # The current serving state of the Cloud SQL instance.
      "suspensionReason": [ # If the instance state is SUSPENDED, the reason for the suspension.
        "A String",
      ],
      "switchTransactionLogsToCloudStorageEnabled": True or False, # Input only. Whether Cloud SQL is enabled to switch storing point-in-time recovery log files from a data disk to Cloud Storage.
      "upgradableDatabaseVersions": [ # Output only. All database versions that are available for upgrade.
        { # An available database version. It can be a major or a minor version.
          "displayName": "A String", # The database version's display name.
          "majorVersion": "A String", # The version's major version name.
          "name": "A String", # The database version name. For MySQL 8.0, this string provides the database major and minor version.
        },
      ],
      "writeEndpoint": "A String", # Output only. The dns name of the primary instance in a replication group.
    },
  ],
  "kind": "A String", # This is always `sql#instancesList`.
  "nextPageToken": "A String", # The continuation token, used to page through large result sets. Provide this value in a subsequent request to return the next page of results.
  "warnings": [ # List of warnings that occurred while handling the request.
    { # An Admin API warning message.
      "code": "A String", # Code to uniquely identify the warning type.
      "message": "A String", # The warning message.
      "region": "A String", # The region name for REGION_UNREACHABLE warning.
    },
  ],
}
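The `instances.list` response above is returned as a plain dict. A minimal sketch of post-processing such a response — the sample payload below is hypothetical (the instance names, states, and warning values are invented for illustration), but the field names follow the schema shown:

```python
def summarize_list_response(response):
    """Return (names of RUNNABLE instances, warning messages) from a list response."""
    runnable = [
        item["name"]
        for item in response.get("items", [])
        if item.get("state") == "RUNNABLE"
    ]
    warnings = [w.get("message", "") for w in response.get("warnings", [])]
    return runnable, warnings

# Hypothetical response, shaped like the schema documented above.
sample = {
    "kind": "sql#instancesList",
    "items": [
        {"name": "prod-db", "state": "RUNNABLE"},
        {"name": "old-db", "state": "SUSPENDED"},
    ],
    "warnings": [
        {"code": "REGION_UNREACHABLE",
         "message": "region us-west1 unreachable",
         "region": "us-west1"},
    ],
}

names, msgs = summarize_list_response(sample)
```

In practice the `response` dict comes from `service.instances().list(project=...).execute()`; always check the `warnings` list, since a `REGION_UNREACHABLE` warning means the result set may be incomplete.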
listServerCas(project, instance, x__xgafv=None)
Lists all of the trusted Certificate Authorities (CAs) for the specified instance. There can be up to three CAs listed: the CA that was used to sign the certificate that is currently in use, a CA that has been added but not yet used to sign a certificate, and a CA used to sign a certificate that has previously rotated out.

Args:
  project: string, Project ID of the project that contains the instance. (required)
  instance: string, Cloud SQL instance ID. This does not include the project ID. (required)
  x__xgafv: string, V1 error format.
    Allowed values
      1 - v1 error format
      2 - v2 error format

Returns:
  An object of the form:

    { # Instances ListServerCas response.
  "activeVersion": "A String",
  "certs": [ # List of server CA certificates for the instance.
    { # SslCerts Resource
      "cert": "A String", # PEM representation.
      "certSerialNumber": "A String", # Serial number, as extracted from the certificate.
      "commonName": "A String", # User supplied name. Constrained to [a-zA-Z.-_ ]+.
      "createTime": "A String", # The time when the certificate was created in [RFC 3339](https://tools.ietf.org/html/rfc3339) format, for example `2012-11-15T16:19:00.094Z`.
      "expirationTime": "A String", # The time when the certificate expires in [RFC 3339](https://tools.ietf.org/html/rfc3339) format, for example `2012-11-15T16:19:00.094Z`.
      "instance": "A String", # Name of the database instance.
      "kind": "A String", # This is always `sql#sslCert`.
      "selfLink": "A String", # The URI of this resource.
      "sha1Fingerprint": "A String", # Sha1 Fingerprint.
    },
  ],
  "kind": "A String", # This is always `sql#instancesListServerCas`.
}
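The schema above leaves `activeVersion` undescribed. The sketch below assumes it carries the `sha1Fingerprint` of the CA certificate currently in use — an assumption, not something the reference confirms — and matches it against `certs` (the sample payload is invented for illustration):

```python
def find_active_ca(response):
    """Return the cert whose sha1Fingerprint equals activeVersion, or None.

    Assumption: activeVersion holds the sha1Fingerprint of the active CA.
    """
    active = response.get("activeVersion")
    for cert in response.get("certs", []):
        if cert.get("sha1Fingerprint") == active:
            return cert
    return None

# Hypothetical response, shaped like the schema documented above.
sample = {
    "kind": "sql#instancesListServerCas",
    "activeVersion": "abc123",
    "certs": [
        {"commonName": "old-ca", "sha1Fingerprint": "def456"},
        {"commonName": "current-ca", "sha1Fingerprint": "abc123"},
    ],
}

active_ca = find_active_ca(sample)
```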
list_next()
Retrieves the next page of results.

Args:
  previous_request: The request for the previous page. (required)
  previous_response: The response from the request for the previous page. (required)

Returns:
  A request object that you can call 'execute()' on to request the next
  page. Returns None if there are no more items in the collection.

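The list()/list_next() pair follows the standard googleapiclient paging protocol: execute a request, then feed the request/response pair back into list_next() until it returns None. A sketch of that loop, exercised here with hypothetical fake request objects instead of a live service:

```python
def iterate_items(request, list_next):
    """Yield every item across all pages of a paged list method."""
    while request is not None:
        response = request.execute()
        for item in response.get("items", []):
            yield item
        request = list_next(request, response)

class FakeRequest:
    """Stand-in for a googleapiclient HttpRequest (hypothetical, for the demo)."""
    def __init__(self, page):
        self._page = page
    def execute(self):
        return self._page

PAGES = [
    {"items": ["a", "b"], "nextPageToken": "tok-1"},
    {"items": ["c"]},  # last page: no nextPageToken
]

def fake_list_next(previous_request, previous_response):
    # Mirrors list_next(): returns the next request, or None when done.
    if "nextPageToken" in previous_response:
        return FakeRequest(PAGES[1])
    return None

all_items = list(iterate_items(FakeRequest(PAGES[0]), fake_list_next))
```

With a real client you would pass the actual method pair, for example `iterate_items(service.instances().list(project="my-project"), service.instances().list_next)` (the project name here is a placeholder).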
patch(project, instance, body=None, x__xgafv=None)
Partially updates settings of a Cloud SQL instance by merging the request with the current configuration. This method supports patch semantics.

Args:
  project: string, Project ID of the project that contains the instance. (required)
  instance: string, Cloud SQL instance ID. This does not include the project ID. (required)
  body: object, The request body.
    The object takes the form of:

{ # A Cloud SQL instance resource.
  "availableMaintenanceVersions": [ # Output only. List all maintenance versions applicable on the instance
    "A String",
  ],
  "backendType": "A String", # The backend type. `SECOND_GEN`: Cloud SQL database instance. `EXTERNAL`: A database server that is not managed by Google. This property is read-only; use the `tier` property in the `settings` object to determine the database type.
  "connectionName": "A String", # Connection name of the Cloud SQL instance used in connection strings.
  "createTime": "A String", # Output only. The time when the instance was created in [RFC 3339](https://tools.ietf.org/html/rfc3339) format, for example `2012-11-15T16:19:00.094Z`.
  "currentDiskSize": "A String", # The current disk usage of the instance in bytes. This property has been deprecated. Use the "cloudsql.googleapis.com/database/disk/bytes_used" metric in Cloud Monitoring API instead. Please see [this announcement](https://groups.google.com/d/msg/google-cloud-sql-announce/I_7-F9EBhT0/BtvFtdFeAgAJ) for details.
  "databaseInstalledVersion": "A String", # Output only. Stores the current database version running on the instance including minor version such as `MYSQL_8_0_18`.
  "databaseVersion": "A String", # The database engine type and version. The `databaseVersion` field cannot be changed after instance creation.
  "diskEncryptionConfiguration": { # Disk encryption configuration for an instance. # Disk encryption configuration specific to an instance.
    "kind": "A String", # This is always `sql#diskEncryptionConfiguration`.
    "kmsKeyName": "A String", # Resource name of KMS key for disk encryption
  },
  "diskEncryptionStatus": { # Disk encryption status for an instance. # Disk encryption status specific to an instance.
    "kind": "A String", # This is always `sql#diskEncryptionStatus`.
    "kmsKeyVersionName": "A String", # KMS key version used to encrypt the Cloud SQL instance resource
  },
  "dnsName": "A String", # Output only. The dns name of the instance.
  "etag": "A String", # This field is deprecated and will be removed from a future version of the API. Use the `settings.settingsVersion` field instead.
  "failoverReplica": { # The name and status of the failover replica.
    "available": True or False, # The availability status of the failover replica. A false status indicates that the failover replica is out of sync. The primary instance can only failover to the failover replica when the status is true.
    "name": "A String", # The name of the failover replica. If specified at instance creation, a failover replica is created for the instance. The name doesn't include the project ID.
  },
  "gceZone": "A String", # The Compute Engine zone that the instance is currently serving from. This value could be different from the zone that was specified when the instance was created if the instance has failed over to its secondary zone. WARNING: Changing this might restart the instance.
  "geminiConfig": { # Gemini instance configuration. # Gemini instance configuration.
    "activeQueryEnabled": True or False, # Output only. Whether the active query is enabled.
    "entitled": True or False, # Output only. Whether Gemini is enabled.
    "flagRecommenderEnabled": True or False, # Output only. Whether the flag recommender is enabled.
    "googleVacuumMgmtEnabled": True or False, # Output only. Whether the vacuum management is enabled.
    "indexAdvisorEnabled": True or False, # Output only. Whether the index advisor is enabled.
    "oomSessionCancelEnabled": True or False, # Output only. Whether canceling the out-of-memory (OOM) session is enabled.
  },
  "instanceType": "A String", # The instance type.
  "ipAddresses": [ # The assigned IP addresses for the instance.
    { # Database instance IP mapping
      "ipAddress": "A String", # The IP address assigned.
      "timeToRetire": "A String", # The due time for this IP to be retired in [RFC 3339](https://tools.ietf.org/html/rfc3339) format, for example `2012-11-15T16:19:00.094Z`. This field is only available when the IP is scheduled to be retired.
      "type": "A String", # The type of this IP address. A `PRIMARY` address is a public address that can accept incoming connections. A `PRIVATE` address is a private address that can accept incoming connections. An `OUTGOING` address is the source address of connections originating from the instance, if supported.
    },
  ],
  "ipv6Address": "A String", # The IPv6 address assigned to the instance. (Deprecated) This property was applicable only to First Generation instances.
  "kind": "A String", # This is always `sql#instance`.
  "maintenanceVersion": "A String", # The current software version on the instance.
  "masterInstanceName": "A String", # The name of the instance which will act as primary in the replication setup.
  "maxDiskSize": "A String", # The maximum disk size of the instance in bytes.
  "name": "A String", # Name of the Cloud SQL instance. This does not include the project ID.
  "onPremisesConfiguration": { # On-premises instance configuration. # Configuration specific to on-premises instances.
    "caCertificate": "A String", # PEM representation of the trusted CA's x509 certificate.
    "clientCertificate": "A String", # PEM representation of the replica's x509 certificate.
    "clientKey": "A String", # PEM representation of the replica's private key. The corresponsing public key is encoded in the client's certificate.
    "dumpFilePath": "A String", # The dump file to create the Cloud SQL replica.
    "hostPort": "A String", # The host and port of the on-premises instance in host:port format
    "kind": "A String", # This is always `sql#onPremisesConfiguration`.
    "password": "A String", # The password for connecting to on-premises instance.
    "selectedObjects": [ # Optional. A list of objects that the user selects for replication from an external source instance.
      { # A list of objects that the user selects for replication from an external source instance.
        "database": "A String", # Required. The name of the database to migrate.
      },
    ],
    "sourceInstance": { # Reference to another Cloud SQL instance. # The reference to Cloud SQL instance if the source is Cloud SQL.
      "name": "A String", # The name of the Cloud SQL instance being referenced. This does not include the project ID.
      "project": "A String", # The project ID of the Cloud SQL instance being referenced. The default is the same project ID as the instance references it.
      "region": "A String", # The region of the Cloud SQL instance being referenced.
    },
    "sslOption": "A String", # Optional. SslOption for replica connection to the on-premises source.
    "username": "A String", # The username for connecting to on-premises instance.
  },
  "outOfDiskReport": { # This message wraps up the information written by out-of-disk detection job. # This field represents the report generated by the proactive database wellness job for OutOfDisk issues. * Writers: * the proactive database wellness job for OOD. * Readers: * the proactive database wellness job
    "sqlMinRecommendedIncreaseSizeGb": 42, # The minimum recommended increase size in GigaBytes This field is consumed by the frontend * Writers: * the proactive database wellness job for OOD. * Readers:
    "sqlOutOfDiskState": "A String", # This field represents the state generated by the proactive database wellness job for OutOfDisk issues. * Writers: * the proactive database wellness job for OOD. * Readers: * the proactive database wellness job
  },
  "primaryDnsName": "A String", # Output only. DEPRECATED: please use write_endpoint instead.
  "project": "A String", # The project ID of the project containing the Cloud SQL instance. The Google apps domain is prefixed if applicable.
  "pscServiceAttachmentLink": "A String", # Output only. The link to service attachment of PSC instance.
  "region": "A String", # The geographical region of the Cloud SQL instance. It can be one of the [regions](https://cloud.google.com/sql/docs/mysql/locations#location-r) where Cloud SQL operates: For example, `asia-east1`, `europe-west1`, and `us-central1`. The default value is `us-central1`.
  "replicaConfiguration": { # Read-replica configuration for connecting to the primary instance. # Configuration specific to failover replicas and read replicas.
    "cascadableReplica": True or False, # Optional. Specifies if a SQL Server replica is a cascadable replica. A cascadable replica is a SQL Server cross region replica that supports replica(s) under it.
    "failoverTarget": True or False, # Specifies if the replica is the failover target. If the field is set to `true` the replica will be designated as a failover replica. In case the primary instance fails, the replica instance will be promoted as the new primary instance. Only one replica can be specified as failover target, and the replica has to be in different zone with the primary instance.
    "kind": "A String", # This is always `sql#replicaConfiguration`.
    "mysqlReplicaConfiguration": { # Read-replica configuration specific to MySQL databases. # MySQL specific configuration when replicating from a MySQL on-premises primary instance. Replication configuration information such as the username, password, certificates, and keys are not stored in the instance metadata. The configuration information is used only to set up the replication connection and is stored by MySQL in a file named `master.info` in the data directory.
      "caCertificate": "A String", # PEM representation of the trusted CA's x509 certificate.
      "clientCertificate": "A String", # PEM representation of the replica's x509 certificate.
      "clientKey": "A String", # PEM representation of the replica's private key. The corresponsing public key is encoded in the client's certificate.
      "connectRetryInterval": 42, # Seconds to wait between connect retries. MySQL's default is 60 seconds.
      "dumpFilePath": "A String", # Path to a SQL dump file in Google Cloud Storage from which the replica instance is to be created. The URI is in the form gs://bucketName/fileName. Compressed gzip files (.gz) are also supported. Dumps have the binlog co-ordinates from which replication begins. This can be accomplished by setting --master-data to 1 when using mysqldump.
      "kind": "A String", # This is always `sql#mysqlReplicaConfiguration`.
      "masterHeartbeatPeriod": "A String", # Interval in milliseconds between replication heartbeats.
      "password": "A String", # The password for the replication connection.
      "sslCipher": "A String", # A list of permissible ciphers to use for SSL encryption.
      "username": "A String", # The username for the replication connection.
      "verifyServerCertificate": True or False, # Whether or not to check the primary instance's Common Name value in the certificate that it sends during the SSL handshake.
    },
  },
  "replicaNames": [ # The replicas of the instance.
    "A String",
  ],
  "replicationCluster": { # A primary instance and disaster recovery (DR) replica pair. A DR replica is a cross-region replica that you designate for failover in the event that the primary instance has regional failure. Only applicable to MySQL. # A primary instance and disaster recovery (DR) replica pair. A DR replica is a cross-region replica that you designate for failover in the event that the primary instance experiences regional failure. Only applicable to MySQL.
    "drReplica": True or False, # Output only. Read-only field that indicates whether the replica is a DR replica. This field is not set if the instance is a primary instance.
    "failoverDrReplicaName": "A String", # Optional. If the instance is a primary instance, then this field identifies the disaster recovery (DR) replica. A DR replica is an optional configuration for Enterprise Plus edition instances. If the instance is a read replica, then the field is not set. Set this field to a replica name to designate a DR replica for a primary instance. Remove the replica name to remove the DR replica designation.
    "psaWriteEndpoint": "A String", # Output only. If set, it indicates this instance has a private service access (PSA) dns endpoint that is pointing to the primary instance of the cluster. If this instance is the primary, the dns should be pointing to this instance. After Switchover or Replica failover, this DNS endpoint points to the promoted instance. This is a read-only field, returned to the user as information. This field can exist even if a standalone instance does not yet have a replica, or had a DR replica that was deleted.
  },
  "rootPassword": "A String", # Initial root password. Use only on creation. You must set root passwords before you can connect to PostgreSQL instances.
  "satisfiesPzi": True or False, # Output only. This status indicates whether the instance satisfies PZI. The status is reserved for future use.
  "satisfiesPzs": True or False, # This status indicates whether the instance satisfies PZS. The status is reserved for future use.
  "scheduledMaintenance": { # Any scheduled maintenance for this instance. # The start time of any upcoming scheduled maintenance for this instance.
    "canDefer": True or False,
    "canReschedule": True or False, # If the scheduled maintenance can be rescheduled.
    "scheduleDeadlineTime": "A String", # Maintenance cannot be rescheduled to start beyond this deadline.
    "startTime": "A String", # The start time of any upcoming scheduled maintenance for this instance.
  },
  "secondaryGceZone": "A String", # The Compute Engine zone that the failover instance is currently serving from for a regional instance. This value could be different from the zone that was specified when the instance was created if the instance has failed over to its secondary/failover zone.
  "selfLink": "A String", # The URI of this resource.
  "serverCaCert": { # SslCerts Resource # SSL configuration.
    "cert": "A String", # PEM representation.
    "certSerialNumber": "A String", # Serial number, as extracted from the certificate.
    "commonName": "A String", # User supplied name. Constrained to [a-zA-Z.-_ ]+.
    "createTime": "A String", # The time when the certificate was created in [RFC 3339](https://tools.ietf.org/html/rfc3339) format, for example `2012-11-15T16:19:00.094Z`.
    "expirationTime": "A String", # The time when the certificate expires in [RFC 3339](https://tools.ietf.org/html/rfc3339) format, for example `2012-11-15T16:19:00.094Z`.
    "instance": "A String", # Name of the database instance.
    "kind": "A String", # This is always `sql#sslCert`.
    "selfLink": "A String", # The URI of this resource.
    "sha1Fingerprint": "A String", # Sha1 Fingerprint.
  },
  "serviceAccountEmailAddress": "A String", # The service account email address assigned to the instance. \This property is read-only.
  "settings": { # Database instance settings. # The user settings.
    "activationPolicy": "A String", # The activation policy specifies when the instance is activated; it is applicable only when the instance state is RUNNABLE. Valid values: * `ALWAYS`: The instance is on, and remains so even in the absence of connection requests. * `NEVER`: The instance is off; it is not activated, even if a connection request arrives.
    "activeDirectoryConfig": { # Active Directory configuration, relevant only for Cloud SQL for SQL Server. # Active Directory configuration, relevant only for Cloud SQL for SQL Server.
      "domain": "A String", # The name of the domain (e.g., mydomain.com).
      "kind": "A String", # This is always sql#activeDirectoryConfig.
    },
    "advancedMachineFeatures": { # Specifies options for controlling advanced machine features. # Specifies advanced machine configuration for the instances relevant only for SQL Server.
      "threadsPerCore": 42, # The number of threads per physical core.
    },
    "authorizedGaeApplications": [ # The App Engine app IDs that can access this instance. (Deprecated) Applied to First Generation instances only.
      "A String",
    ],
    "availabilityType": "A String", # Availability type. Potential values: * `ZONAL`: The instance serves data from only one zone. Outages in that zone affect data accessibility. * `REGIONAL`: The instance can serve data from more than one zone in a region (it is highly available)./ For more information, see [Overview of the High Availability Configuration](https://cloud.google.com/sql/docs/mysql/high-availability).
    "backupConfiguration": { # Database instance backup configuration. # The daily backup configuration for the instance.
      "backupRetentionSettings": { # We currently only support backup retention by specifying the number of backups we will retain. # Backup retention settings.
        "retainedBackups": 42, # Depending on the value of retention_unit, this is used to determine if a backup needs to be deleted. If retention_unit is 'COUNT', we will retain this many backups.
        "retentionUnit": "A String", # The unit that 'retained_backups' represents.
      },
      "binaryLogEnabled": True or False, # (MySQL only) Whether binary log is enabled. If backup configuration is disabled, binarylog must be disabled as well.
      "enabled": True or False, # Whether this configuration is enabled.
      "kind": "A String", # This is always `sql#backupConfiguration`.
      "location": "A String", # Location of the backup
      "pointInTimeRecoveryEnabled": True or False, # Whether point in time recovery is enabled.
      "replicationLogArchivingEnabled": True or False, # Reserved for future use.
      "startTime": "A String", # Start time for the daily backup configuration in UTC timezone in the 24 hour format - `HH:MM`.
      "transactionLogRetentionDays": 42, # The number of days of transaction logs we retain for point in time restore, from 1-7.
      "transactionalLogStorageState": "A String", # Output only. This value contains the storage location of transactional logs for the database for point-in-time recovery.
    },
    "collation": "A String", # The name of server Instance collation.
    "connectorEnforcement": "A String", # Specifies if connections must use Cloud SQL connectors. Option values include the following: `NOT_REQUIRED` (Cloud SQL instances can be connected without Cloud SQL Connectors) and `REQUIRED` (Only allow connections that use Cloud SQL Connectors) Note that using REQUIRED disables all existing authorized networks. If this field is not specified when creating a new instance, NOT_REQUIRED is used. If this field is not specified when patching or updating an existing instance, it is left unchanged in the instance.
    "crashSafeReplicationEnabled": True or False, # Configuration specific to read replica instances. Indicates whether database flags for crash-safe replication are enabled. This property was only applicable to First Generation instances.
    "dataCacheConfig": { # Data cache configurations. # Configuration for data cache.
      "dataCacheEnabled": True or False, # Whether data cache is enabled for the instance.
    },
    "dataDiskSizeGb": "A String", # The size of data disk, in GB. The data disk size minimum is 10GB.
    "dataDiskType": "A String", # The type of data disk: `PD_SSD` (default) or `PD_HDD`. Not used for First Generation instances.
    "databaseFlags": [ # The database flags passed to the instance at startup.
      { # Database flags for Cloud SQL instances.
        "name": "A String", # The name of the flag. These flags are passed at instance startup, so include both server options and system variables. Flags are specified with underscores, not hyphens. For more information, see [Configuring Database Flags](https://cloud.google.com/sql/docs/mysql/flags) in the Cloud SQL documentation.
        "value": "A String", # The value of the flag. Boolean flags are set to `on` for true and `off` for false. This field must be omitted if the flag doesn't take a value.
      },
    ],
    "databaseReplicationEnabled": True or False, # Configuration specific to read replica instances. Indicates whether replication is enabled or not. WARNING: Changing this restarts the instance.
    "deletionProtectionEnabled": True or False, # Configuration to protect against accidental instance deletion.
    "denyMaintenancePeriods": [ # Deny maintenance periods
      { # Deny Maintenance Periods. This specifies a date range during which all CSA rollouts are denied.
        "endDate": "A String", # "deny maintenance period" end date. If the year of the end date is empty, the year of the start date also must be empty. In this case, it means the deny maintenance period recurs every year. The date is in the format yyyy-mm-dd (for example, 2020-11-01) or mm-dd (for example, 11-01).
        "startDate": "A String", # "deny maintenance period" start date. If the year of the start date is empty, the year of the end date also must be empty. In this case, it means the deny maintenance period recurs every year. The date is in the format yyyy-mm-dd (for example, 2020-11-01) or mm-dd (for example, 11-01).
        "time": "A String", # Time in UTC when the "deny maintenance period" starts on start_date and ends on end_date. The time is in the format HH:mm:SS, for example, 00:00:00.
      },
    ],
    "edition": "A String", # Optional. The edition of the instance.
    "enableDataplexIntegration": True or False, # Optional. By default, Cloud SQL instances have schema extraction disabled for Dataplex. When this parameter is set to true, schema extraction for Dataplex on Cloud SQL instances is activated.
    "enableGoogleMlIntegration": True or False, # Optional. When this parameter is set to true, Cloud SQL instances can connect to Vertex AI to pass requests for real-time predictions and insights to the AI. The default value is false. This applies only to Cloud SQL for PostgreSQL instances.
    "insightsConfig": { # Insights configuration. This specifies when Cloud SQL Insights feature is enabled and optional configuration. # Insights configuration, for now relevant only for Postgres.
      "queryInsightsEnabled": True or False, # Whether Query Insights feature is enabled.
      "queryPlansPerMinute": 42, # Number of query execution plans captured by Insights per minute for all queries combined. Default is 5.
      "queryStringLength": 42, # Maximum query length stored in bytes. Default value: 1024 bytes. Range: 256-4500 bytes. Query length more than this field value will be truncated to this value. When unset, query length will be the default value. Changing query length will restart the database.
      "recordApplicationTags": True or False, # Whether Query Insights will record application tags from query when enabled.
      "recordClientAddress": True or False, # Whether Query Insights will record client address when enabled.
    },
    "ipConfiguration": { # IP Management configuration. # The settings for IP Management. This allows to enable or disable the instance IP and manage which external networks can connect to the instance. The IPv4 address cannot be disabled for Second Generation instances.
      "allocatedIpRange": "A String", # The name of the allocated ip range for the private ip Cloud SQL instance. For example: "google-managed-services-default". If set, the instance ip will be created in the allocated range. The range name must comply with [RFC 1035](https://tools.ietf.org/html/rfc1035). Specifically, the name must be 1-63 characters long and match the regular expression `[a-z]([-a-z0-9]*[a-z0-9])?.`
      "authorizedNetworks": [ # The list of external networks that are allowed to connect to the instance using the IP. In 'CIDR' notation, also known as 'slash' notation (for example: `157.197.200.0/24`).
        { # An entry for an Access Control list.
          "expirationTime": "A String", # The time when this access control entry expires in [RFC 3339](https://tools.ietf.org/html/rfc3339) format, for example `2012-11-15T16:19:00.094Z`.
          "kind": "A String", # This is always `sql#aclEntry`.
          "name": "A String", # Optional. A label to identify this entry.
          "value": "A String", # The allowlisted value for the access control list.
        },
      ],
      "enablePrivatePathForGoogleCloudServices": True or False, # Controls connectivity to private IP instances from Google services, such as BigQuery.
      "ipv4Enabled": True or False, # Whether the instance is assigned a public IP address or not.
      "privateNetwork": "A String", # The resource link for the VPC network from which the Cloud SQL instance is accessible for private IP. For example, `/projects/myProject/global/networks/default`. This setting can be updated, but it cannot be removed after it is set.
      "pscConfig": { # PSC settings for a Cloud SQL instance. # PSC settings for this instance.
        "allowedConsumerProjects": [ # Optional. The list of consumer projects that are allow-listed for PSC connections to this instance. This instance can be connected to with PSC from any network in these projects. Each consumer project in this list may be represented by a project number (numeric) or by a project id (alphanumeric).
          "A String",
        ],
        "pscAutoConnections": [ # Optional. The list of settings for requested Private Service Connect consumer endpoints that can be used to connect to this Cloud SQL instance.
          { # Settings for an automatically-setup Private Service Connect consumer endpoint that is used to connect to a Cloud SQL instance.
            "consumerNetwork": "A String", # The consumer network of this consumer endpoint. This must be a resource path that includes both the host project and the network name. For example, `projects/project1/global/networks/network1`. The consumer host project of this network might be different from the consumer service project.
            "consumerNetworkStatus": "A String", # The connection policy status of the consumer network.
            "consumerProject": "A String", # This is the project ID of consumer service project of this consumer endpoint. Optional. This is only applicable if consumer_network is a shared vpc network.
            "ipAddress": "A String", # The IP address of the consumer endpoint.
            "status": "A String", # The connection status of the consumer endpoint.
          },
        ],
        "pscEnabled": True or False, # Whether PSC connectivity is enabled for this instance.
      },
      "requireSsl": True or False, # Use `ssl_mode` instead. Whether SSL/TLS connections over IP are enforced. If set to false, then allow both non-SSL/non-TLS and SSL/TLS connections. For SSL/TLS connections, the client certificate won't be verified. If set to true, then only allow connections encrypted with SSL/TLS and with valid client certificates. If you want to enforce SSL/TLS without enforcing the requirement for valid client certificates, then use the `ssl_mode` flag instead of the legacy `require_ssl` flag.
      "serverCaMode": "A String", # Specify what type of CA is used for the server certificate.
      "sslMode": "A String", # Specify how SSL/TLS is enforced in database connections. If you must use the `require_ssl` flag for backward compatibility, then only the following value pairs are valid: For PostgreSQL and MySQL: * `ssl_mode=ALLOW_UNENCRYPTED_AND_ENCRYPTED` and `require_ssl=false` * `ssl_mode=ENCRYPTED_ONLY` and `require_ssl=false` * `ssl_mode=TRUSTED_CLIENT_CERTIFICATE_REQUIRED` and `require_ssl=true` For SQL Server: * `ssl_mode=ALLOW_UNENCRYPTED_AND_ENCRYPTED` and `require_ssl=false` * `ssl_mode=ENCRYPTED_ONLY` and `require_ssl=true` The value of `ssl_mode` has priority over the value of `require_ssl`. For example, for the pair `ssl_mode=ENCRYPTED_ONLY` and `require_ssl=false`, `ssl_mode=ENCRYPTED_ONLY` means accept only SSL connections, while `require_ssl=false` means accept both non-SSL and SSL connections. In this case, MySQL and PostgreSQL databases respect `ssl_mode` and accepts only SSL connections.
    },
    "kind": "A String", # This is always `sql#settings`.
    "locationPreference": { # Preferred location. This specifies where a Cloud SQL instance is located. Note that if the preferred location is not available, the instance will be located as close as possible within the region. Only one location may be specified. # The location preference settings. This allows the instance to be located as near as possible to either an App Engine app or Compute Engine zone for better performance. App Engine co-location was only applicable to First Generation instances.
      "followGaeApplication": "A String", # The App Engine application to follow, it must be in the same region as the Cloud SQL instance. WARNING: Changing this might restart the instance.
      "kind": "A String", # This is always `sql#locationPreference`.
      "secondaryZone": "A String", # The preferred Compute Engine zone for the secondary/failover (for example: us-central1-a, us-central1-b, etc.). To disable this field, set it to 'no_secondary_zone'.
      "zone": "A String", # The preferred Compute Engine zone (for example: us-central1-a, us-central1-b, etc.). WARNING: Changing this might restart the instance.
    },
    "maintenanceWindow": { # Maintenance window. This specifies when a Cloud SQL instance is restarted for system maintenance purposes. # The maintenance window for this instance. This specifies when the instance can be restarted for maintenance purposes.
      "day": 42, # Day of week - `MONDAY`, `TUESDAY`, `WEDNESDAY`, `THURSDAY`, `FRIDAY`, `SATURDAY`, or `SUNDAY`. Specify in the UTC time zone. Returned in output as an integer, 1 to 7, where `1` equals Monday.
      "hour": 42, # Hour of day - 0 to 23. Specify in the UTC time zone.
      "kind": "A String", # This is always `sql#maintenanceWindow`.
      "updateTrack": "A String", # Maintenance timing settings: `canary`, `stable`, or `week5`. For more information, see [About maintenance on Cloud SQL instances](https://cloud.google.com/sql/docs/mysql/maintenance).
    },
    "passwordValidationPolicy": { # Database instance local user password validation policy # The local user password validation policy of the instance.
      "complexity": "A String", # The complexity of the password.
      "disallowCompromisedCredentials": True or False, # This field is deprecated and will be removed in a future version of the API.
      "disallowUsernameSubstring": True or False, # Disallow username as a part of the password.
      "enablePasswordPolicy": True or False, # Whether the password policy is enabled or not.
      "minLength": 42, # Minimum number of characters allowed.
      "passwordChangeInterval": "A String", # Minimum interval after which the password can be changed. This flag is only supported for PostgreSQL.
      "reuseInterval": 42, # Number of previous passwords that cannot be reused.
    },
    "pricingPlan": "A String", # The pricing plan for this instance. This can be either `PER_USE` or `PACKAGE`. Only `PER_USE` is supported for Second Generation instances.
    "replicationType": "A String", # The type of replication this instance uses. This can be either `ASYNCHRONOUS` or `SYNCHRONOUS`. (Deprecated) This property was only applicable to First Generation instances.
    "settingsVersion": "A String", # The version of instance settings. This is a required field for update method to make sure concurrent updates are handled properly. During update, use the most recent settingsVersion value for this instance and do not try to update this value.
    "sqlServerAuditConfig": { # SQL Server specific audit configuration. # SQL Server specific audit configuration.
      "bucket": "A String", # The name of the destination bucket (e.g., gs://mybucket).
      "kind": "A String", # This is always sql#sqlServerAuditConfig
      "retentionInterval": "A String", # How long to keep generated audit files.
      "uploadInterval": "A String", # How often to upload generated audit files.
    },
    "storageAutoResize": True or False, # Configuration to increase storage size automatically. The default value is true.
    "storageAutoResizeLimit": "A String", # The maximum size to which storage capacity can be automatically increased. The default value is 0, which specifies that there is no limit.
    "tier": "A String", # The tier (or machine type) for this instance, for example `db-custom-1-3840`. WARNING: Changing this restarts the instance.
    "timeZone": "A String", # Server timezone, relevant only for Cloud SQL for SQL Server.
    "userLabels": { # User-provided labels, represented as a dictionary where each label is a single key value pair.
      "a_key": "A String",
    },
  },
  "sqlNetworkArchitecture": "A String", # The SQL network architecture for the instance.
  "state": "A String", # The current serving state of the Cloud SQL instance.
  "suspensionReason": [ # If the instance state is SUSPENDED, the reason for the suspension.
    "A String",
  ],
  "switchTransactionLogsToCloudStorageEnabled": True or False, # Input only. Whether Cloud SQL is enabled to switch storing point-in-time recovery log files from a data disk to Cloud Storage.
  "upgradableDatabaseVersions": [ # Output only. All database versions that are available for upgrade.
    { # An available database version. It can be a major or a minor version.
      "displayName": "A String", # The database version's display name.
      "majorVersion": "A String", # The version's major version name.
      "name": "A String", # The database version name. For MySQL 8.0, this string provides the database major and minor version.
    },
  ],
  "writeEndpoint": "A String", # Output only. The dns name of the primary instance in a replication group.
}

  x__xgafv: string, V1 error format.
    Allowed values
      1 - v1 error format
      2 - v2 error format

Returns:
  An object of the form:

    { # An Operation resource. For successful operations that return an Operation resource, only the fields relevant to the operation are populated in the resource.
  "acquireSsrsLeaseContext": { # Acquire SSRS lease context. # The context for acquire SSRS lease operation, if applicable.
    "duration": "A String", # Lease duration needed for the SSRS setup.
    "reportDatabase": "A String", # The report database to be used for the SSRS setup.
    "serviceLogin": "A String", # The username to be used as the service login to connect to the report database for SSRS setup.
    "setupLogin": "A String", # The username to be used as the setup login to connect to the database server for SSRS setup.
  },
  "apiWarning": { # An Admin API warning message. # An Admin API warning message.
    "code": "A String", # Code to uniquely identify the warning type.
    "message": "A String", # The warning message.
    "region": "A String", # The region name for REGION_UNREACHABLE warning.
  },
  "backupContext": { # Backup context. # The context for backup operation, if applicable.
    "backupId": "A String", # The identifier of the backup.
    "kind": "A String", # This is always `sql#backupContext`.
  },
  "endTime": "A String", # The time this operation finished in UTC timezone in [RFC 3339](https://tools.ietf.org/html/rfc3339) format, for example `2012-11-15T16:19:00.094Z`.
  "error": { # Database instance operation errors list wrapper. # If errors occurred during processing of this operation, this field will be populated.
    "errors": [ # The list of errors encountered while processing this operation.
      { # Database instance operation error.
        "code": "A String", # Identifies the specific error that occurred.
        "kind": "A String", # This is always `sql#operationError`.
        "message": "A String", # Additional information about the error encountered.
      },
    ],
    "kind": "A String", # This is always `sql#operationErrors`.
  },
  "exportContext": { # Database instance export context. # The context for export operation, if applicable.
    "bakExportOptions": { # Options for exporting BAK files (SQL Server-only)
      "bakType": "A String", # Type of this bak file will be export, FULL or DIFF, SQL Server only
      "copyOnly": True or False, # Deprecated: copy_only is deprecated. Use differential_base instead
      "differentialBase": True or False, # Whether or not the backup can be used as a differential base copy_only backup can not be served as differential base
      "exportLogEndTime": "A String", # Optional. The end timestamp when transaction log will be included in the export operation. [RFC 3339](https://tools.ietf.org/html/rfc3339) format (for example, `2023-10-01T16:19:00.094`) in UTC. When omitted, all available logs until current time will be included. Only applied to Cloud SQL for SQL Server.
      "exportLogStartTime": "A String", # Optional. The begin timestamp when transaction log will be included in the export operation. [RFC 3339](https://tools.ietf.org/html/rfc3339) format (for example, `2023-10-01T16:19:00.094`) in UTC. When omitted, all available logs from the beginning of retention period will be included. Only applied to Cloud SQL for SQL Server.
      "stripeCount": 42, # Option for specifying how many stripes to use for the export. If blank, and the value of the striped field is true, the number of stripes is automatically chosen.
      "striped": True or False, # Whether or not the export should be striped.
    },
    "csvExportOptions": { # Options for exporting data as CSV. `MySQL` and `PostgreSQL` instances only.
      "escapeCharacter": "A String", # Specifies the character that should appear before a data character that needs to be escaped.
      "fieldsTerminatedBy": "A String", # Specifies the character that separates columns within each row (line) of the file.
      "linesTerminatedBy": "A String", # This is used to separate lines. If a line does not contain all fields, the rest of the columns are set to their default values.
      "quoteCharacter": "A String", # Specifies the quoting character to be used when a data value is quoted.
      "selectQuery": "A String", # The select query used to extract the data.
    },
    "databases": [ # Databases to be exported. `MySQL instances:` If `fileType` is `SQL` and no database is specified, all databases are exported, except for the `mysql` system database. If `fileType` is `CSV`, you can specify one database, either by using this property or by using the `csvExportOptions.selectQuery` property, which takes precedence over this property. `PostgreSQL instances:` You must specify one database to be exported. If `fileType` is `CSV`, this database must match the one specified in the `csvExportOptions.selectQuery` property. `SQL Server instances:` You must specify one database to be exported, and the `fileType` must be `BAK`.
      "A String",
    ],
    "fileType": "A String", # The file type for the specified uri.
    "kind": "A String", # This is always `sql#exportContext`.
    "offload": True or False, # Option for export offload.
    "sqlExportOptions": { # Options for exporting data as SQL statements.
      "mysqlExportOptions": { # Options for exporting from MySQL.
        "masterData": 42, # Option to include SQL statement required to set up replication. If set to `1`, the dump file includes a CHANGE MASTER TO statement with the binary log coordinates, and --set-gtid-purged is set to ON. If set to `2`, the CHANGE MASTER TO statement is written as a SQL comment and has no effect. If set to any value other than `1`, --set-gtid-purged is set to OFF.
      },
      "parallel": True or False, # Optional. Whether or not the export should be parallel.
      "postgresExportOptions": { # Options for exporting from a Cloud SQL for PostgreSQL instance.
        "clean": True or False, # Optional. Use this option to include DROP SQL statements. These statements are used to delete database objects before running the import operation.
        "ifExists": True or False, # Optional. Option to include an IF EXISTS SQL statement with each DROP statement produced by clean.
      },
      "schemaOnly": True or False, # Export only schemas.
      "tables": [ # Tables to export, or that were exported, from the specified database. If you specify tables, specify one and only one database. For PostgreSQL instances, you can specify only one table.
        "A String",
      ],
      "threads": 42, # Optional. The number of threads to use for parallel export.
    },
    "uri": "A String", # The path to the file in Google Cloud Storage where the export will be stored. The URI is in the form `gs://bucketName/fileName`. If the file already exists, the request succeeds, but the operation fails. If `fileType` is `SQL` and the filename ends with .gz, the contents are compressed.
  },
  "importContext": { # Database instance import context. # The context for import operation, if applicable.
    "bakImportOptions": { # Import parameters specific to SQL Server .BAK files
      "bakType": "A String", # Type of the bak content, FULL or DIFF.
      "encryptionOptions": {
        "certPath": "A String", # Path to the Certificate (.cer) in Cloud Storage, in the form `gs://bucketName/fileName`. The instance must have write permissions to the bucket and read access to the file.
        "pvkPassword": "A String", # Password that encrypts the private key
        "pvkPath": "A String", # Path to the Certificate Private Key (.pvk) in Cloud Storage, in the form `gs://bucketName/fileName`. The instance must have write permissions to the bucket and read access to the file.
      },
      "noRecovery": True or False, # Whether or not the backup importing will restore database with NORECOVERY option Applies only to Cloud SQL for SQL Server.
      "recoveryOnly": True or False, # Whether or not the backup importing request will just bring database online without downloading Bak content only one of "no_recovery" and "recovery_only" can be true otherwise error will return. Applies only to Cloud SQL for SQL Server.
      "stopAt": "A String", # Optional. The timestamp when the import should stop. This timestamp is in the [RFC 3339](https://tools.ietf.org/html/rfc3339) format (for example, `2023-10-01T16:19:00.094`). This field is equivalent to the STOPAT keyword and applies to Cloud SQL for SQL Server only.
      "stopAtMark": "A String", # Optional. The marked transaction where the import should stop. This field is equivalent to the STOPATMARK keyword and applies to Cloud SQL for SQL Server only.
      "striped": True or False, # Whether or not the backup set being restored is striped. Applies only to Cloud SQL for SQL Server.
    },
    "csvImportOptions": { # Options for importing data as CSV.
      "columns": [ # The columns to which CSV data is imported. If not specified, all columns of the database table are loaded with CSV data.
        "A String",
      ],
      "escapeCharacter": "A String", # Specifies the character that should appear before a data character that needs to be escaped.
      "fieldsTerminatedBy": "A String", # Specifies the character that separates columns within each row (line) of the file.
      "linesTerminatedBy": "A String", # This is used to separate lines. If a line does not contain all fields, the rest of the columns are set to their default values.
      "quoteCharacter": "A String", # Specifies the quoting character to be used when a data value is quoted.
      "table": "A String", # The table to which CSV data is imported.
    },
    "database": "A String", # The target database for the import. If `fileType` is `SQL`, this field is required only if the import file does not specify a database, and is overridden by any database specification in the import file. If `fileType` is `CSV`, one database must be specified.
    "fileType": "A String", # The file type for the specified uri. * `SQL`: The file contains SQL statements. * `CSV`: The file contains CSV data. * `BAK`: The file contains backup data for a SQL Server instance.
    "importUser": "A String", # The PostgreSQL user for this import operation. PostgreSQL instances only.
    "kind": "A String", # This is always `sql#importContext`.
    "sqlImportOptions": { # Optional. Options for importing data from SQL statements.
      "parallel": True or False, # Optional. Whether or not the import should be parallel.
      "postgresImportOptions": { # Optional. Options for importing from a Cloud SQL for PostgreSQL instance.
        "clean": True or False, # Optional. The --clean flag for the pg_restore utility. This flag applies only if you enabled Cloud SQL to import files in parallel.
        "ifExists": True or False, # Optional. The --if-exists flag for the pg_restore utility. This flag applies only if you enabled Cloud SQL to import files in parallel.
      },
      "threads": 42, # Optional. The number of threads to use for parallel import.
    },
    "uri": "A String", # Path to the import file in Cloud Storage, in the form `gs://bucketName/fileName`. Compressed gzip files (.gz) are supported when `fileType` is `SQL`. The instance must have write permissions to the bucket and read access to the file.
  },
  "insertTime": "A String", # The time this operation was enqueued in UTC timezone in [RFC 3339](https://tools.ietf.org/html/rfc3339) format, for example `2012-11-15T16:19:00.094Z`.
  "kind": "A String", # This is always `sql#operation`.
  "name": "A String", # An identifier that uniquely identifies the operation. You can use this identifier to retrieve the Operations resource that has information about the operation.
  "operationType": "A String", # The type of the operation. Valid values are: * `CREATE` * `DELETE` * `UPDATE` * `RESTART` * `IMPORT` * `EXPORT` * `BACKUP_VOLUME` * `RESTORE_VOLUME` * `CREATE_USER` * `DELETE_USER` * `CREATE_DATABASE` * `DELETE_DATABASE`
  "selfLink": "A String", # The URI of this resource.
  "startTime": "A String", # The time this operation actually started in UTC timezone in [RFC 3339](https://tools.ietf.org/html/rfc3339) format, for example `2012-11-15T16:19:00.094Z`.
  "status": "A String", # The status of an operation.
  "targetId": "A String", # Name of the database instance related to this operation.
  "targetLink": "A String",
  "targetProject": "A String", # The project ID of the target instance related to this operation.
  "user": "A String", # The email address of the user who initiated this operation.
}
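The `Operation` resource above nests failures two levels deep (`error.errors[]`), each entry carrying a `code` and `message`. A small client-side helper can flatten them for logging; this is an illustrative sketch (the helper name is not part of the API surface), assuming the operation dict has the shape documented above:

```python
def operation_error_messages(operation):
    """Flatten error messages out of a sql#operation resource dict.

    Returns an empty list when the operation has no `error` field,
    which is the case for successful operations.
    """
    wrapper = operation.get("error", {})
    return [err.get("message", "") for err in wrapper.get("errors", [])]


# Example operation resource shaped like the schema above.
failed_op = {
    "kind": "sql#operation",
    "status": "DONE",
    "error": {
        "kind": "sql#operationErrors",
        "errors": [
            {"kind": "sql#operationError", "code": "INTERNAL_ERROR", "message": "import failed"},
        ],
    },
}
```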
promoteReplica(project, instance, failover=None, x__xgafv=None)
Promotes the read replica instance to be an independent Cloud SQL primary instance. Using this operation might cause your instance to restart.

Args:
  project: string, ID of the project that contains the read replica. (required)
  instance: string, Cloud SQL read replica instance name. (required)
  failover: boolean, Set to true to invoke a replica failover to the designated DR replica. As part of replica failover, the promote operation attempts to add the original primary instance as a replica of the promoted DR replica when the original primary instance comes back online. If set to false or not specified, then the original primary instance becomes an independent Cloud SQL primary instance. Only applicable to MySQL.
  x__xgafv: string, V1 error format.
    Allowed values
      1 - v1 error format
      2 - v2 error format

Returns:
  An object of the form:

    { # An Operation resource. For successful operations that return an Operation resource, only the fields relevant to the operation are populated in the resource.
  "acquireSsrsLeaseContext": { # Acquire SSRS lease context. # The context for acquire SSRS lease operation, if applicable.
    "duration": "A String", # Lease duration needed for the SSRS setup.
    "reportDatabase": "A String", # The report database to be used for the SSRS setup.
    "serviceLogin": "A String", # The username to be used as the service login to connect to the report database for SSRS setup.
    "setupLogin": "A String", # The username to be used as the setup login to connect to the database server for SSRS setup.
  },
  "apiWarning": { # An Admin API warning message. # An Admin API warning message.
    "code": "A String", # Code to uniquely identify the warning type.
    "message": "A String", # The warning message.
    "region": "A String", # The region name for REGION_UNREACHABLE warning.
  },
  "backupContext": { # Backup context. # The context for backup operation, if applicable.
    "backupId": "A String", # The identifier of the backup.
    "kind": "A String", # This is always `sql#backupContext`.
  },
  "endTime": "A String", # The time this operation finished in UTC timezone in [RFC 3339](https://tools.ietf.org/html/rfc3339) format, for example `2012-11-15T16:19:00.094Z`.
  "error": { # Database instance operation errors list wrapper. # If errors occurred during processing of this operation, this field will be populated.
    "errors": [ # The list of errors encountered while processing this operation.
      { # Database instance operation error.
        "code": "A String", # Identifies the specific error that occurred.
        "kind": "A String", # This is always `sql#operationError`.
        "message": "A String", # Additional information about the error encountered.
      },
    ],
    "kind": "A String", # This is always `sql#operationErrors`.
  },
  "exportContext": { # Database instance export context. # The context for export operation, if applicable.
    "bakExportOptions": { # Options for exporting BAK files (SQL Server-only)
      "bakType": "A String", # Type of this bak file will be export, FULL or DIFF, SQL Server only
      "copyOnly": True or False, # Deprecated: copy_only is deprecated. Use differential_base instead
      "differentialBase": True or False, # Whether or not the backup can be used as a differential base copy_only backup can not be served as differential base
      "exportLogEndTime": "A String", # Optional. The end timestamp when transaction log will be included in the export operation. [RFC 3339](https://tools.ietf.org/html/rfc3339) format (for example, `2023-10-01T16:19:00.094`) in UTC. When omitted, all available logs until current time will be included. Only applied to Cloud SQL for SQL Server.
      "exportLogStartTime": "A String", # Optional. The begin timestamp when transaction log will be included in the export operation. [RFC 3339](https://tools.ietf.org/html/rfc3339) format (for example, `2023-10-01T16:19:00.094`) in UTC. When omitted, all available logs from the beginning of retention period will be included. Only applied to Cloud SQL for SQL Server.
      "stripeCount": 42, # Option for specifying how many stripes to use for the export. If blank, and the value of the striped field is true, the number of stripes is automatically chosen.
      "striped": True or False, # Whether or not the export should be striped.
    },
    "csvExportOptions": { # Options for exporting data as CSV. `MySQL` and `PostgreSQL` instances only.
      "escapeCharacter": "A String", # Specifies the character that should appear before a data character that needs to be escaped.
      "fieldsTerminatedBy": "A String", # Specifies the character that separates columns within each row (line) of the file.
      "linesTerminatedBy": "A String", # This is used to separate lines. If a line does not contain all fields, the rest of the columns are set to their default values.
      "quoteCharacter": "A String", # Specifies the quoting character to be used when a data value is quoted.
      "selectQuery": "A String", # The select query used to extract the data.
    },
    "databases": [ # Databases to be exported. `MySQL instances:` If `fileType` is `SQL` and no database is specified, all databases are exported, except for the `mysql` system database. If `fileType` is `CSV`, you can specify one database, either by using this property or by using the `csvExportOptions.selectQuery` property, which takes precedence over this property. `PostgreSQL instances:` You must specify one database to be exported. If `fileType` is `CSV`, this database must match the one specified in the `csvExportOptions.selectQuery` property. `SQL Server instances:` You must specify one database to be exported, and the `fileType` must be `BAK`.
      "A String",
    ],
    "fileType": "A String", # The file type for the specified uri.
    "kind": "A String", # This is always `sql#exportContext`.
    "offload": True or False, # Option for export offload.
    "sqlExportOptions": { # Options for exporting data as SQL statements.
      "mysqlExportOptions": { # Options for exporting from MySQL.
        "masterData": 42, # Option to include SQL statement required to set up replication. If set to `1`, the dump file includes a CHANGE MASTER TO statement with the binary log coordinates, and --set-gtid-purged is set to ON. If set to `2`, the CHANGE MASTER TO statement is written as a SQL comment and has no effect. If set to any value other than `1`, --set-gtid-purged is set to OFF.
      },
      "parallel": True or False, # Optional. Whether or not the export should be parallel.
      "postgresExportOptions": { # Options for exporting from a Cloud SQL for PostgreSQL instance.
        "clean": True or False, # Optional. Use this option to include DROP SQL statements. These statements are used to delete database objects before running the import operation.
        "ifExists": True or False, # Optional. Option to include an IF EXISTS SQL statement with each DROP statement produced by clean.
      },
      "schemaOnly": True or False, # Export only schemas.
      "tables": [ # Tables to export, or that were exported, from the specified database. If you specify tables, specify one and only one database. For PostgreSQL instances, you can specify only one table.
        "A String",
      ],
      "threads": 42, # Optional. The number of threads to use for parallel export.
    },
    "uri": "A String", # The path to the file in Google Cloud Storage where the export will be stored. The URI is in the form `gs://bucketName/fileName`. If the file already exists, the request succeeds, but the operation fails. If `fileType` is `SQL` and the filename ends with .gz, the contents are compressed.
  },
  "importContext": { # Database instance import context. # The context for import operation, if applicable.
    "bakImportOptions": { # Import parameters specific to SQL Server .BAK files
      "bakType": "A String", # Type of the bak content, FULL or DIFF.
      "encryptionOptions": {
        "certPath": "A String", # Path to the Certificate (.cer) in Cloud Storage, in the form `gs://bucketName/fileName`. The instance must have write permissions to the bucket and read access to the file.
        "pvkPassword": "A String", # Password that encrypts the private key
        "pvkPath": "A String", # Path to the Certificate Private Key (.pvk) in Cloud Storage, in the form `gs://bucketName/fileName`. The instance must have write permissions to the bucket and read access to the file.
      },
      "noRecovery": True or False, # Whether or not the backup importing will restore database with NORECOVERY option Applies only to Cloud SQL for SQL Server.
      "recoveryOnly": True or False, # Whether or not the backup importing request will just bring database online without downloading Bak content only one of "no_recovery" and "recovery_only" can be true otherwise error will return. Applies only to Cloud SQL for SQL Server.
      "stopAt": "A String", # Optional. The timestamp when the import should stop. This timestamp is in the [RFC 3339](https://tools.ietf.org/html/rfc3339) format (for example, `2023-10-01T16:19:00.094`). This field is equivalent to the STOPAT keyword and applies to Cloud SQL for SQL Server only.
      "stopAtMark": "A String", # Optional. The marked transaction where the import should stop. This field is equivalent to the STOPATMARK keyword and applies to Cloud SQL for SQL Server only.
      "striped": True or False, # Whether or not the backup set being restored is striped. Applies only to Cloud SQL for SQL Server.
    },
    "csvImportOptions": { # Options for importing data as CSV.
      "columns": [ # The columns to which CSV data is imported. If not specified, all columns of the database table are loaded with CSV data.
        "A String",
      ],
      "escapeCharacter": "A String", # Specifies the character that should appear before a data character that needs to be escaped.
      "fieldsTerminatedBy": "A String", # Specifies the character that separates columns within each row (line) of the file.
      "linesTerminatedBy": "A String", # This is used to separate lines. If a line does not contain all fields, the rest of the columns are set to their default values.
      "quoteCharacter": "A String", # Specifies the quoting character to be used when a data value is quoted.
      "table": "A String", # The table to which CSV data is imported.
    },
    "database": "A String", # The target database for the import. If `fileType` is `SQL`, this field is required only if the import file does not specify a database, and is overridden by any database specification in the import file. If `fileType` is `CSV`, one database must be specified.
    "fileType": "A String", # The file type for the specified uri. * `SQL`: The file contains SQL statements. * `CSV`: The file contains CSV data. * `BAK`: The file contains backup data for a SQL Server instance.
    "importUser": "A String", # The PostgreSQL user for this import operation. PostgreSQL instances only.
    "kind": "A String", # This is always `sql#importContext`.
    "sqlImportOptions": { # Optional. Options for importing data from SQL statements.
      "parallel": True or False, # Optional. Whether or not the import should be parallel.
      "postgresImportOptions": { # Optional. Options for importing from a Cloud SQL for PostgreSQL instance.
        "clean": True or False, # Optional. The --clean flag for the pg_restore utility. This flag applies only if you enabled Cloud SQL to import files in parallel.
        "ifExists": True or False, # Optional. The --if-exists flag for the pg_restore utility. This flag applies only if you enabled Cloud SQL to import files in parallel.
      },
      "threads": 42, # Optional. The number of threads to use for parallel import.
    },
    "uri": "A String", # Path to the import file in Cloud Storage, in the form `gs://bucketName/fileName`. Compressed gzip files (.gz) are supported when `fileType` is `SQL`. The instance must have write permissions to the bucket and read access to the file.
  },
  "insertTime": "A String", # The time this operation was enqueued in UTC timezone in [RFC 3339](https://tools.ietf.org/html/rfc3339) format, for example `2012-11-15T16:19:00.094Z`.
  "kind": "A String", # This is always `sql#operation`.
  "name": "A String", # An identifier that uniquely identifies the operation. You can use this identifier to retrieve the Operations resource that has information about the operation.
  "operationType": "A String", # The type of the operation. Valid values are: * `CREATE` * `DELETE` * `UPDATE` * `RESTART` * `IMPORT` * `EXPORT` * `BACKUP_VOLUME` * `RESTORE_VOLUME` * `CREATE_USER` * `DELETE_USER` * `CREATE_DATABASE` * `DELETE_DATABASE`
  "selfLink": "A String", # The URI of this resource.
  "startTime": "A String", # The time this operation actually started in UTC timezone in [RFC 3339](https://tools.ietf.org/html/rfc3339) format, for example `2012-11-15T16:19:00.094Z`.
  "status": "A String", # The status of an operation.
  "targetId": "A String", # Name of the database instance related to this operation.
  "targetLink": "A String",
  "targetProject": "A String", # The project ID of the target instance related to this operation.
  "user": "A String", # The email address of the user who initiated this operation.
}
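`promoteReplica` returns a long-running Operation; callers typically poll the operation's `status` field until it reads `DONE` before relying on the promoted instance. A minimal polling sketch, assuming a caller-supplied `fetch` callable (for example, a wrapper around `service.operations().get(...).execute()`); the function name and `max_polls` parameter are illustrative, not API names:

```python
def wait_until_done(fetch, operation_name, max_polls=60):
    """Poll an operation by name until its status is DONE.

    `fetch` is any callable mapping an operation name to the
    operation resource dict. In real code you would also sleep
    between polls and inspect the `error` field of the result.
    """
    for _ in range(max_polls):
        op = fetch(operation_name)
        if op.get("status") == "DONE":
            return op
    raise TimeoutError(f"operation {operation_name} did not finish")
```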
reencrypt(project, instance, body=None, x__xgafv=None)
Reencrypt CMEK instance with latest key version.

Args:
  project: string, ID of the project that contains the instance. (required)
  instance: string, Cloud SQL instance ID. This does not include the project ID. (required)
  body: object, The request body.
    The object takes the form of:

{ # Database Instance reencrypt request.
  "backupReencryptionConfig": { # Backup Reencryption Config # Configuration specific to backup re-encryption
    "backupLimit": 42, # Backup re-encryption limit
    "backupType": "A String", # Type of backups users want to re-encrypt.
  },
}

  x__xgafv: string, V1 error format.
    Allowed values
      1 - v1 error format
      2 - v2 error format

Returns:
  An object of the form:

    { # An Operation resource. For successful operations that return an Operation resource, only the fields relevant to the operation are populated in the resource.
  "acquireSsrsLeaseContext": { # Acquire SSRS lease context. # The context for acquire SSRS lease operation, if applicable.
    "duration": "A String", # Lease duration needed for the SSRS setup.
    "reportDatabase": "A String", # The report database to be used for the SSRS setup.
    "serviceLogin": "A String", # The username to be used as the service login to connect to the report database for SSRS setup.
    "setupLogin": "A String", # The username to be used as the setup login to connect to the database server for SSRS setup.
  },
  "apiWarning": { # An Admin API warning message. # An Admin API warning message.
    "code": "A String", # Code to uniquely identify the warning type.
    "message": "A String", # The warning message.
    "region": "A String", # The region name for REGION_UNREACHABLE warning.
  },
  "backupContext": { # Backup context. # The context for backup operation, if applicable.
    "backupId": "A String", # The identifier of the backup.
    "kind": "A String", # This is always `sql#backupContext`.
  },
  "endTime": "A String", # The time this operation finished in UTC timezone in [RFC 3339](https://tools.ietf.org/html/rfc3339) format, for example `2012-11-15T16:19:00.094Z`.
  "error": { # Database instance operation errors list wrapper. # If errors occurred during processing of this operation, this field will be populated.
    "errors": [ # The list of errors encountered while processing this operation.
      { # Database instance operation error.
        "code": "A String", # Identifies the specific error that occurred.
        "kind": "A String", # This is always `sql#operationError`.
        "message": "A String", # Additional information about the error encountered.
      },
    ],
    "kind": "A String", # This is always `sql#operationErrors`.
  },
  "exportContext": { # Database instance export context. # The context for export operation, if applicable.
    "bakExportOptions": { # Options for exporting BAK files (SQL Server-only)
      "bakType": "A String", # Type of this bak file will be export, FULL or DIFF, SQL Server only
      "copyOnly": True or False, # Deprecated: copy_only is deprecated. Use differential_base instead
      "differentialBase": True or False, # Whether or not the backup can be used as a differential base copy_only backup can not be served as differential base
      "exportLogEndTime": "A String", # Optional. The end timestamp when transaction log will be included in the export operation. [RFC 3339](https://tools.ietf.org/html/rfc3339) format (for example, `2023-10-01T16:19:00.094`) in UTC. When omitted, all available logs until current time will be included. Only applied to Cloud SQL for SQL Server.
      "exportLogStartTime": "A String", # Optional. The begin timestamp when transaction log will be included in the export operation. [RFC 3339](https://tools.ietf.org/html/rfc3339) format (for example, `2023-10-01T16:19:00.094`) in UTC. When omitted, all available logs from the beginning of retention period will be included. Only applied to Cloud SQL for SQL Server.
      "stripeCount": 42, # Option for specifying how many stripes to use for the export. If blank, and the value of the striped field is true, the number of stripes is automatically chosen.
      "striped": True or False, # Whether or not the export should be striped.
    },
    "csvExportOptions": { # Options for exporting data as CSV. `MySQL` and `PostgreSQL` instances only.
      "escapeCharacter": "A String", # Specifies the character that should appear before a data character that needs to be escaped.
      "fieldsTerminatedBy": "A String", # Specifies the character that separates columns within each row (line) of the file.
      "linesTerminatedBy": "A String", # This is used to separate lines. If a line does not contain all fields, the rest of the columns are set to their default values.
      "quoteCharacter": "A String", # Specifies the quoting character to be used when a data value is quoted.
      "selectQuery": "A String", # The select query used to extract the data.
    },
    "databases": [ # Databases to be exported. `MySQL instances:` If `fileType` is `SQL` and no database is specified, all databases are exported, except for the `mysql` system database. If `fileType` is `CSV`, you can specify one database, either by using this property or by using the `csvExportOptions.selectQuery` property, which takes precedence over this property. `PostgreSQL instances:` You must specify one database to be exported. If `fileType` is `CSV`, this database must match the one specified in the `csvExportOptions.selectQuery` property. `SQL Server instances:` You must specify one database to be exported, and the `fileType` must be `BAK`.
      "A String",
    ],
    "fileType": "A String", # The file type for the specified uri.
    "kind": "A String", # This is always `sql#exportContext`.
    "offload": True or False, # Option for export offload.
    "sqlExportOptions": { # Options for exporting data as SQL statements.
      "mysqlExportOptions": { # Options for exporting from MySQL.
        "masterData": 42, # Option to include SQL statement required to set up replication. If set to `1`, the dump file includes a CHANGE MASTER TO statement with the binary log coordinates, and --set-gtid-purged is set to ON. If set to `2`, the CHANGE MASTER TO statement is written as a SQL comment and has no effect. If set to any value other than `1`, --set-gtid-purged is set to OFF.
      },
      "parallel": True or False, # Optional. Whether or not the export should be parallel.
      "postgresExportOptions": { # Options for exporting from a Cloud SQL for PostgreSQL instance.
        "clean": True or False, # Optional. Use this option to include DROP SQL statements. These statements are used to delete database objects before running the import operation.
        "ifExists": True or False, # Optional. Option to include an IF EXISTS SQL statement with each DROP statement produced by clean.
      },
      "schemaOnly": True or False, # Export only schemas.
      "tables": [ # Tables to export, or that were exported, from the specified database. If you specify tables, specify one and only one database. For PostgreSQL instances, you can specify only one table.
        "A String",
      ],
      "threads": 42, # Optional. The number of threads to use for parallel export.
    },
    "uri": "A String", # The path to the file in Google Cloud Storage where the export will be stored. The URI is in the form `gs://bucketName/fileName`. If the file already exists, the request succeeds, but the operation fails. If `fileType` is `SQL` and the filename ends with .gz, the contents are compressed.
  },
  "importContext": { # Database instance import context. # The context for import operation, if applicable.
    "bakImportOptions": { # Import parameters specific to SQL Server .BAK files
      "bakType": "A String", # Type of the bak content, FULL or DIFF.
      "encryptionOptions": {
        "certPath": "A String", # Path to the Certificate (.cer) in Cloud Storage, in the form `gs://bucketName/fileName`. The instance must have write permissions to the bucket and read access to the file.
        "pvkPassword": "A String", # Password that encrypts the private key
        "pvkPath": "A String", # Path to the Certificate Private Key (.pvk) in Cloud Storage, in the form `gs://bucketName/fileName`. The instance must have write permissions to the bucket and read access to the file.
      },
      "noRecovery": True or False, # Whether or not the backup importing will restore database with NORECOVERY option Applies only to Cloud SQL for SQL Server.
      "recoveryOnly": True or False, # Whether or not the backup importing request will just bring database online without downloading Bak content only one of "no_recovery" and "recovery_only" can be true otherwise error will return. Applies only to Cloud SQL for SQL Server.
      "stopAt": "A String", # Optional. The timestamp when the import should stop. This timestamp is in the [RFC 3339](https://tools.ietf.org/html/rfc3339) format (for example, `2023-10-01T16:19:00.094`). This field is equivalent to the STOPAT keyword and applies to Cloud SQL for SQL Server only.
      "stopAtMark": "A String", # Optional. The marked transaction where the import should stop. This field is equivalent to the STOPATMARK keyword and applies to Cloud SQL for SQL Server only.
      "striped": True or False, # Whether or not the backup set being restored is striped. Applies only to Cloud SQL for SQL Server.
    },
    "csvImportOptions": { # Options for importing data as CSV.
      "columns": [ # The columns to which CSV data is imported. If not specified, all columns of the database table are loaded with CSV data.
        "A String",
      ],
      "escapeCharacter": "A String", # Specifies the character that should appear before a data character that needs to be escaped.
      "fieldsTerminatedBy": "A String", # Specifies the character that separates columns within each row (line) of the file.
      "linesTerminatedBy": "A String", # This is used to separate lines. If a line does not contain all fields, the rest of the columns are set to their default values.
      "quoteCharacter": "A String", # Specifies the quoting character to be used when a data value is quoted.
      "table": "A String", # The table to which CSV data is imported.
    },
    "database": "A String", # The target database for the import. If `fileType` is `SQL`, this field is required only if the import file does not specify a database, and is overridden by any database specification in the import file. If `fileType` is `CSV`, one database must be specified.
    "fileType": "A String", # The file type for the specified uri. * `SQL`: The file contains SQL statements. * `CSV`: The file contains CSV data. * `BAK`: The file contains backup data for a SQL Server instance.
    "importUser": "A String", # The PostgreSQL user for this import operation. PostgreSQL instances only.
    "kind": "A String", # This is always `sql#importContext`.
    "sqlImportOptions": { # Optional. Options for importing data from SQL statements.
      "parallel": True or False, # Optional. Whether or not the import should be parallel.
      "postgresImportOptions": { # Optional. Options for importing from a Cloud SQL for PostgreSQL instance.
        "clean": True or False, # Optional. The --clean flag for the pg_restore utility. This flag applies only if you enabled Cloud SQL to import files in parallel.
        "ifExists": True or False, # Optional. The --if-exists flag for the pg_restore utility. This flag applies only if you enabled Cloud SQL to import files in parallel.
      },
      "threads": 42, # Optional. The number of threads to use for parallel import.
    },
    "uri": "A String", # Path to the import file in Cloud Storage, in the form `gs://bucketName/fileName`. Compressed gzip files (.gz) are supported when `fileType` is `SQL`. The instance must have write permissions to the bucket and read access to the file.
  },
  "insertTime": "A String", # The time this operation was enqueued in UTC timezone in [RFC 3339](https://tools.ietf.org/html/rfc3339) format, for example `2012-11-15T16:19:00.094Z`.
  "kind": "A String", # This is always `sql#operation`.
  "name": "A String", # An identifier that uniquely identifies the operation. You can use this identifier to retrieve the Operations resource that has information about the operation.
  "operationType": "A String", # The type of the operation. Valid values are: * `CREATE` * `DELETE` * `UPDATE` * `RESTART` * `IMPORT` * `EXPORT` * `BACKUP_VOLUME` * `RESTORE_VOLUME` * `CREATE_USER` * `DELETE_USER` * `CREATE_DATABASE` * `DELETE_DATABASE`
  "selfLink": "A String", # The URI of this resource.
  "startTime": "A String", # The time this operation actually started in UTC timezone in [RFC 3339](https://tools.ietf.org/html/rfc3339) format, for example `2012-11-15T16:19:00.094Z`.
  "status": "A String", # The status of an operation.
  "targetId": "A String", # Name of the database instance related to this operation.
  "targetLink": "A String",
  "targetProject": "A String", # The project ID of the target instance related to this operation.
  "user": "A String", # The email address of the user who initiated this operation.
}
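The reencrypt request body nests its two settings under `backupReencryptionConfig`; a small builder keeps the nesting straight. This is an illustrative sketch (the function name is hypothetical; the field names come from the request schema above, and `backupType` takes a string per that schema):

```python
def build_reencrypt_body(backup_limit=None, backup_type=None):
    """Assemble a Database Instance reencrypt request body.

    Omitted arguments are left out of the config entirely rather
    than sent as nulls.
    """
    config = {}
    if backup_limit is not None:
        config["backupLimit"] = backup_limit  # backup re-encryption limit
    if backup_type is not None:
        config["backupType"] = backup_type  # type of backups to re-encrypt
    return {"backupReencryptionConfig": config}
```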
releaseSsrsLease(project, instance, x__xgafv=None)
Release a lease for the setup of SQL Server Reporting Services (SSRS).

Args:
  project: string, Required. The ID of the project that contains the instance (Example: project-id). (required)
  instance: string, Required. The Cloud SQL instance ID. This doesn't include the project ID. It's composed of lowercase letters, numbers, and hyphens, and it must start with a letter. The total length must be 98 characters or less (Example: instance-id). (required)
  x__xgafv: string, V1 error format.
    Allowed values
      1 - v1 error format
      2 - v2 error format

Returns:
  An object of the form:

    { # The response for the release of the SSRS lease.
  "operationId": "A String", # The operation ID.
}
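The `instance` argument's documented format (lowercase letters, numbers, and hyphens; must start with a letter; at most 98 characters) can be checked client-side before issuing the request. A minimal sketch of that check; the helper name is illustrative:

```python
import re

# Lowercase letter first, then lowercase letters, digits, or hyphens.
_INSTANCE_ID_RE = re.compile(r"[a-z][a-z0-9-]*")

def is_valid_instance_id(instance_id):
    """Check the documented Cloud SQL instance ID format:
    lowercase letters, digits, and hyphens, starting with a
    letter, with a total length of 98 characters or less."""
    return (len(instance_id) <= 98
            and _INSTANCE_ID_RE.fullmatch(instance_id) is not None)
```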
resetSslConfig(project, instance, x__xgafv=None)
Deletes all client certificates and generates a new server SSL certificate for the instance.

Args:
  project: string, Project ID of the project that contains the instance. (required)
  instance: string, Cloud SQL instance ID. This does not include the project ID. (required)
  x__xgafv: string, V1 error format.
    Allowed values
      1 - v1 error format
      2 - v2 error format

Returns:
  An object of the form:

    { # An Operation resource. For successful operations that return an Operation resource, only the fields relevant to the operation are populated in the resource.
  "acquireSsrsLeaseContext": { # Acquire SSRS lease context. # The context for acquire SSRS lease operation, if applicable.
    "duration": "A String", # Lease duration needed for the SSRS setup.
    "reportDatabase": "A String", # The report database to be used for the SSRS setup.
    "serviceLogin": "A String", # The username to be used as the service login to connect to the report database for SSRS setup.
    "setupLogin": "A String", # The username to be used as the setup login to connect to the database server for SSRS setup.
  },
  "apiWarning": { # An Admin API warning message. # An Admin API warning message.
    "code": "A String", # Code to uniquely identify the warning type.
    "message": "A String", # The warning message.
    "region": "A String", # The region name for REGION_UNREACHABLE warning.
  },
  "backupContext": { # Backup context. # The context for backup operation, if applicable.
    "backupId": "A String", # The identifier of the backup.
    "kind": "A String", # This is always `sql#backupContext`.
  },
  "endTime": "A String", # The time this operation finished in UTC timezone in [RFC 3339](https://tools.ietf.org/html/rfc3339) format, for example `2012-11-15T16:19:00.094Z`.
  "error": { # Database instance operation errors list wrapper. # If errors occurred during processing of this operation, this field will be populated.
    "errors": [ # The list of errors encountered while processing this operation.
      { # Database instance operation error.
        "code": "A String", # Identifies the specific error that occurred.
        "kind": "A String", # This is always `sql#operationError`.
        "message": "A String", # Additional information about the error encountered.
      },
    ],
    "kind": "A String", # This is always `sql#operationErrors`.
  },
  "exportContext": { # Database instance export context. # The context for export operation, if applicable.
    "bakExportOptions": { # Options for exporting BAK files (SQL Server-only)
      "bakType": "A String", # Type of this bak file will be export, FULL or DIFF, SQL Server only
      "copyOnly": True or False, # Deprecated: copy_only is deprecated. Use differential_base instead
      "differentialBase": True or False, # Whether or not the backup can be used as a differential base copy_only backup can not be served as differential base
      "exportLogEndTime": "A String", # Optional. The end timestamp when transaction log will be included in the export operation. [RFC 3339](https://tools.ietf.org/html/rfc3339) format (for example, `2023-10-01T16:19:00.094`) in UTC. When omitted, all available logs until current time will be included. Only applied to Cloud SQL for SQL Server.
      "exportLogStartTime": "A String", # Optional. The begin timestamp when transaction log will be included in the export operation. [RFC 3339](https://tools.ietf.org/html/rfc3339) format (for example, `2023-10-01T16:19:00.094`) in UTC. When omitted, all available logs from the beginning of retention period will be included. Only applied to Cloud SQL for SQL Server.
      "stripeCount": 42, # Option for specifying how many stripes to use for the export. If blank, and the value of the striped field is true, the number of stripes is automatically chosen.
      "striped": True or False, # Whether or not the export should be striped.
    },
    "csvExportOptions": { # Options for exporting data as CSV. `MySQL` and `PostgreSQL` instances only.
      "escapeCharacter": "A String", # Specifies the character that should appear before a data character that needs to be escaped.
      "fieldsTerminatedBy": "A String", # Specifies the character that separates columns within each row (line) of the file.
      "linesTerminatedBy": "A String", # This is used to separate lines. If a line does not contain all fields, the rest of the columns are set to their default values.
      "quoteCharacter": "A String", # Specifies the quoting character to be used when a data value is quoted.
      "selectQuery": "A String", # The select query used to extract the data.
    },
    "databases": [ # Databases to be exported. `MySQL instances:` If `fileType` is `SQL` and no database is specified, all databases are exported, except for the `mysql` system database. If `fileType` is `CSV`, you can specify one database, either by using this property or by using the `csvExportOptions.selectQuery` property, which takes precedence over this property. `PostgreSQL instances:` You must specify one database to be exported. If `fileType` is `CSV`, this database must match the one specified in the `csvExportOptions.selectQuery` property. `SQL Server instances:` You must specify one database to be exported, and the `fileType` must be `BAK`.
      "A String",
    ],
    "fileType": "A String", # The file type for the specified uri.
    "kind": "A String", # This is always `sql#exportContext`.
    "offload": True or False, # Option for export offload.
    "sqlExportOptions": { # Options for exporting data as SQL statements.
      "mysqlExportOptions": { # Options for exporting from MySQL.
        "masterData": 42, # Option to include SQL statement required to set up replication. If set to `1`, the dump file includes a CHANGE MASTER TO statement with the binary log coordinates, and --set-gtid-purged is set to ON. If set to `2`, the CHANGE MASTER TO statement is written as a SQL comment and has no effect. If set to any value other than `1`, --set-gtid-purged is set to OFF.
      },
      "parallel": True or False, # Optional. Whether or not the export should be parallel.
      "postgresExportOptions": { # Options for exporting from a Cloud SQL for PostgreSQL instance.
        "clean": True or False, # Optional. Use this option to include DROP SQL statements. These statements are used to delete database objects before running the import operation.
        "ifExists": True or False, # Optional. Option to include an IF EXISTS SQL statement with each DROP statement produced by clean.
      },
      "schemaOnly": True or False, # Export only schemas.
      "tables": [ # Tables to export, or that were exported, from the specified database. If you specify tables, specify one and only one database. For PostgreSQL instances, you can specify only one table.
        "A String",
      ],
      "threads": 42, # Optional. The number of threads to use for parallel export.
    },
    "uri": "A String", # The path to the file in Google Cloud Storage where the export will be stored. The URI is in the form `gs://bucketName/fileName`. If the file already exists, the request succeeds, but the operation fails. If `fileType` is `SQL` and the filename ends with .gz, the contents are compressed.
  },
  "importContext": { # Database instance import context. # The context for import operation, if applicable.
    "bakImportOptions": { # Import parameters specific to SQL Server .BAK files
      "bakType": "A String", # Type of the bak content, FULL or DIFF.
      "encryptionOptions": {
        "certPath": "A String", # Path to the Certificate (.cer) in Cloud Storage, in the form `gs://bucketName/fileName`. The instance must have write permissions to the bucket and read access to the file.
        "pvkPassword": "A String", # Password that encrypts the private key
        "pvkPath": "A String", # Path to the Certificate Private Key (.pvk) in Cloud Storage, in the form `gs://bucketName/fileName`. The instance must have write permissions to the bucket and read access to the file.
      },
      "noRecovery": True or False, # Whether or not the backup importing will restore database with NORECOVERY option Applies only to Cloud SQL for SQL Server.
      "recoveryOnly": True or False, # Whether or not the backup importing request will just bring database online without downloading Bak content only one of "no_recovery" and "recovery_only" can be true otherwise error will return. Applies only to Cloud SQL for SQL Server.
      "stopAt": "A String", # Optional. The timestamp when the import should stop. This timestamp is in the [RFC 3339](https://tools.ietf.org/html/rfc3339) format (for example, `2023-10-01T16:19:00.094`). This field is equivalent to the STOPAT keyword and applies to Cloud SQL for SQL Server only.
      "stopAtMark": "A String", # Optional. The marked transaction where the import should stop. This field is equivalent to the STOPATMARK keyword and applies to Cloud SQL for SQL Server only.
      "striped": True or False, # Whether or not the backup set being restored is striped. Applies only to Cloud SQL for SQL Server.
    },
    "csvImportOptions": { # Options for importing data as CSV.
      "columns": [ # The columns to which CSV data is imported. If not specified, all columns of the database table are loaded with CSV data.
        "A String",
      ],
      "escapeCharacter": "A String", # Specifies the character that should appear before a data character that needs to be escaped.
      "fieldsTerminatedBy": "A String", # Specifies the character that separates columns within each row (line) of the file.
      "linesTerminatedBy": "A String", # This is used to separate lines. If a line does not contain all fields, the rest of the columns are set to their default values.
      "quoteCharacter": "A String", # Specifies the quoting character to be used when a data value is quoted.
      "table": "A String", # The table to which CSV data is imported.
    },
    "database": "A String", # The target database for the import. If `fileType` is `SQL`, this field is required only if the import file does not specify a database, and is overridden by any database specification in the import file. If `fileType` is `CSV`, one database must be specified.
    "fileType": "A String", # The file type for the specified uri. * `SQL`: The file contains SQL statements. * `CSV`: The file contains CSV data. * `BAK`: The file contains backup data for a SQL Server instance.
    "importUser": "A String", # The PostgreSQL user for this import operation. PostgreSQL instances only.
    "kind": "A String", # This is always `sql#importContext`.
    "sqlImportOptions": { # Optional. Options for importing data from SQL statements.
      "parallel": True or False, # Optional. Whether or not the import should be parallel.
      "postgresImportOptions": { # Optional. Options for importing from a Cloud SQL for PostgreSQL instance.
        "clean": True or False, # Optional. The --clean flag for the pg_restore utility. This flag applies only if you enabled Cloud SQL to import files in parallel.
        "ifExists": True or False, # Optional. The --if-exists flag for the pg_restore utility. This flag applies only if you enabled Cloud SQL to import files in parallel.
      },
      "threads": 42, # Optional. The number of threads to use for parallel import.
    },
    "uri": "A String", # Path to the import file in Cloud Storage, in the form `gs://bucketName/fileName`. Compressed gzip files (.gz) are supported when `fileType` is `SQL`. The instance must have write permissions to the bucket and read access to the file.
  },
  "insertTime": "A String", # The time this operation was enqueued in UTC timezone in [RFC 3339](https://tools.ietf.org/html/rfc3339) format, for example `2012-11-15T16:19:00.094Z`.
  "kind": "A String", # This is always `sql#operation`.
  "name": "A String", # An identifier that uniquely identifies the operation. You can use this identifier to retrieve the Operations resource that has information about the operation.
  "operationType": "A String", # The type of the operation. Valid values are: * `CREATE` * `DELETE` * `UPDATE` * `RESTART` * `IMPORT` * `EXPORT` * `BACKUP_VOLUME` * `RESTORE_VOLUME` * `CREATE_USER` * `DELETE_USER` * `CREATE_DATABASE` * `DELETE_DATABASE`
  "selfLink": "A String", # The URI of this resource.
  "startTime": "A String", # The time this operation actually started in UTC timezone in [RFC 3339](https://tools.ietf.org/html/rfc3339) format, for example `2012-11-15T16:19:00.094Z`.
  "status": "A String", # The status of an operation.
  "targetId": "A String", # Name of the database instance related to this operation.
  "targetLink": "A String",
  "targetProject": "A String", # The project ID of the target instance related to this operation.
  "user": "A String", # The email address of the user who initiated this operation.
}
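As a sketch only, a call to this method with the google-api-python-client library might look like the commented-out lines below; `my-project` and `my-instance` are placeholder names, and the live calls are left as comments because they require Application Default Credentials. The small helper shows how the `error.errors` list of the returned Operation resource can be inspected:

```python
# Sketch: inspect the Operation resource returned by resetSslConfig.
# The live API calls are commented out because they need credentials;
# 'my-project' and 'my-instance' are placeholders.

def operation_errors(operation):
    """Return the error messages from an Operation resource, if any."""
    wrapper = operation.get('error', {})
    return [e.get('message', '') for e in wrapper.get('errors', [])]

# from googleapiclient import discovery
# service = discovery.build('sqladmin', 'v1')
# operation = service.instances().resetSslConfig(
#     project='my-project', instance='my-instance').execute()
# if operation_errors(operation):
#     raise RuntimeError(operation_errors(operation))

# Offline example against the documented Operation shape:
sample = {
    'kind': 'sql#operation',
    'status': 'DONE',
    'error': {'kind': 'sql#operationErrors',
              'errors': [{'kind': 'sql#operationError',
                          'code': 'UNKNOWN', 'message': 'boom'}]},
}
print(operation_errors(sample))  # ['boom']
```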
restart(project, instance, x__xgafv=None)
Restarts a Cloud SQL instance.

Args:
  project: string, Project ID of the project that contains the instance to be restarted. (required)
  instance: string, Cloud SQL instance ID. This does not include the project ID. (required)
  x__xgafv: string, V1 error format.
    Allowed values
      1 - v1 error format
      2 - v2 error format

Returns:
  An object of the form:

    { # An Operation resource. For successful operations that return an Operation resource, only the fields relevant to the operation are populated in the resource.
  "acquireSsrsLeaseContext": { # Acquire SSRS lease context. # The context for acquire SSRS lease operation, if applicable.
    "duration": "A String", # Lease duration needed for the SSRS setup.
    "reportDatabase": "A String", # The report database to be used for the SSRS setup.
    "serviceLogin": "A String", # The username to be used as the service login to connect to the report database for SSRS setup.
    "setupLogin": "A String", # The username to be used as the setup login to connect to the database server for SSRS setup.
  },
  "apiWarning": { # An Admin API warning message. # An Admin API warning message.
    "code": "A String", # Code to uniquely identify the warning type.
    "message": "A String", # The warning message.
    "region": "A String", # The region name for REGION_UNREACHABLE warning.
  },
  "backupContext": { # Backup context. # The context for backup operation, if applicable.
    "backupId": "A String", # The identifier of the backup.
    "kind": "A String", # This is always `sql#backupContext`.
  },
  "endTime": "A String", # The time this operation finished in UTC timezone in [RFC 3339](https://tools.ietf.org/html/rfc3339) format, for example `2012-11-15T16:19:00.094Z`.
  "error": { # Database instance operation errors list wrapper. # If errors occurred during processing of this operation, this field will be populated.
    "errors": [ # The list of errors encountered while processing this operation.
      { # Database instance operation error.
        "code": "A String", # Identifies the specific error that occurred.
        "kind": "A String", # This is always `sql#operationError`.
        "message": "A String", # Additional information about the error encountered.
      },
    ],
    "kind": "A String", # This is always `sql#operationErrors`.
  },
  "exportContext": { # Database instance export context. # The context for export operation, if applicable.
    "bakExportOptions": { # Options for exporting BAK files (SQL Server-only)
      "bakType": "A String", # Type of this bak file will be export, FULL or DIFF, SQL Server only
      "copyOnly": True or False, # Deprecated: copy_only is deprecated. Use differential_base instead
      "differentialBase": True or False, # Whether or not the backup can be used as a differential base copy_only backup can not be served as differential base
      "exportLogEndTime": "A String", # Optional. The end timestamp when transaction log will be included in the export operation. [RFC 3339](https://tools.ietf.org/html/rfc3339) format (for example, `2023-10-01T16:19:00.094`) in UTC. When omitted, all available logs until current time will be included. Only applied to Cloud SQL for SQL Server.
      "exportLogStartTime": "A String", # Optional. The begin timestamp when transaction log will be included in the export operation. [RFC 3339](https://tools.ietf.org/html/rfc3339) format (for example, `2023-10-01T16:19:00.094`) in UTC. When omitted, all available logs from the beginning of retention period will be included. Only applied to Cloud SQL for SQL Server.
      "stripeCount": 42, # Option for specifying how many stripes to use for the export. If blank, and the value of the striped field is true, the number of stripes is automatically chosen.
      "striped": True or False, # Whether or not the export should be striped.
    },
    "csvExportOptions": { # Options for exporting data as CSV. `MySQL` and `PostgreSQL` instances only.
      "escapeCharacter": "A String", # Specifies the character that should appear before a data character that needs to be escaped.
      "fieldsTerminatedBy": "A String", # Specifies the character that separates columns within each row (line) of the file.
      "linesTerminatedBy": "A String", # This is used to separate lines. If a line does not contain all fields, the rest of the columns are set to their default values.
      "quoteCharacter": "A String", # Specifies the quoting character to be used when a data value is quoted.
      "selectQuery": "A String", # The select query used to extract the data.
    },
    "databases": [ # Databases to be exported. `MySQL instances:` If `fileType` is `SQL` and no database is specified, all databases are exported, except for the `mysql` system database. If `fileType` is `CSV`, you can specify one database, either by using this property or by using the `csvExportOptions.selectQuery` property, which takes precedence over this property. `PostgreSQL instances:` You must specify one database to be exported. If `fileType` is `CSV`, this database must match the one specified in the `csvExportOptions.selectQuery` property. `SQL Server instances:` You must specify one database to be exported, and the `fileType` must be `BAK`.
      "A String",
    ],
    "fileType": "A String", # The file type for the specified uri.
    "kind": "A String", # This is always `sql#exportContext`.
    "offload": True or False, # Option for export offload.
    "sqlExportOptions": { # Options for exporting data as SQL statements.
      "mysqlExportOptions": { # Options for exporting from MySQL.
        "masterData": 42, # Option to include SQL statement required to set up replication. If set to `1`, the dump file includes a CHANGE MASTER TO statement with the binary log coordinates, and --set-gtid-purged is set to ON. If set to `2`, the CHANGE MASTER TO statement is written as a SQL comment and has no effect. If set to any value other than `1`, --set-gtid-purged is set to OFF.
      },
      "parallel": True or False, # Optional. Whether or not the export should be parallel.
      "postgresExportOptions": { # Options for exporting from a Cloud SQL for PostgreSQL instance.
        "clean": True or False, # Optional. Use this option to include DROP SQL statements. These statements are used to delete database objects before running the import operation.
        "ifExists": True or False, # Optional. Option to include an IF EXISTS SQL statement with each DROP statement produced by clean.
      },
      "schemaOnly": True or False, # Export only schemas.
      "tables": [ # Tables to export, or that were exported, from the specified database. If you specify tables, specify one and only one database. For PostgreSQL instances, you can specify only one table.
        "A String",
      ],
      "threads": 42, # Optional. The number of threads to use for parallel export.
    },
    "uri": "A String", # The path to the file in Google Cloud Storage where the export will be stored. The URI is in the form `gs://bucketName/fileName`. If the file already exists, the request succeeds, but the operation fails. If `fileType` is `SQL` and the filename ends with .gz, the contents are compressed.
  },
  "importContext": { # Database instance import context. # The context for import operation, if applicable.
    "bakImportOptions": { # Import parameters specific to SQL Server .BAK files
      "bakType": "A String", # Type of the bak content, FULL or DIFF.
      "encryptionOptions": {
        "certPath": "A String", # Path to the Certificate (.cer) in Cloud Storage, in the form `gs://bucketName/fileName`. The instance must have write permissions to the bucket and read access to the file.
        "pvkPassword": "A String", # Password that encrypts the private key
        "pvkPath": "A String", # Path to the Certificate Private Key (.pvk) in Cloud Storage, in the form `gs://bucketName/fileName`. The instance must have write permissions to the bucket and read access to the file.
      },
      "noRecovery": True or False, # Whether or not the backup importing will restore database with NORECOVERY option Applies only to Cloud SQL for SQL Server.
      "recoveryOnly": True or False, # Whether or not the backup importing request will just bring database online without downloading Bak content only one of "no_recovery" and "recovery_only" can be true otherwise error will return. Applies only to Cloud SQL for SQL Server.
      "stopAt": "A String", # Optional. The timestamp when the import should stop. This timestamp is in the [RFC 3339](https://tools.ietf.org/html/rfc3339) format (for example, `2023-10-01T16:19:00.094`). This field is equivalent to the STOPAT keyword and applies to Cloud SQL for SQL Server only.
      "stopAtMark": "A String", # Optional. The marked transaction where the import should stop. This field is equivalent to the STOPATMARK keyword and applies to Cloud SQL for SQL Server only.
      "striped": True or False, # Whether or not the backup set being restored is striped. Applies only to Cloud SQL for SQL Server.
    },
    "csvImportOptions": { # Options for importing data as CSV.
      "columns": [ # The columns to which CSV data is imported. If not specified, all columns of the database table are loaded with CSV data.
        "A String",
      ],
      "escapeCharacter": "A String", # Specifies the character that should appear before a data character that needs to be escaped.
      "fieldsTerminatedBy": "A String", # Specifies the character that separates columns within each row (line) of the file.
      "linesTerminatedBy": "A String", # This is used to separate lines. If a line does not contain all fields, the rest of the columns are set to their default values.
      "quoteCharacter": "A String", # Specifies the quoting character to be used when a data value is quoted.
      "table": "A String", # The table to which CSV data is imported.
    },
    "database": "A String", # The target database for the import. If `fileType` is `SQL`, this field is required only if the import file does not specify a database, and is overridden by any database specification in the import file. If `fileType` is `CSV`, one database must be specified.
    "fileType": "A String", # The file type for the specified uri. * `SQL`: The file contains SQL statements. * `CSV`: The file contains CSV data. * `BAK`: The file contains backup data for a SQL Server instance.
    "importUser": "A String", # The PostgreSQL user for this import operation. PostgreSQL instances only.
    "kind": "A String", # This is always `sql#importContext`.
    "sqlImportOptions": { # Optional. Options for importing data from SQL statements.
      "parallel": True or False, # Optional. Whether or not the import should be parallel.
      "postgresImportOptions": { # Optional. Options for importing from a Cloud SQL for PostgreSQL instance.
        "clean": True or False, # Optional. The --clean flag for the pg_restore utility. This flag applies only if you enabled Cloud SQL to import files in parallel.
        "ifExists": True or False, # Optional. The --if-exists flag for the pg_restore utility. This flag applies only if you enabled Cloud SQL to import files in parallel.
      },
      "threads": 42, # Optional. The number of threads to use for parallel import.
    },
    "uri": "A String", # Path to the import file in Cloud Storage, in the form `gs://bucketName/fileName`. Compressed gzip files (.gz) are supported when `fileType` is `SQL`. The instance must have write permissions to the bucket and read access to the file.
  },
  "insertTime": "A String", # The time this operation was enqueued in UTC timezone in [RFC 3339](https://tools.ietf.org/html/rfc3339) format, for example `2012-11-15T16:19:00.094Z`.
  "kind": "A String", # This is always `sql#operation`.
  "name": "A String", # An identifier that uniquely identifies the operation. You can use this identifier to retrieve the Operations resource that has information about the operation.
  "operationType": "A String", # The type of the operation. Valid values are: * `CREATE` * `DELETE` * `UPDATE` * `RESTART` * `IMPORT` * `EXPORT` * `BACKUP_VOLUME` * `RESTORE_VOLUME` * `CREATE_USER` * `DELETE_USER` * `CREATE_DATABASE` * `DELETE_DATABASE`
  "selfLink": "A String", # The URI of this resource.
  "startTime": "A String", # The time this operation actually started in UTC timezone in [RFC 3339](https://tools.ietf.org/html/rfc3339) format, for example `2012-11-15T16:19:00.094Z`.
  "status": "A String", # The status of an operation.
  "targetId": "A String", # Name of the database instance related to this operation.
  "targetLink": "A String",
  "targetProject": "A String", # The project ID of the target instance related to this operation.
  "user": "A String", # The email address of the user who initiated this operation.
}
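Restarts are asynchronous: the method returns an Operation resource whose `status` must be polled until it reads `DONE`. A minimal polling helper is sketched below; it assumes the caller supplies a zero-argument callable that fetches the latest Operation dict (for example, a wrapper around an `operations().get(...).execute()` call, not shown here because it requires credentials):

```python
import time

def wait_for_operation(fetch, timeout=300, interval=5):
    """Poll `fetch()` until the Operation's status is DONE, then return it.

    `fetch` is a zero-arg callable returning the latest Operation dict.
    """
    deadline = time.monotonic() + timeout
    while True:
        op = fetch()
        if op.get('status') == 'DONE':
            return op
        if time.monotonic() >= deadline:
            raise TimeoutError('operation %s did not finish' % op.get('name'))
        time.sleep(interval)

# Offline example with canned responses standing in for API calls:
responses = iter([{'status': 'RUNNING'}, {'status': 'DONE', 'name': 'op-1'}])
done = wait_for_operation(lambda: next(responses), timeout=5, interval=0)
print(done['status'])  # DONE
```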
restoreBackup(project, instance, body=None, x__xgafv=None)
Restores a backup of a Cloud SQL instance. Using this operation might cause your instance to restart.

Args:
  project: string, Project ID of the project that contains the instance. (required)
  instance: string, Cloud SQL instance ID. This does not include the project ID. (required)
  body: object, The request body.
    The object takes the form of:

{ # Database instance restore backup request.
  "restoreBackupContext": { # Database instance restore from backup context. Backup context contains source instance id and project id. # Parameters required to perform the restore backup operation.
    "backupRunId": "A String", # The ID of the backup run to restore from.
    "instanceId": "A String", # The ID of the instance that the backup was taken from.
    "kind": "A String", # This is always `sql#restoreBackupContext`.
    "project": "A String", # The full project ID of the source instance.
  },
}

  x__xgafv: string, V1 error format.
    Allowed values
      1 - v1 error format
      2 - v2 error format

Returns:
  An object of the form:

    { # An Operation resource. For successful operations that return an Operation resource, only the fields relevant to the operation are populated in the resource.
  "acquireSsrsLeaseContext": { # Acquire SSRS lease context. # The context for acquire SSRS lease operation, if applicable.
    "duration": "A String", # Lease duration needed for the SSRS setup.
    "reportDatabase": "A String", # The report database to be used for the SSRS setup.
    "serviceLogin": "A String", # The username to be used as the service login to connect to the report database for SSRS setup.
    "setupLogin": "A String", # The username to be used as the setup login to connect to the database server for SSRS setup.
  },
  "apiWarning": { # An Admin API warning message. # An Admin API warning message.
    "code": "A String", # Code to uniquely identify the warning type.
    "message": "A String", # The warning message.
    "region": "A String", # The region name for REGION_UNREACHABLE warning.
  },
  "backupContext": { # Backup context. # The context for backup operation, if applicable.
    "backupId": "A String", # The identifier of the backup.
    "kind": "A String", # This is always `sql#backupContext`.
  },
  "endTime": "A String", # The time this operation finished in UTC timezone in [RFC 3339](https://tools.ietf.org/html/rfc3339) format, for example `2012-11-15T16:19:00.094Z`.
  "error": { # Database instance operation errors list wrapper. # If errors occurred during processing of this operation, this field will be populated.
    "errors": [ # The list of errors encountered while processing this operation.
      { # Database instance operation error.
        "code": "A String", # Identifies the specific error that occurred.
        "kind": "A String", # This is always `sql#operationError`.
        "message": "A String", # Additional information about the error encountered.
      },
    ],
    "kind": "A String", # This is always `sql#operationErrors`.
  },
  "exportContext": { # Database instance export context. # The context for export operation, if applicable.
    "bakExportOptions": { # Options for exporting BAK files (SQL Server-only)
      "bakType": "A String", # Type of this bak file will be export, FULL or DIFF, SQL Server only
      "copyOnly": True or False, # Deprecated: copy_only is deprecated. Use differential_base instead
      "differentialBase": True or False, # Whether or not the backup can be used as a differential base copy_only backup can not be served as differential base
      "exportLogEndTime": "A String", # Optional. The end timestamp when transaction log will be included in the export operation. [RFC 3339](https://tools.ietf.org/html/rfc3339) format (for example, `2023-10-01T16:19:00.094`) in UTC. When omitted, all available logs until current time will be included. Only applied to Cloud SQL for SQL Server.
      "exportLogStartTime": "A String", # Optional. The begin timestamp when transaction log will be included in the export operation. [RFC 3339](https://tools.ietf.org/html/rfc3339) format (for example, `2023-10-01T16:19:00.094`) in UTC. When omitted, all available logs from the beginning of retention period will be included. Only applied to Cloud SQL for SQL Server.
      "stripeCount": 42, # Option for specifying how many stripes to use for the export. If blank, and the value of the striped field is true, the number of stripes is automatically chosen.
      "striped": True or False, # Whether or not the export should be striped.
    },
    "csvExportOptions": { # Options for exporting data as CSV. `MySQL` and `PostgreSQL` instances only.
      "escapeCharacter": "A String", # Specifies the character that should appear before a data character that needs to be escaped.
      "fieldsTerminatedBy": "A String", # Specifies the character that separates columns within each row (line) of the file.
      "linesTerminatedBy": "A String", # This is used to separate lines. If a line does not contain all fields, the rest of the columns are set to their default values.
      "quoteCharacter": "A String", # Specifies the quoting character to be used when a data value is quoted.
      "selectQuery": "A String", # The select query used to extract the data.
    },
    "databases": [ # Databases to be exported. `MySQL instances:` If `fileType` is `SQL` and no database is specified, all databases are exported, except for the `mysql` system database. If `fileType` is `CSV`, you can specify one database, either by using this property or by using the `csvExportOptions.selectQuery` property, which takes precedence over this property. `PostgreSQL instances:` You must specify one database to be exported. If `fileType` is `CSV`, this database must match the one specified in the `csvExportOptions.selectQuery` property. `SQL Server instances:` You must specify one database to be exported, and the `fileType` must be `BAK`.
      "A String",
    ],
    "fileType": "A String", # The file type for the specified uri.
    "kind": "A String", # This is always `sql#exportContext`.
    "offload": True or False, # Option for export offload.
    "sqlExportOptions": { # Options for exporting data as SQL statements.
      "mysqlExportOptions": { # Options for exporting from MySQL.
        "masterData": 42, # Option to include SQL statement required to set up replication. If set to `1`, the dump file includes a CHANGE MASTER TO statement with the binary log coordinates, and --set-gtid-purged is set to ON. If set to `2`, the CHANGE MASTER TO statement is written as a SQL comment and has no effect. If set to any value other than `1`, --set-gtid-purged is set to OFF.
      },
      "parallel": True or False, # Optional. Whether or not the export should be parallel.
      "postgresExportOptions": { # Options for exporting from a Cloud SQL for PostgreSQL instance.
        "clean": True or False, # Optional. Use this option to include DROP SQL statements. These statements are used to delete database objects before running the import operation.
        "ifExists": True or False, # Optional. Option to include an IF EXISTS SQL statement with each DROP statement produced by clean.
      },
      "schemaOnly": True or False, # Export only schemas.
      "tables": [ # Tables to export, or that were exported, from the specified database. If you specify tables, specify one and only one database. For PostgreSQL instances, you can specify only one table.
        "A String",
      ],
      "threads": 42, # Optional. The number of threads to use for parallel export.
    },
    "uri": "A String", # The path to the file in Google Cloud Storage where the export will be stored. The URI is in the form `gs://bucketName/fileName`. If the file already exists, the request succeeds, but the operation fails. If `fileType` is `SQL` and the filename ends with .gz, the contents are compressed.
  },
  "importContext": { # Database instance import context. # The context for import operation, if applicable.
    "bakImportOptions": { # Import parameters specific to SQL Server .BAK files
      "bakType": "A String", # Type of the bak content, FULL or DIFF.
      "encryptionOptions": {
        "certPath": "A String", # Path to the Certificate (.cer) in Cloud Storage, in the form `gs://bucketName/fileName`. The instance must have write permissions to the bucket and read access to the file.
        "pvkPassword": "A String", # Password that encrypts the private key.
        "pvkPath": "A String", # Path to the Certificate Private Key (.pvk) in Cloud Storage, in the form `gs://bucketName/fileName`. The instance must have write permissions to the bucket and read access to the file.
      },
      "noRecovery": True or False, # Whether or not the backup import restores the database with the NORECOVERY option. Applies only to Cloud SQL for SQL Server.
      "recoveryOnly": True or False, # Whether or not the import request only brings the database online, without restoring .BAK content. Only one of `no_recovery` and `recovery_only` can be true; otherwise the request returns an error. Applies only to Cloud SQL for SQL Server.
      "stopAt": "A String", # Optional. The timestamp when the import should stop. This timestamp is in the [RFC 3339](https://tools.ietf.org/html/rfc3339) format (for example, `2023-10-01T16:19:00.094`). This field is equivalent to the STOPAT keyword and applies to Cloud SQL for SQL Server only.
      "stopAtMark": "A String", # Optional. The marked transaction where the import should stop. This field is equivalent to the STOPATMARK keyword and applies to Cloud SQL for SQL Server only.
      "striped": True or False, # Whether or not the backup set being restored is striped. Applies only to Cloud SQL for SQL Server.
    },
    "csvImportOptions": { # Options for importing data as CSV.
      "columns": [ # The columns to which CSV data is imported. If not specified, all columns of the database table are loaded with CSV data.
        "A String",
      ],
      "escapeCharacter": "A String", # Specifies the character that should appear before a data character that needs to be escaped.
      "fieldsTerminatedBy": "A String", # Specifies the character that separates columns within each row (line) of the file.
      "linesTerminatedBy": "A String", # This is used to separate lines. If a line does not contain all fields, the rest of the columns are set to their default values.
      "quoteCharacter": "A String", # Specifies the quoting character to be used when a data value is quoted.
      "table": "A String", # The table to which CSV data is imported.
    },
    "database": "A String", # The target database for the import. If `fileType` is `SQL`, this field is required only if the import file does not specify a database, and is overridden by any database specification in the import file. If `fileType` is `CSV`, one database must be specified.
    "fileType": "A String", # The file type for the specified uri. * `SQL`: The file contains SQL statements. * `CSV`: The file contains CSV data. * `BAK`: The file contains backup data for a SQL Server instance.
    "importUser": "A String", # The PostgreSQL user for this import operation. PostgreSQL instances only.
    "kind": "A String", # This is always `sql#importContext`.
    "sqlImportOptions": { # Optional. Options for importing data from SQL statements.
      "parallel": True or False, # Optional. Whether or not the import should be parallel.
      "postgresImportOptions": { # Optional. Options for importing from a Cloud SQL for PostgreSQL instance.
        "clean": True or False, # Optional. The --clean flag for the pg_restore utility. This flag applies only if you enabled Cloud SQL to import files in parallel.
        "ifExists": True or False, # Optional. The --if-exists flag for the pg_restore utility. This flag applies only if you enabled Cloud SQL to import files in parallel.
      },
      "threads": 42, # Optional. The number of threads to use for parallel import.
    },
    "uri": "A String", # Path to the import file in Cloud Storage, in the form `gs://bucketName/fileName`. Compressed gzip files (.gz) are supported when `fileType` is `SQL`. The instance must have write permissions to the bucket and read access to the file.
  },
  "insertTime": "A String", # The time this operation was enqueued in UTC timezone in [RFC 3339](https://tools.ietf.org/html/rfc3339) format, for example `2012-11-15T16:19:00.094Z`.
  "kind": "A String", # This is always `sql#operation`.
  "name": "A String", # An identifier that uniquely identifies the operation. You can use this identifier to retrieve the Operations resource that has information about the operation.
  "operationType": "A String", # The type of the operation. Valid values are: * `CREATE` * `DELETE` * `UPDATE` * `RESTART` * `IMPORT` * `EXPORT` * `BACKUP_VOLUME` * `RESTORE_VOLUME` * `CREATE_USER` * `DELETE_USER` * `CREATE_DATABASE` * `DELETE_DATABASE`
  "selfLink": "A String", # The URI of this resource.
  "startTime": "A String", # The time this operation actually started in UTC timezone in [RFC 3339](https://tools.ietf.org/html/rfc3339) format, for example `2012-11-15T16:19:00.094Z`.
  "status": "A String", # The status of an operation.
  "targetId": "A String", # Name of the database instance related to this operation.
  "targetLink": "A String",
  "targetProject": "A String", # The project ID of the target instance related to this operation.
  "user": "A String", # The email address of the user who initiated this operation.
}
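
The `importContext` schema above carries several SQL Server-specific BAK options (encryption, point-in-time `stopAt`). As a minimal sketch, the request body for an encrypted .BAK import can be assembled as a plain dict; the bucket paths, database name, and helper name below are placeholders, not part of the API itself:

```python
# Hedged sketch: build an importContext body for an encrypted SQL Server
# .BAK import, following the importContext schema documented above.
# All URIs and names are illustrative placeholders.

def make_bak_import_context(database, bak_uri, cert_uri, pvk_uri,
                            pvk_password, stop_at=None):
    """Return a request body suitable for instances().import_(body=...)."""
    bak_options = {
        "bakType": "FULL",  # or "DIFF" for a differential backup
        "encryptionOptions": {
            "certPath": cert_uri,        # .cer file in Cloud Storage
            "pvkPath": pvk_uri,          # .pvk private key in Cloud Storage
            "pvkPassword": pvk_password, # password protecting the .pvk
        },
    }
    if stop_at is not None:
        # RFC 3339 timestamp; equivalent to the T-SQL STOPAT keyword.
        bak_options["stopAt"] = stop_at
    return {
        "importContext": {
            "kind": "sql#importContext",
            "fileType": "BAK",
            "database": database,
            "uri": bak_uri,
            "bakImportOptions": bak_options,
        }
    }
```

The dict mirrors the schema field-for-field, so it can be passed directly as the `body` of an import request.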
rotateServerCa(project, instance, body=None, x__xgafv=None)
Rotates the server certificate to one signed by the Certificate Authority (CA) version previously added with the addServerCa method. For instances that have enabled Certificate Authority Service (CAS) based server CA, use RotateServerCertificate to rotate the server certificate.

Args:
  project: string, Project ID of the project that contains the instance. (required)
  instance: string, Cloud SQL instance ID. This does not include the project ID. (required)
  body: object, The request body.
    The object takes the form of:

{ # Rotate Server CA request.
  "rotateServerCaContext": { # Instance rotate server CA context. # Contains details about the rotate server CA operation.
    "kind": "A String", # This is always `sql#rotateServerCaContext`.
    "nextVersion": "A String", # The fingerprint of the next version to be rotated to. If left unspecified, the instance rotates to the most recently added server CA version.
  },
}

  x__xgafv: string, V1 error format.
    Allowed values
      1 - v1 error format
      2 - v2 error format

Returns:
  An object of the form:

    { # An Operation resource. For successful operations that return an Operation resource, only the fields relevant to the operation are populated in the resource.
  "acquireSsrsLeaseContext": { # Acquire SSRS lease context. # The context for acquire SSRS lease operation, if applicable.
    "duration": "A String", # Lease duration needed for the SSRS setup.
    "reportDatabase": "A String", # The report database to be used for the SSRS setup.
    "serviceLogin": "A String", # The username to be used as the service login to connect to the report database for SSRS setup.
    "setupLogin": "A String", # The username to be used as the setup login to connect to the database server for SSRS setup.
  },
  "apiWarning": { # An Admin API warning message. # An Admin API warning message.
    "code": "A String", # Code to uniquely identify the warning type.
    "message": "A String", # The warning message.
    "region": "A String", # The region name for REGION_UNREACHABLE warning.
  },
  "backupContext": { # Backup context. # The context for backup operation, if applicable.
    "backupId": "A String", # The identifier of the backup.
    "kind": "A String", # This is always `sql#backupContext`.
  },
  "endTime": "A String", # The time this operation finished in UTC timezone in [RFC 3339](https://tools.ietf.org/html/rfc3339) format, for example `2012-11-15T16:19:00.094Z`.
  "error": { # Database instance operation errors list wrapper. # If errors occurred during processing of this operation, this field will be populated.
    "errors": [ # The list of errors encountered while processing this operation.
      { # Database instance operation error.
        "code": "A String", # Identifies the specific error that occurred.
        "kind": "A String", # This is always `sql#operationError`.
        "message": "A String", # Additional information about the error encountered.
      },
    ],
    "kind": "A String", # This is always `sql#operationErrors`.
  },
  "exportContext": { # Database instance export context. # The context for export operation, if applicable.
    "bakExportOptions": { # Options for exporting BAK files (SQL Server-only)
      "bakType": "A String", # The type of BAK file that the export produces: `FULL` or `DIFF`. SQL Server only.
      "copyOnly": True or False, # Deprecated: copy_only is deprecated. Use differential_base instead.
      "differentialBase": True or False, # Whether or not the backup can be used as a differential base. A copy-only backup cannot serve as a differential base.
      "exportLogEndTime": "A String", # Optional. The end timestamp for transaction logs included in the export operation, in [RFC 3339](https://tools.ietf.org/html/rfc3339) format (for example, `2023-10-01T16:19:00.094`) in UTC. When omitted, all available logs up to the current time are included. Applies only to Cloud SQL for SQL Server.
      "exportLogStartTime": "A String", # Optional. The begin timestamp for transaction logs included in the export operation, in [RFC 3339](https://tools.ietf.org/html/rfc3339) format (for example, `2023-10-01T16:19:00.094`) in UTC. When omitted, all available logs from the beginning of the retention period are included. Applies only to Cloud SQL for SQL Server.
      "stripeCount": 42, # Option for specifying how many stripes to use for the export. If blank, and the value of the striped field is true, the number of stripes is automatically chosen.
      "striped": True or False, # Whether or not the export should be striped.
    },
    "csvExportOptions": { # Options for exporting data as CSV. `MySQL` and `PostgreSQL` instances only.
      "escapeCharacter": "A String", # Specifies the character that should appear before a data character that needs to be escaped.
      "fieldsTerminatedBy": "A String", # Specifies the character that separates columns within each row (line) of the file.
      "linesTerminatedBy": "A String", # This is used to separate lines. If a line does not contain all fields, the rest of the columns are set to their default values.
      "quoteCharacter": "A String", # Specifies the quoting character to be used when a data value is quoted.
      "selectQuery": "A String", # The select query used to extract the data.
    },
    "databases": [ # Databases to be exported. `MySQL instances:` If `fileType` is `SQL` and no database is specified, all databases are exported, except for the `mysql` system database. If `fileType` is `CSV`, you can specify one database, either by using this property or by using the `csvExportOptions.selectQuery` property, which takes precedence over this property. `PostgreSQL instances:` You must specify one database to be exported. If `fileType` is `CSV`, this database must match the one specified in the `csvExportOptions.selectQuery` property. `SQL Server instances:` You must specify one database to be exported, and the `fileType` must be `BAK`.
      "A String",
    ],
    "fileType": "A String", # The file type for the specified uri.
    "kind": "A String", # This is always `sql#exportContext`.
    "offload": True or False, # Option for export offload.
    "sqlExportOptions": { # Options for exporting data as SQL statements.
      "mysqlExportOptions": { # Options for exporting from MySQL.
        "masterData": 42, # Option to include SQL statement required to set up replication. If set to `1`, the dump file includes a CHANGE MASTER TO statement with the binary log coordinates, and --set-gtid-purged is set to ON. If set to `2`, the CHANGE MASTER TO statement is written as a SQL comment and has no effect. If set to any value other than `1`, --set-gtid-purged is set to OFF.
      },
      "parallel": True or False, # Optional. Whether or not the export should be parallel.
      "postgresExportOptions": { # Options for exporting from a Cloud SQL for PostgreSQL instance.
        "clean": True or False, # Optional. Use this option to include DROP SQL statements. These statements are used to delete database objects before running the import operation.
        "ifExists": True or False, # Optional. Option to include an IF EXISTS SQL statement with each DROP statement produced by clean.
      },
      "schemaOnly": True or False, # Export only schemas.
      "tables": [ # Tables to export, or that were exported, from the specified database. If you specify tables, specify one and only one database. For PostgreSQL instances, you can specify only one table.
        "A String",
      ],
      "threads": 42, # Optional. The number of threads to use for parallel export.
    },
    "uri": "A String", # The path to the file in Google Cloud Storage where the export will be stored. The URI is in the form `gs://bucketName/fileName`. If the file already exists, the request succeeds, but the operation fails. If `fileType` is `SQL` and the filename ends with .gz, the contents are compressed.
  },
  "importContext": { # Database instance import context. # The context for import operation, if applicable.
    "bakImportOptions": { # Import parameters specific to SQL Server .BAK files
      "bakType": "A String", # Type of the bak content, FULL or DIFF.
      "encryptionOptions": {
        "certPath": "A String", # Path to the Certificate (.cer) in Cloud Storage, in the form `gs://bucketName/fileName`. The instance must have write permissions to the bucket and read access to the file.
        "pvkPassword": "A String", # Password that encrypts the private key.
        "pvkPath": "A String", # Path to the Certificate Private Key (.pvk) in Cloud Storage, in the form `gs://bucketName/fileName`. The instance must have write permissions to the bucket and read access to the file.
      },
      "noRecovery": True or False, # Whether or not the backup import restores the database with the NORECOVERY option. Applies only to Cloud SQL for SQL Server.
      "recoveryOnly": True or False, # Whether or not the import request only brings the database online, without restoring .BAK content. Only one of `no_recovery` and `recovery_only` can be true; otherwise the request returns an error. Applies only to Cloud SQL for SQL Server.
      "stopAt": "A String", # Optional. The timestamp when the import should stop. This timestamp is in the [RFC 3339](https://tools.ietf.org/html/rfc3339) format (for example, `2023-10-01T16:19:00.094`). This field is equivalent to the STOPAT keyword and applies to Cloud SQL for SQL Server only.
      "stopAtMark": "A String", # Optional. The marked transaction where the import should stop. This field is equivalent to the STOPATMARK keyword and applies to Cloud SQL for SQL Server only.
      "striped": True or False, # Whether or not the backup set being restored is striped. Applies only to Cloud SQL for SQL Server.
    },
    "csvImportOptions": { # Options for importing data as CSV.
      "columns": [ # The columns to which CSV data is imported. If not specified, all columns of the database table are loaded with CSV data.
        "A String",
      ],
      "escapeCharacter": "A String", # Specifies the character that should appear before a data character that needs to be escaped.
      "fieldsTerminatedBy": "A String", # Specifies the character that separates columns within each row (line) of the file.
      "linesTerminatedBy": "A String", # This is used to separate lines. If a line does not contain all fields, the rest of the columns are set to their default values.
      "quoteCharacter": "A String", # Specifies the quoting character to be used when a data value is quoted.
      "table": "A String", # The table to which CSV data is imported.
    },
    "database": "A String", # The target database for the import. If `fileType` is `SQL`, this field is required only if the import file does not specify a database, and is overridden by any database specification in the import file. If `fileType` is `CSV`, one database must be specified.
    "fileType": "A String", # The file type for the specified uri. * `SQL`: The file contains SQL statements. * `CSV`: The file contains CSV data. * `BAK`: The file contains backup data for a SQL Server instance.
    "importUser": "A String", # The PostgreSQL user for this import operation. PostgreSQL instances only.
    "kind": "A String", # This is always `sql#importContext`.
    "sqlImportOptions": { # Optional. Options for importing data from SQL statements.
      "parallel": True or False, # Optional. Whether or not the import should be parallel.
      "postgresImportOptions": { # Optional. Options for importing from a Cloud SQL for PostgreSQL instance.
        "clean": True or False, # Optional. The --clean flag for the pg_restore utility. This flag applies only if you enabled Cloud SQL to import files in parallel.
        "ifExists": True or False, # Optional. The --if-exists flag for the pg_restore utility. This flag applies only if you enabled Cloud SQL to import files in parallel.
      },
      "threads": 42, # Optional. The number of threads to use for parallel import.
    },
    "uri": "A String", # Path to the import file in Cloud Storage, in the form `gs://bucketName/fileName`. Compressed gzip files (.gz) are supported when `fileType` is `SQL`. The instance must have write permissions to the bucket and read access to the file.
  },
  "insertTime": "A String", # The time this operation was enqueued in UTC timezone in [RFC 3339](https://tools.ietf.org/html/rfc3339) format, for example `2012-11-15T16:19:00.094Z`.
  "kind": "A String", # This is always `sql#operation`.
  "name": "A String", # An identifier that uniquely identifies the operation. You can use this identifier to retrieve the Operations resource that has information about the operation.
  "operationType": "A String", # The type of the operation. Valid values are: * `CREATE` * `DELETE` * `UPDATE` * `RESTART` * `IMPORT` * `EXPORT` * `BACKUP_VOLUME` * `RESTORE_VOLUME` * `CREATE_USER` * `DELETE_USER` * `CREATE_DATABASE` * `DELETE_DATABASE`
  "selfLink": "A String", # The URI of this resource.
  "startTime": "A String", # The time this operation actually started in UTC timezone in [RFC 3339](https://tools.ietf.org/html/rfc3339) format, for example `2012-11-15T16:19:00.094Z`.
  "status": "A String", # The status of an operation.
  "targetId": "A String", # Name of the database instance related to this operation.
  "targetLink": "A String",
  "targetProject": "A String", # The project ID of the target instance related to this operation.
  "user": "A String", # The email address of the user who initiated this operation.
}
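
As a minimal sketch, a rotateServerCa call can be issued with the google-api-python-client library. This assumes the library is installed and Application Default Credentials are configured; the project and instance names are placeholders, and the body-builder helper is illustrative, not part of the API:

```python
# Hedged sketch: rotate a Cloud SQL instance's server CA to a previously
# added version, following the RotateServerCa request body shown above.

def make_rotate_server_ca_body(next_version=None):
    """Build the rotateServerCa request body.

    If next_version (a CA fingerprint) is omitted, Cloud SQL rotates to
    the most recently added server CA version.
    """
    ctx = {"kind": "sql#rotateServerCaContext"}
    if next_version is not None:
        ctx["nextVersion"] = next_version
    return {"rotateServerCaContext": ctx}


def rotate_server_ca(project, instance, next_version=None):
    # Imported here so the helper above works without the client library.
    from googleapiclient.discovery import build

    service = build("sqladmin", "v1")
    return (
        service.instances()
        .rotateServerCa(
            project=project,
            instance=instance,
            body=make_rotate_server_ca_body(next_version),
        )
        .execute()
    )
```

The call returns an Operation resource, whose `name` can be polled via the operations collection until `status` is `DONE`.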
startReplica(project, instance, x__xgafv=None)
Starts the replication in the read replica instance.

Args:
  project: string, ID of the project that contains the read replica. (required)
  instance: string, Cloud SQL read replica instance name. (required)
  x__xgafv: string, V1 error format.
    Allowed values
      1 - v1 error format
      2 - v2 error format

Returns:
  An object of the form:

    { # An Operation resource. For successful operations that return an Operation resource, only the fields relevant to the operation are populated in the resource.
  "acquireSsrsLeaseContext": { # Acquire SSRS lease context. # The context for acquire SSRS lease operation, if applicable.
    "duration": "A String", # Lease duration needed for the SSRS setup.
    "reportDatabase": "A String", # The report database to be used for the SSRS setup.
    "serviceLogin": "A String", # The username to be used as the service login to connect to the report database for SSRS setup.
    "setupLogin": "A String", # The username to be used as the setup login to connect to the database server for SSRS setup.
  },
  "apiWarning": { # An Admin API warning message. # An Admin API warning message.
    "code": "A String", # Code to uniquely identify the warning type.
    "message": "A String", # The warning message.
    "region": "A String", # The region name for REGION_UNREACHABLE warning.
  },
  "backupContext": { # Backup context. # The context for backup operation, if applicable.
    "backupId": "A String", # The identifier of the backup.
    "kind": "A String", # This is always `sql#backupContext`.
  },
  "endTime": "A String", # The time this operation finished in UTC timezone in [RFC 3339](https://tools.ietf.org/html/rfc3339) format, for example `2012-11-15T16:19:00.094Z`.
  "error": { # Database instance operation errors list wrapper. # If errors occurred during processing of this operation, this field will be populated.
    "errors": [ # The list of errors encountered while processing this operation.
      { # Database instance operation error.
        "code": "A String", # Identifies the specific error that occurred.
        "kind": "A String", # This is always `sql#operationError`.
        "message": "A String", # Additional information about the error encountered.
      },
    ],
    "kind": "A String", # This is always `sql#operationErrors`.
  },
  "exportContext": { # Database instance export context. # The context for export operation, if applicable.
    "bakExportOptions": { # Options for exporting BAK files (SQL Server-only)
      "bakType": "A String", # The type of BAK file that the export produces: `FULL` or `DIFF`. SQL Server only.
      "copyOnly": True or False, # Deprecated: copy_only is deprecated. Use differential_base instead.
      "differentialBase": True or False, # Whether or not the backup can be used as a differential base. A copy-only backup cannot serve as a differential base.
      "exportLogEndTime": "A String", # Optional. The end timestamp for transaction logs included in the export operation, in [RFC 3339](https://tools.ietf.org/html/rfc3339) format (for example, `2023-10-01T16:19:00.094`) in UTC. When omitted, all available logs up to the current time are included. Applies only to Cloud SQL for SQL Server.
      "exportLogStartTime": "A String", # Optional. The begin timestamp for transaction logs included in the export operation, in [RFC 3339](https://tools.ietf.org/html/rfc3339) format (for example, `2023-10-01T16:19:00.094`) in UTC. When omitted, all available logs from the beginning of the retention period are included. Applies only to Cloud SQL for SQL Server.
      "stripeCount": 42, # Option for specifying how many stripes to use for the export. If blank, and the value of the striped field is true, the number of stripes is automatically chosen.
      "striped": True or False, # Whether or not the export should be striped.
    },
    "csvExportOptions": { # Options for exporting data as CSV. `MySQL` and `PostgreSQL` instances only.
      "escapeCharacter": "A String", # Specifies the character that should appear before a data character that needs to be escaped.
      "fieldsTerminatedBy": "A String", # Specifies the character that separates columns within each row (line) of the file.
      "linesTerminatedBy": "A String", # This is used to separate lines. If a line does not contain all fields, the rest of the columns are set to their default values.
      "quoteCharacter": "A String", # Specifies the quoting character to be used when a data value is quoted.
      "selectQuery": "A String", # The select query used to extract the data.
    },
    "databases": [ # Databases to be exported. `MySQL instances:` If `fileType` is `SQL` and no database is specified, all databases are exported, except for the `mysql` system database. If `fileType` is `CSV`, you can specify one database, either by using this property or by using the `csvExportOptions.selectQuery` property, which takes precedence over this property. `PostgreSQL instances:` You must specify one database to be exported. If `fileType` is `CSV`, this database must match the one specified in the `csvExportOptions.selectQuery` property. `SQL Server instances:` You must specify one database to be exported, and the `fileType` must be `BAK`.
      "A String",
    ],
    "fileType": "A String", # The file type for the specified uri.
    "kind": "A String", # This is always `sql#exportContext`.
    "offload": True or False, # Option for export offload.
    "sqlExportOptions": { # Options for exporting data as SQL statements.
      "mysqlExportOptions": { # Options for exporting from MySQL.
        "masterData": 42, # Option to include SQL statement required to set up replication. If set to `1`, the dump file includes a CHANGE MASTER TO statement with the binary log coordinates, and --set-gtid-purged is set to ON. If set to `2`, the CHANGE MASTER TO statement is written as a SQL comment and has no effect. If set to any value other than `1`, --set-gtid-purged is set to OFF.
      },
      "parallel": True or False, # Optional. Whether or not the export should be parallel.
      "postgresExportOptions": { # Options for exporting from a Cloud SQL for PostgreSQL instance.
        "clean": True or False, # Optional. Use this option to include DROP SQL statements. These statements are used to delete database objects before running the import operation.
        "ifExists": True or False, # Optional. Option to include an IF EXISTS SQL statement with each DROP statement produced by clean.
      },
      "schemaOnly": True or False, # Export only schemas.
      "tables": [ # Tables to export, or that were exported, from the specified database. If you specify tables, specify one and only one database. For PostgreSQL instances, you can specify only one table.
        "A String",
      ],
      "threads": 42, # Optional. The number of threads to use for parallel export.
    },
    "uri": "A String", # The path to the file in Google Cloud Storage where the export will be stored. The URI is in the form `gs://bucketName/fileName`. If the file already exists, the request succeeds, but the operation fails. If `fileType` is `SQL` and the filename ends with .gz, the contents are compressed.
  },
  "importContext": { # Database instance import context. # The context for import operation, if applicable.
    "bakImportOptions": { # Import parameters specific to SQL Server .BAK files
      "bakType": "A String", # Type of the bak content, FULL or DIFF.
      "encryptionOptions": {
        "certPath": "A String", # Path to the Certificate (.cer) in Cloud Storage, in the form `gs://bucketName/fileName`. The instance must have write permissions to the bucket and read access to the file.
        "pvkPassword": "A String", # Password that encrypts the private key.
        "pvkPath": "A String", # Path to the Certificate Private Key (.pvk) in Cloud Storage, in the form `gs://bucketName/fileName`. The instance must have write permissions to the bucket and read access to the file.
      },
      "noRecovery": True or False, # Whether or not the backup import restores the database with the NORECOVERY option. Applies only to Cloud SQL for SQL Server.
      "recoveryOnly": True or False, # Whether or not the import request only brings the database online, without restoring .BAK content. Only one of `no_recovery` and `recovery_only` can be true; otherwise the request returns an error. Applies only to Cloud SQL for SQL Server.
      "stopAt": "A String", # Optional. The timestamp when the import should stop. This timestamp is in the [RFC 3339](https://tools.ietf.org/html/rfc3339) format (for example, `2023-10-01T16:19:00.094`). This field is equivalent to the STOPAT keyword and applies to Cloud SQL for SQL Server only.
      "stopAtMark": "A String", # Optional. The marked transaction where the import should stop. This field is equivalent to the STOPATMARK keyword and applies to Cloud SQL for SQL Server only.
      "striped": True or False, # Whether or not the backup set being restored is striped. Applies only to Cloud SQL for SQL Server.
    },
    "csvImportOptions": { # Options for importing data as CSV.
      "columns": [ # The columns to which CSV data is imported. If not specified, all columns of the database table are loaded with CSV data.
        "A String",
      ],
      "escapeCharacter": "A String", # Specifies the character that should appear before a data character that needs to be escaped.
      "fieldsTerminatedBy": "A String", # Specifies the character that separates columns within each row (line) of the file.
      "linesTerminatedBy": "A String", # This is used to separate lines. If a line does not contain all fields, the rest of the columns are set to their default values.
      "quoteCharacter": "A String", # Specifies the quoting character to be used when a data value is quoted.
      "table": "A String", # The table to which CSV data is imported.
    },
    "database": "A String", # The target database for the import. If `fileType` is `SQL`, this field is required only if the import file does not specify a database, and is overridden by any database specification in the import file. If `fileType` is `CSV`, one database must be specified.
    "fileType": "A String", # The file type for the specified uri. * `SQL`: The file contains SQL statements. * `CSV`: The file contains CSV data. * `BAK`: The file contains backup data for a SQL Server instance.
    "importUser": "A String", # The PostgreSQL user for this import operation. PostgreSQL instances only.
    "kind": "A String", # This is always `sql#importContext`.
    "sqlImportOptions": { # Optional. Options for importing data from SQL statements.
      "parallel": True or False, # Optional. Whether or not the import should be parallel.
      "postgresImportOptions": { # Optional. Options for importing from a Cloud SQL for PostgreSQL instance.
        "clean": True or False, # Optional. The --clean flag for the pg_restore utility. This flag applies only if you enabled Cloud SQL to import files in parallel.
        "ifExists": True or False, # Optional. The --if-exists flag for the pg_restore utility. This flag applies only if you enabled Cloud SQL to import files in parallel.
      },
      "threads": 42, # Optional. The number of threads to use for parallel import.
    },
    "uri": "A String", # Path to the import file in Cloud Storage, in the form `gs://bucketName/fileName`. Compressed gzip files (.gz) are supported when `fileType` is `SQL`. The instance must have write permissions to the bucket and read access to the file.
  },
  "insertTime": "A String", # The time this operation was enqueued in UTC timezone in [RFC 3339](https://tools.ietf.org/html/rfc3339) format, for example `2012-11-15T16:19:00.094Z`.
  "kind": "A String", # This is always `sql#operation`.
  "name": "A String", # An identifier that uniquely identifies the operation. You can use this identifier to retrieve the Operations resource that has information about the operation.
  "operationType": "A String", # The type of the operation. Valid values are: * `CREATE` * `DELETE` * `UPDATE` * `RESTART` * `IMPORT` * `EXPORT` * `BACKUP_VOLUME` * `RESTORE_VOLUME` * `CREATE_USER` * `DELETE_USER` * `CREATE_DATABASE` * `DELETE_DATABASE`
  "selfLink": "A String", # The URI of this resource.
  "startTime": "A String", # The time this operation actually started in UTC timezone in [RFC 3339](https://tools.ietf.org/html/rfc3339) format, for example `2012-11-15T16:19:00.094Z`.
  "status": "A String", # The status of an operation.
  "targetId": "A String", # Name of the database instance related to this operation.
  "targetLink": "A String",
  "targetProject": "A String", # The project ID of the target instance related to this operation.
  "user": "A String", # The email address of the user who initiated this operation.
}
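
Since startReplica (like the other methods here) returns an Operation resource with a `name` and `status`, a caller typically polls the operations collection until the work finishes. The sketch below assumes google-api-python-client and Application Default Credentials; the project and replica names are placeholders:

```python
# Hedged sketch: start replication on a read replica, then poll the
# returned Operation until its status reaches DONE.
import time


def wait_for_operation(service, project, operation_name, poll_seconds=5):
    """Poll operations().get() until the Operation reports status DONE."""
    while True:
        op = (
            service.operations()
            .get(project=project, operation=operation_name)
            .execute()
        )
        if op.get("status") == "DONE":
            # A DONE operation may still carry an error list.
            if "error" in op:
                raise RuntimeError(op["error"])
            return op
        time.sleep(poll_seconds)


def start_replica(project, replica_instance):
    # Imported here so wait_for_operation stays usable without the library.
    from googleapiclient.discovery import build

    service = build("sqladmin", "v1")
    op = (
        service.instances()
        .startReplica(project=project, instance=replica_instance)
        .execute()
    )
    return wait_for_operation(service, project, op["name"])
```

The same polling helper works for stopReplica and the other Operation-returning methods in this collection.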
stopReplica(project, instance, x__xgafv=None)
Stops the replication in the read replica instance.

Args:
  project: string, ID of the project that contains the read replica. (required)
  instance: string, Cloud SQL read replica instance name. (required)
  x__xgafv: string, V1 error format.
    Allowed values
      1 - v1 error format
      2 - v2 error format

Returns:
  An object of the form:

    { # An Operation resource. For successful operations that return an Operation resource, only the fields relevant to the operation are populated in the resource.
  "acquireSsrsLeaseContext": { # Acquire SSRS lease context. # The context for acquire SSRS lease operation, if applicable.
    "duration": "A String", # Lease duration needed for the SSRS setup.
    "reportDatabase": "A String", # The report database to be used for the SSRS setup.
    "serviceLogin": "A String", # The username to be used as the service login to connect to the report database for SSRS setup.
    "setupLogin": "A String", # The username to be used as the setup login to connect to the database server for SSRS setup.
  },
  "apiWarning": { # An Admin API warning message. # An Admin API warning message.
    "code": "A String", # Code to uniquely identify the warning type.
    "message": "A String", # The warning message.
    "region": "A String", # The region name for REGION_UNREACHABLE warning.
  },
  "backupContext": { # Backup context. # The context for backup operation, if applicable.
    "backupId": "A String", # The identifier of the backup.
    "kind": "A String", # This is always `sql#backupContext`.
  },
  "endTime": "A String", # The time this operation finished in UTC timezone in [RFC 3339](https://tools.ietf.org/html/rfc3339) format, for example `2012-11-15T16:19:00.094Z`.
  "error": { # Database instance operation errors list wrapper. # If errors occurred during processing of this operation, this field will be populated.
    "errors": [ # The list of errors encountered while processing this operation.
      { # Database instance operation error.
        "code": "A String", # Identifies the specific error that occurred.
        "kind": "A String", # This is always `sql#operationError`.
        "message": "A String", # Additional information about the error encountered.
      },
    ],
    "kind": "A String", # This is always `sql#operationErrors`.
  },
  "exportContext": { # Database instance export context. # The context for export operation, if applicable.
    "bakExportOptions": { # Options for exporting BAK files (SQL Server-only)
      "bakType": "A String", # Type of this bak file will be export, FULL or DIFF, SQL Server only
      "copyOnly": True or False, # Deprecated: copy_only is deprecated. Use differential_base instead
      "differentialBase": True or False, # Whether or not the backup can be used as a differential base copy_only backup can not be served as differential base
      "exportLogEndTime": "A String", # Optional. The end timestamp when transaction log will be included in the export operation. [RFC 3339](https://tools.ietf.org/html/rfc3339) format (for example, `2023-10-01T16:19:00.094`) in UTC. When omitted, all available logs until current time will be included. Only applied to Cloud SQL for SQL Server.
      "exportLogStartTime": "A String", # Optional. The begin timestamp when transaction log will be included in the export operation. [RFC 3339](https://tools.ietf.org/html/rfc3339) format (for example, `2023-10-01T16:19:00.094`) in UTC. When omitted, all available logs from the beginning of retention period will be included. Only applied to Cloud SQL for SQL Server.
      "stripeCount": 42, # Option for specifying how many stripes to use for the export. If blank, and the value of the striped field is true, the number of stripes is automatically chosen.
      "striped": True or False, # Whether or not the export should be striped.
    },
    "csvExportOptions": { # Options for exporting data as CSV. `MySQL` and `PostgreSQL` instances only.
      "escapeCharacter": "A String", # Specifies the character that should appear before a data character that needs to be escaped.
      "fieldsTerminatedBy": "A String", # Specifies the character that separates columns within each row (line) of the file.
      "linesTerminatedBy": "A String", # This is used to separate lines. If a line does not contain all fields, the rest of the columns are set to their default values.
      "quoteCharacter": "A String", # Specifies the quoting character to be used when a data value is quoted.
      "selectQuery": "A String", # The select query used to extract the data.
    },
    "databases": [ # Databases to be exported. `MySQL instances:` If `fileType` is `SQL` and no database is specified, all databases are exported, except for the `mysql` system database. If `fileType` is `CSV`, you can specify one database, either by using this property or by using the `csvExportOptions.selectQuery` property, which takes precedence over this property. `PostgreSQL instances:` You must specify one database to be exported. If `fileType` is `CSV`, this database must match the one specified in the `csvExportOptions.selectQuery` property. `SQL Server instances:` You must specify one database to be exported, and the `fileType` must be `BAK`.
      "A String",
    ],
    "fileType": "A String", # The file type for the specified uri.
    "kind": "A String", # This is always `sql#exportContext`.
    "offload": True or False, # Option for export offload.
    "sqlExportOptions": { # Options for exporting data as SQL statements.
      "mysqlExportOptions": { # Options for exporting from MySQL.
        "masterData": 42, # Option to include SQL statement required to set up replication. If set to `1`, the dump file includes a CHANGE MASTER TO statement with the binary log coordinates, and --set-gtid-purged is set to ON. If set to `2`, the CHANGE MASTER TO statement is written as a SQL comment and has no effect. If set to any value other than `1`, --set-gtid-purged is set to OFF.
      },
      "parallel": True or False, # Optional. Whether or not the export should be parallel.
      "postgresExportOptions": { # Options for exporting from a Cloud SQL for PostgreSQL instance.
        "clean": True or False, # Optional. Use this option to include DROP SQL statements. These statements are used to delete database objects before running the import operation.
        "ifExists": True or False, # Optional. Option to include an IF EXISTS SQL statement with each DROP statement produced by clean.
      },
      "schemaOnly": True or False, # Export only schemas.
      "tables": [ # Tables to export, or that were exported, from the specified database. If you specify tables, specify one and only one database. For PostgreSQL instances, you can specify only one table.
        "A String",
      ],
      "threads": 42, # Optional. The number of threads to use for parallel export.
    },
    "uri": "A String", # The path to the file in Google Cloud Storage where the export will be stored. The URI is in the form `gs://bucketName/fileName`. If the file already exists, the request succeeds, but the operation fails. If `fileType` is `SQL` and the filename ends with .gz, the contents are compressed.
  },
  "importContext": { # Database instance import context. # The context for import operation, if applicable.
    "bakImportOptions": { # Import parameters specific to SQL Server .BAK files
      "bakType": "A String", # Type of the bak content, FULL or DIFF.
      "encryptionOptions": {
        "certPath": "A String", # Path to the Certificate (.cer) in Cloud Storage, in the form `gs://bucketName/fileName`. The instance must have write permissions to the bucket and read access to the file.
        "pvkPassword": "A String", # Password that encrypts the private key
        "pvkPath": "A String", # Path to the Certificate Private Key (.pvk) in Cloud Storage, in the form `gs://bucketName/fileName`. The instance must have write permissions to the bucket and read access to the file.
      },
      "noRecovery": True or False, # Whether or not the backup importing will restore database with NORECOVERY option Applies only to Cloud SQL for SQL Server.
      "recoveryOnly": True or False, # Whether or not the backup importing request will just bring database online without downloading Bak content only one of "no_recovery" and "recovery_only" can be true otherwise error will return. Applies only to Cloud SQL for SQL Server.
      "stopAt": "A String", # Optional. The timestamp when the import should stop. This timestamp is in the [RFC 3339](https://tools.ietf.org/html/rfc3339) format (for example, `2023-10-01T16:19:00.094`). This field is equivalent to the STOPAT keyword and applies to Cloud SQL for SQL Server only.
      "stopAtMark": "A String", # Optional. The marked transaction where the import should stop. This field is equivalent to the STOPATMARK keyword and applies to Cloud SQL for SQL Server only.
      "striped": True or False, # Whether or not the backup set being restored is striped. Applies only to Cloud SQL for SQL Server.
    },
    "csvImportOptions": { # Options for importing data as CSV.
      "columns": [ # The columns to which CSV data is imported. If not specified, all columns of the database table are loaded with CSV data.
        "A String",
      ],
      "escapeCharacter": "A String", # Specifies the character that should appear before a data character that needs to be escaped.
      "fieldsTerminatedBy": "A String", # Specifies the character that separates columns within each row (line) of the file.
      "linesTerminatedBy": "A String", # This is used to separate lines. If a line does not contain all fields, the rest of the columns are set to their default values.
      "quoteCharacter": "A String", # Specifies the quoting character to be used when a data value is quoted.
      "table": "A String", # The table to which CSV data is imported.
    },
    "database": "A String", # The target database for the import. If `fileType` is `SQL`, this field is required only if the import file does not specify a database, and is overridden by any database specification in the import file. If `fileType` is `CSV`, one database must be specified.
    "fileType": "A String", # The file type for the specified uri. * `SQL`: The file contains SQL statements. * `CSV`: The file contains CSV data. * `BAK`: The file contains backup data for a SQL Server instance.
    "importUser": "A String", # The PostgreSQL user for this import operation. PostgreSQL instances only.
    "kind": "A String", # This is always `sql#importContext`.
    "sqlImportOptions": { # Optional. Options for importing data from SQL statements.
      "parallel": True or False, # Optional. Whether or not the import should be parallel.
      "postgresImportOptions": { # Optional. Options for importing from a Cloud SQL for PostgreSQL instance.
        "clean": True or False, # Optional. The --clean flag for the pg_restore utility. This flag applies only if you enabled Cloud SQL to import files in parallel.
        "ifExists": True or False, # Optional. The --if-exists flag for the pg_restore utility. This flag applies only if you enabled Cloud SQL to import files in parallel.
      },
      "threads": 42, # Optional. The number of threads to use for parallel import.
    },
    "uri": "A String", # Path to the import file in Cloud Storage, in the form `gs://bucketName/fileName`. Compressed gzip files (.gz) are supported when `fileType` is `SQL`. The instance must have write permissions to the bucket and read access to the file.
  },
  "insertTime": "A String", # The time this operation was enqueued in UTC timezone in [RFC 3339](https://tools.ietf.org/html/rfc3339) format, for example `2012-11-15T16:19:00.094Z`.
  "kind": "A String", # This is always `sql#operation`.
  "name": "A String", # An identifier that uniquely identifies the operation. You can use this identifier to retrieve the Operations resource that has information about the operation.
  "operationType": "A String", # The type of the operation. Valid values are: * `CREATE` * `DELETE` * `UPDATE` * `RESTART` * `IMPORT` * `EXPORT` * `BACKUP_VOLUME` * `RESTORE_VOLUME` * `CREATE_USER` * `DELETE_USER` * `CREATE_DATABASE` * `DELETE_DATABASE`
  "selfLink": "A String", # The URI of this resource.
  "startTime": "A String", # The time this operation actually started in UTC timezone in [RFC 3339](https://tools.ietf.org/html/rfc3339) format, for example `2012-11-15T16:19:00.094Z`.
  "status": "A String", # The status of an operation.
  "targetId": "A String", # Name of the database instance related to this operation.
  "targetLink": "A String",
  "targetProject": "A String", # The project ID of the target instance related to this operation.
  "user": "A String", # The email address of the user who initiated this operation.
}
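
The `insertTime` and `startTime` fields of the returned Operation are RFC 3339 UTC timestamps (for example, `2012-11-15T16:19:00.094Z`), so the time an operation spent queued can be computed client-side. A small sketch over the documented fields (the helper name is illustrative, not part of the API):

```python
from datetime import datetime

def operation_summary(op):
    """Summarize an Operation dict using its documented fields."""
    summary = {
        "name": op.get("name"),
        "type": op.get("operationType"),
        "status": op.get("status"),
        "target": op.get("targetId"),
    }
    # insertTime/startTime are RFC 3339 UTC timestamps like
    # 2012-11-15T16:19:00.094Z; normalize the trailing Z for %z parsing.
    fmt = "%Y-%m-%dT%H:%M:%S.%f%z"
    inserted, started = op.get("insertTime"), op.get("startTime")
    if inserted and started:
        t0 = datetime.strptime(inserted.replace("Z", "+0000"), fmt)
        t1 = datetime.strptime(started.replace("Z", "+0000"), fmt)
        summary["queue_delay_s"] = (t1 - t0).total_seconds()
    return summary
```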
switchover(project, instance, dbTimeout=None, x__xgafv=None)
Switches over from the primary instance to the designated DR replica instance.

Args:
  project: string, ID of the project that contains the replica. (required)
  instance: string, Cloud SQL read replica instance name. (required)
  dbTimeout: string, Optional. (MySQL only) Cloud SQL instance operations timeout, which is a sum of all database operations. Default value is 10 minutes and can be modified to a maximum value of 24 hours.
  x__xgafv: string, V1 error format.
    Allowed values
      1 - v1 error format
      2 - v2 error format

Returns:
  An object of the form:

    { # An Operation resource. For successful operations that return an Operation resource, only the fields relevant to the operation are populated in the resource.
  "acquireSsrsLeaseContext": { # Acquire SSRS lease context. # The context for acquire SSRS lease operation, if applicable.
    "duration": "A String", # Lease duration needed for the SSRS setup.
    "reportDatabase": "A String", # The report database to be used for the SSRS setup.
    "serviceLogin": "A String", # The username to be used as the service login to connect to the report database for SSRS setup.
    "setupLogin": "A String", # The username to be used as the setup login to connect to the database server for SSRS setup.
  },
  "apiWarning": { # An Admin API warning message. # An Admin API warning message.
    "code": "A String", # Code to uniquely identify the warning type.
    "message": "A String", # The warning message.
    "region": "A String", # The region name for REGION_UNREACHABLE warning.
  },
  "backupContext": { # Backup context. # The context for backup operation, if applicable.
    "backupId": "A String", # The identifier of the backup.
    "kind": "A String", # This is always `sql#backupContext`.
  },
  "endTime": "A String", # The time this operation finished in UTC timezone in [RFC 3339](https://tools.ietf.org/html/rfc3339) format, for example `2012-11-15T16:19:00.094Z`.
  "error": { # Database instance operation errors list wrapper. # If errors occurred during processing of this operation, this field will be populated.
    "errors": [ # The list of errors encountered while processing this operation.
      { # Database instance operation error.
        "code": "A String", # Identifies the specific error that occurred.
        "kind": "A String", # This is always `sql#operationError`.
        "message": "A String", # Additional information about the error encountered.
      },
    ],
    "kind": "A String", # This is always `sql#operationErrors`.
  },
  "exportContext": { # Database instance export context. # The context for export operation, if applicable.
    "bakExportOptions": { # Options for exporting BAK files (SQL Server-only)
      "bakType": "A String", # Type of this bak file will be export, FULL or DIFF, SQL Server only
      "copyOnly": True or False, # Deprecated: copy_only is deprecated. Use differential_base instead
      "differentialBase": True or False, # Whether or not the backup can be used as a differential base copy_only backup can not be served as differential base
      "exportLogEndTime": "A String", # Optional. The end timestamp when transaction log will be included in the export operation. [RFC 3339](https://tools.ietf.org/html/rfc3339) format (for example, `2023-10-01T16:19:00.094`) in UTC. When omitted, all available logs until current time will be included. Only applied to Cloud SQL for SQL Server.
      "exportLogStartTime": "A String", # Optional. The begin timestamp when transaction log will be included in the export operation. [RFC 3339](https://tools.ietf.org/html/rfc3339) format (for example, `2023-10-01T16:19:00.094`) in UTC. When omitted, all available logs from the beginning of retention period will be included. Only applied to Cloud SQL for SQL Server.
      "stripeCount": 42, # Option for specifying how many stripes to use for the export. If blank, and the value of the striped field is true, the number of stripes is automatically chosen.
      "striped": True or False, # Whether or not the export should be striped.
    },
    "csvExportOptions": { # Options for exporting data as CSV. `MySQL` and `PostgreSQL` instances only.
      "escapeCharacter": "A String", # Specifies the character that should appear before a data character that needs to be escaped.
      "fieldsTerminatedBy": "A String", # Specifies the character that separates columns within each row (line) of the file.
      "linesTerminatedBy": "A String", # This is used to separate lines. If a line does not contain all fields, the rest of the columns are set to their default values.
      "quoteCharacter": "A String", # Specifies the quoting character to be used when a data value is quoted.
      "selectQuery": "A String", # The select query used to extract the data.
    },
    "databases": [ # Databases to be exported. `MySQL instances:` If `fileType` is `SQL` and no database is specified, all databases are exported, except for the `mysql` system database. If `fileType` is `CSV`, you can specify one database, either by using this property or by using the `csvExportOptions.selectQuery` property, which takes precedence over this property. `PostgreSQL instances:` You must specify one database to be exported. If `fileType` is `CSV`, this database must match the one specified in the `csvExportOptions.selectQuery` property. `SQL Server instances:` You must specify one database to be exported, and the `fileType` must be `BAK`.
      "A String",
    ],
    "fileType": "A String", # The file type for the specified uri.
    "kind": "A String", # This is always `sql#exportContext`.
    "offload": True or False, # Option for export offload.
    "sqlExportOptions": { # Options for exporting data as SQL statements.
      "mysqlExportOptions": { # Options for exporting from MySQL.
        "masterData": 42, # Option to include SQL statement required to set up replication. If set to `1`, the dump file includes a CHANGE MASTER TO statement with the binary log coordinates, and --set-gtid-purged is set to ON. If set to `2`, the CHANGE MASTER TO statement is written as a SQL comment and has no effect. If set to any value other than `1`, --set-gtid-purged is set to OFF.
      },
      "parallel": True or False, # Optional. Whether or not the export should be parallel.
      "postgresExportOptions": { # Options for exporting from a Cloud SQL for PostgreSQL instance.
        "clean": True or False, # Optional. Use this option to include DROP SQL statements. These statements are used to delete database objects before running the import operation.
        "ifExists": True or False, # Optional. Option to include an IF EXISTS SQL statement with each DROP statement produced by clean.
      },
      "schemaOnly": True or False, # Export only schemas.
      "tables": [ # Tables to export, or that were exported, from the specified database. If you specify tables, specify one and only one database. For PostgreSQL instances, you can specify only one table.
        "A String",
      ],
      "threads": 42, # Optional. The number of threads to use for parallel export.
    },
    "uri": "A String", # The path to the file in Google Cloud Storage where the export will be stored. The URI is in the form `gs://bucketName/fileName`. If the file already exists, the request succeeds, but the operation fails. If `fileType` is `SQL` and the filename ends with .gz, the contents are compressed.
  },
  "importContext": { # Database instance import context. # The context for import operation, if applicable.
    "bakImportOptions": { # Import parameters specific to SQL Server .BAK files
      "bakType": "A String", # Type of the bak content, FULL or DIFF.
      "encryptionOptions": {
        "certPath": "A String", # Path to the Certificate (.cer) in Cloud Storage, in the form `gs://bucketName/fileName`. The instance must have write permissions to the bucket and read access to the file.
        "pvkPassword": "A String", # Password that encrypts the private key
        "pvkPath": "A String", # Path to the Certificate Private Key (.pvk) in Cloud Storage, in the form `gs://bucketName/fileName`. The instance must have write permissions to the bucket and read access to the file.
      },
      "noRecovery": True or False, # Whether or not the backup importing will restore database with NORECOVERY option Applies only to Cloud SQL for SQL Server.
      "recoveryOnly": True or False, # Whether or not the backup importing request will just bring database online without downloading Bak content only one of "no_recovery" and "recovery_only" can be true otherwise error will return. Applies only to Cloud SQL for SQL Server.
      "stopAt": "A String", # Optional. The timestamp when the import should stop. This timestamp is in the [RFC 3339](https://tools.ietf.org/html/rfc3339) format (for example, `2023-10-01T16:19:00.094`). This field is equivalent to the STOPAT keyword and applies to Cloud SQL for SQL Server only.
      "stopAtMark": "A String", # Optional. The marked transaction where the import should stop. This field is equivalent to the STOPATMARK keyword and applies to Cloud SQL for SQL Server only.
      "striped": True or False, # Whether or not the backup set being restored is striped. Applies only to Cloud SQL for SQL Server.
    },
    "csvImportOptions": { # Options for importing data as CSV.
      "columns": [ # The columns to which CSV data is imported. If not specified, all columns of the database table are loaded with CSV data.
        "A String",
      ],
      "escapeCharacter": "A String", # Specifies the character that should appear before a data character that needs to be escaped.
      "fieldsTerminatedBy": "A String", # Specifies the character that separates columns within each row (line) of the file.
      "linesTerminatedBy": "A String", # This is used to separate lines. If a line does not contain all fields, the rest of the columns are set to their default values.
      "quoteCharacter": "A String", # Specifies the quoting character to be used when a data value is quoted.
      "table": "A String", # The table to which CSV data is imported.
    },
    "database": "A String", # The target database for the import. If `fileType` is `SQL`, this field is required only if the import file does not specify a database, and is overridden by any database specification in the import file. If `fileType` is `CSV`, one database must be specified.
    "fileType": "A String", # The file type for the specified uri. * `SQL`: The file contains SQL statements. * `CSV`: The file contains CSV data. * `BAK`: The file contains backup data for a SQL Server instance.
    "importUser": "A String", # The PostgreSQL user for this import operation. PostgreSQL instances only.
    "kind": "A String", # This is always `sql#importContext`.
    "sqlImportOptions": { # Optional. Options for importing data from SQL statements.
      "parallel": True or False, # Optional. Whether or not the import should be parallel.
      "postgresImportOptions": { # Optional. Options for importing from a Cloud SQL for PostgreSQL instance.
        "clean": True or False, # Optional. The --clean flag for the pg_restore utility. This flag applies only if you enabled Cloud SQL to import files in parallel.
        "ifExists": True or False, # Optional. The --if-exists flag for the pg_restore utility. This flag applies only if you enabled Cloud SQL to import files in parallel.
      },
      "threads": 42, # Optional. The number of threads to use for parallel import.
    },
    "uri": "A String", # Path to the import file in Cloud Storage, in the form `gs://bucketName/fileName`. Compressed gzip files (.gz) are supported when `fileType` is `SQL`. The instance must have write permissions to the bucket and read access to the file.
  },
  "insertTime": "A String", # The time this operation was enqueued in UTC timezone in [RFC 3339](https://tools.ietf.org/html/rfc3339) format, for example `2012-11-15T16:19:00.094Z`.
  "kind": "A String", # This is always `sql#operation`.
  "name": "A String", # An identifier that uniquely identifies the operation. You can use this identifier to retrieve the Operations resource that has information about the operation.
  "operationType": "A String", # The type of the operation. Valid values are: * `CREATE` * `DELETE` * `UPDATE` * `RESTART` * `IMPORT` * `EXPORT` * `BACKUP_VOLUME` * `RESTORE_VOLUME` * `CREATE_USER` * `DELETE_USER` * `CREATE_DATABASE` * `DELETE_DATABASE`
  "selfLink": "A String", # The URI of this resource.
  "startTime": "A String", # The time this operation actually started in UTC timezone in [RFC 3339](https://tools.ietf.org/html/rfc3339) format, for example `2012-11-15T16:19:00.094Z`.
  "status": "A String", # The status of an operation.
  "targetId": "A String", # Name of the database instance related to this operation.
  "targetLink": "A String",
  "targetProject": "A String", # The project ID of the target instance related to this operation.
  "user": "A String", # The email address of the user who initiated this operation.
}
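
The `dbTimeout` parameter above is a duration string. Assuming the standard protobuf JSON Duration encoding (seconds with an `s` suffix, e.g. `600s`; this encoding is an assumption, check the API before relying on it), a hypothetical helper to build the value from a `timedelta` while enforcing the documented 24-hour maximum:

```python
from datetime import timedelta

def db_timeout(value):
    """Format a timedelta as a duration string such as '600s'.

    Assumes the protobuf JSON Duration encoding; the documented range
    for dbTimeout is 10 minutes (the default) up to 24 hours.
    """
    if not timedelta(0) < value <= timedelta(hours=24):
        raise ValueError("dbTimeout must be positive and at most 24 hours")
    seconds = value.total_seconds()
    # Emit whole seconds when possible, matching e.g. '600s'.
    return f"{int(seconds)}s" if seconds == int(seconds) else f"{seconds}s"
```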
truncateLog(project, instance, body=None, x__xgafv=None)
Truncates the MySQL general and slow query log tables. MySQL only.

Args:
  project: string, Project ID of the Cloud SQL project. (required)
  instance: string, Cloud SQL instance ID. This does not include the project ID. (required)
  body: object, The request body.
    The object takes the form of:

{ # Instance truncate log request.
  "truncateLogContext": { # Database Instance truncate log context. # Contains details about the truncate log operation.
    "kind": "A String", # This is always `sql#truncateLogContext`.
    "logType": "A String", # The type of log to truncate. Valid values are `MYSQL_GENERAL_TABLE` and `MYSQL_SLOW_TABLE`.
  },
}

  x__xgafv: string, V1 error format.
    Allowed values
      1 - v1 error format
      2 - v2 error format

Returns:
  An object of the form:

    { # An Operation resource. For successful operations that return an Operation resource, only the fields relevant to the operation are populated in the resource.
  "acquireSsrsLeaseContext": { # Acquire SSRS lease context. # The context for acquire SSRS lease operation, if applicable.
    "duration": "A String", # Lease duration needed for the SSRS setup.
    "reportDatabase": "A String", # The report database to be used for the SSRS setup.
    "serviceLogin": "A String", # The username to be used as the service login to connect to the report database for SSRS setup.
    "setupLogin": "A String", # The username to be used as the setup login to connect to the database server for SSRS setup.
  },
  "apiWarning": { # An Admin API warning message. # An Admin API warning message.
    "code": "A String", # Code to uniquely identify the warning type.
    "message": "A String", # The warning message.
    "region": "A String", # The region name for REGION_UNREACHABLE warning.
  },
  "backupContext": { # Backup context. # The context for backup operation, if applicable.
    "backupId": "A String", # The identifier of the backup.
    "kind": "A String", # This is always `sql#backupContext`.
  },
  "endTime": "A String", # The time this operation finished in UTC timezone in [RFC 3339](https://tools.ietf.org/html/rfc3339) format, for example `2012-11-15T16:19:00.094Z`.
  "error": { # Database instance operation errors list wrapper. # If errors occurred during processing of this operation, this field will be populated.
    "errors": [ # The list of errors encountered while processing this operation.
      { # Database instance operation error.
        "code": "A String", # Identifies the specific error that occurred.
        "kind": "A String", # This is always `sql#operationError`.
        "message": "A String", # Additional information about the error encountered.
      },
    ],
    "kind": "A String", # This is always `sql#operationErrors`.
  },
  "exportContext": { # Database instance export context. # The context for export operation, if applicable.
    "bakExportOptions": { # Options for exporting BAK files (SQL Server-only)
      "bakType": "A String", # Type of this bak file will be export, FULL or DIFF, SQL Server only
      "copyOnly": True or False, # Deprecated: copy_only is deprecated. Use differential_base instead
      "differentialBase": True or False, # Whether or not the backup can be used as a differential base copy_only backup can not be served as differential base
      "exportLogEndTime": "A String", # Optional. The end timestamp when transaction log will be included in the export operation. [RFC 3339](https://tools.ietf.org/html/rfc3339) format (for example, `2023-10-01T16:19:00.094`) in UTC. When omitted, all available logs until current time will be included. Only applied to Cloud SQL for SQL Server.
      "exportLogStartTime": "A String", # Optional. The begin timestamp when transaction log will be included in the export operation. [RFC 3339](https://tools.ietf.org/html/rfc3339) format (for example, `2023-10-01T16:19:00.094`) in UTC. When omitted, all available logs from the beginning of retention period will be included. Only applied to Cloud SQL for SQL Server.
      "stripeCount": 42, # Option for specifying how many stripes to use for the export. If blank, and the value of the striped field is true, the number of stripes is automatically chosen.
      "striped": True or False, # Whether or not the export should be striped.
    },
    "csvExportOptions": { # Options for exporting data as CSV. `MySQL` and `PostgreSQL` instances only.
      "escapeCharacter": "A String", # Specifies the character that should appear before a data character that needs to be escaped.
      "fieldsTerminatedBy": "A String", # Specifies the character that separates columns within each row (line) of the file.
      "linesTerminatedBy": "A String", # This is used to separate lines. If a line does not contain all fields, the rest of the columns are set to their default values.
      "quoteCharacter": "A String", # Specifies the quoting character to be used when a data value is quoted.
      "selectQuery": "A String", # The select query used to extract the data.
    },
    "databases": [ # Databases to be exported. `MySQL instances:` If `fileType` is `SQL` and no database is specified, all databases are exported, except for the `mysql` system database. If `fileType` is `CSV`, you can specify one database, either by using this property or by using the `csvExportOptions.selectQuery` property, which takes precedence over this property. `PostgreSQL instances:` You must specify one database to be exported. If `fileType` is `CSV`, this database must match the one specified in the `csvExportOptions.selectQuery` property. `SQL Server instances:` You must specify one database to be exported, and the `fileType` must be `BAK`.
      "A String",
    ],
    "fileType": "A String", # The file type for the specified uri.
    "kind": "A String", # This is always `sql#exportContext`.
    "offload": True or False, # Option for export offload.
    "sqlExportOptions": { # Options for exporting data as SQL statements.
      "mysqlExportOptions": { # Options for exporting from MySQL.
        "masterData": 42, # Option to include SQL statement required to set up replication. If set to `1`, the dump file includes a CHANGE MASTER TO statement with the binary log coordinates, and --set-gtid-purged is set to ON. If set to `2`, the CHANGE MASTER TO statement is written as a SQL comment and has no effect. If set to any value other than `1`, --set-gtid-purged is set to OFF.
      },
      "parallel": True or False, # Optional. Whether or not the export should be parallel.
      "postgresExportOptions": { # Options for exporting from a Cloud SQL for PostgreSQL instance.
        "clean": True or False, # Optional. Use this option to include DROP SQL statements. These statements are used to delete database objects before running the import operation.
        "ifExists": True or False, # Optional. Option to include an IF EXISTS SQL statement with each DROP statement produced by clean.
      },
      "schemaOnly": True or False, # Export only schemas.
      "tables": [ # Tables to export, or that were exported, from the specified database. If you specify tables, specify one and only one database. For PostgreSQL instances, you can specify only one table.
        "A String",
      ],
      "threads": 42, # Optional. The number of threads to use for parallel export.
    },
    "uri": "A String", # The path to the file in Google Cloud Storage where the export will be stored. The URI is in the form `gs://bucketName/fileName`. If the file already exists, the request succeeds, but the operation fails. If `fileType` is `SQL` and the filename ends with .gz, the contents are compressed.
  },
  "importContext": { # Database instance import context. # The context for import operation, if applicable.
    "bakImportOptions": { # Import parameters specific to SQL Server .BAK files
      "bakType": "A String", # Type of the bak content, FULL or DIFF.
      "encryptionOptions": {
        "certPath": "A String", # Path to the Certificate (.cer) in Cloud Storage, in the form `gs://bucketName/fileName`. The instance must have write permissions to the bucket and read access to the file.
        "pvkPassword": "A String", # Password that encrypts the private key
        "pvkPath": "A String", # Path to the Certificate Private Key (.pvk) in Cloud Storage, in the form `gs://bucketName/fileName`. The instance must have write permissions to the bucket and read access to the file.
      },
      "noRecovery": True or False, # Whether or not the backup importing will restore database with NORECOVERY option Applies only to Cloud SQL for SQL Server.
      "recoveryOnly": True or False, # Whether or not the backup importing request will just bring database online without downloading Bak content only one of "no_recovery" and "recovery_only" can be true otherwise error will return. Applies only to Cloud SQL for SQL Server.
      "stopAt": "A String", # Optional. The timestamp when the import should stop. This timestamp is in the [RFC 3339](https://tools.ietf.org/html/rfc3339) format (for example, `2023-10-01T16:19:00.094`). This field is equivalent to the STOPAT keyword and applies to Cloud SQL for SQL Server only.
      "stopAtMark": "A String", # Optional. The marked transaction where the import should stop. This field is equivalent to the STOPATMARK keyword and applies to Cloud SQL for SQL Server only.
      "striped": True or False, # Whether or not the backup set being restored is striped. Applies only to Cloud SQL for SQL Server.
    },
    "csvImportOptions": { # Options for importing data as CSV.
      "columns": [ # The columns to which CSV data is imported. If not specified, all columns of the database table are loaded with CSV data.
        "A String",
      ],
      "escapeCharacter": "A String", # Specifies the character that should appear before a data character that needs to be escaped.
      "fieldsTerminatedBy": "A String", # Specifies the character that separates columns within each row (line) of the file.
      "linesTerminatedBy": "A String", # This is used to separate lines. If a line does not contain all fields, the rest of the columns are set to their default values.
      "quoteCharacter": "A String", # Specifies the quoting character to be used when a data value is quoted.
      "table": "A String", # The table to which CSV data is imported.
    },
    "database": "A String", # The target database for the import. If `fileType` is `SQL`, this field is required only if the import file does not specify a database, and is overridden by any database specification in the import file. If `fileType` is `CSV`, one database must be specified.
    "fileType": "A String", # The file type for the specified uri. * `SQL`: The file contains SQL statements. * `CSV`: The file contains CSV data. * `BAK`: The file contains backup data for a SQL Server instance.
    "importUser": "A String", # The PostgreSQL user for this import operation. PostgreSQL instances only.
    "kind": "A String", # This is always `sql#importContext`.
    "sqlImportOptions": { # Optional. Options for importing data from SQL statements.
      "parallel": True or False, # Optional. Whether or not the import should be parallel.
      "postgresImportOptions": { # Optional. Options for importing from a Cloud SQL for PostgreSQL instance.
        "clean": True or False, # Optional. The --clean flag for the pg_restore utility. This flag applies only if you enabled Cloud SQL to import files in parallel.
        "ifExists": True or False, # Optional. The --if-exists flag for the pg_restore utility. This flag applies only if you enabled Cloud SQL to import files in parallel.
      },
      "threads": 42, # Optional. The number of threads to use for parallel import.
    },
    "uri": "A String", # Path to the import file in Cloud Storage, in the form `gs://bucketName/fileName`. Compressed gzip files (.gz) are supported when `fileType` is `SQL`. The instance must have write permissions to the bucket and read access to the file.
  },
  "insertTime": "A String", # The time this operation was enqueued in UTC timezone in [RFC 3339](https://tools.ietf.org/html/rfc3339) format, for example `2012-11-15T16:19:00.094Z`.
  "kind": "A String", # This is always `sql#operation`.
  "name": "A String", # An identifier that uniquely identifies the operation. You can use this identifier to retrieve the Operations resource that has information about the operation.
  "operationType": "A String", # The type of the operation. Valid values are: * `CREATE` * `DELETE` * `UPDATE` * `RESTART` * `IMPORT` * `EXPORT` * `BACKUP_VOLUME` * `RESTORE_VOLUME` * `CREATE_USER` * `DELETE_USER` * `CREATE_DATABASE` * `DELETE_DATABASE`
  "selfLink": "A String", # The URI of this resource.
  "startTime": "A String", # The time this operation actually started in UTC timezone in [RFC 3339](https://tools.ietf.org/html/rfc3339) format, for example `2012-11-15T16:19:00.094Z`.
  "status": "A String", # The status of an operation.
  "targetId": "A String", # Name of the database instance related to this operation.
  "targetLink": "A String",
  "targetProject": "A String", # The project ID of the target instance related to this operation.
  "user": "A String", # The email address of the user who initiated this operation.
}
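The Operation resource above is what export, import, and update calls return. As a minimal sketch of working with it, the helpers below inspect an operation dict using only field names documented in the resource (`status`, `operationType`, `targetId`, `exportContext`); the sample `op` dict and the helper names are hypothetical, not part of the API.

```python
def operation_is_terminal(op):
    """Return True when a Cloud SQL Operation resource has finished.

    Per the resource above, `status` reports the state of the operation;
    a finished operation reports `DONE`.
    """
    return op.get("status") == "DONE"

def describe_operation(op):
    """One-line summary built from documented Operation fields."""
    return "{type} on {target}: {status}".format(
        type=op.get("operationType", "UNKNOWN"),
        target=op.get("targetId", "?"),
        status=op.get("status", "?"),
    )

# A hypothetical EXPORT operation, shaped like the resource above.
op = {
    "kind": "sql#operation",
    "operationType": "EXPORT",
    "status": "DONE",
    "targetId": "my-instance",
    "exportContext": {
        "kind": "sql#exportContext",
        "fileType": "CSV",
        "uri": "gs://my-bucket/export.csv",
        "databases": ["mydb"],
    },
}

print(describe_operation(op))   # EXPORT on my-instance: DONE
print(operation_is_terminal(op))  # True
```

In practice you would fetch `op` by executing a request built with the client library and poll until `operation_is_terminal(op)` returns true.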
update(project, instance, body=None, x__xgafv=None)
Updates settings of a Cloud SQL instance. Using this operation might cause your instance to restart.
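Because `update` replaces the entire `settings` object, a common pattern is read-modify-write: start from the settings returned by a get call (preserving `settings.settingsVersion`, which guards against concurrent modifications) and override only the fields you want to change. The sketch below assumes that pattern; `build_update_body` and the sample `current` dict are illustrative, not part of the API.

```python
def build_update_body(current, **setting_overrides):
    """Return an update request body derived from the current instance.

    `current` is an instance resource as returned by a get call. The
    returned body carries the full settings object, with only the
    given overrides changed and `settingsVersion` preserved.
    """
    settings = dict(current["settings"])
    settings.update(setting_overrides)
    return {"settings": settings}

# A hypothetical instance resource, trimmed to the fields used here.
current = {
    "name": "my-instance",
    "settings": {
        "settingsVersion": "7",
        "tier": "db-custom-2-7680",
        "activationPolicy": "ALWAYS",
    },
}

body = build_update_body(current, tier="db-custom-4-15360")
print(body["settings"]["tier"])             # db-custom-4-15360
print(body["settings"]["settingsVersion"])  # 7
```

The resulting `body` is what you would pass as the `body` argument of `update(project, instance, body=...)`.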

Args:
  project: string, Project ID of the project that contains the instance. (required)
  instance: string, Cloud SQL instance ID. This does not include the project ID. (required)
  body: object, The request body.
    The object takes the form of:

{ # A Cloud SQL instance resource.
  "availableMaintenanceVersions": [ # Output only. List all maintenance versions applicable on the instance
    "A String",
  ],
  "backendType": "A String", # The backend type. `SECOND_GEN`: Cloud SQL database instance. `EXTERNAL`: A database server that is not managed by Google. This property is read-only; use the `tier` property in the `settings` object to determine the database type.
  "connectionName": "A String", # Connection name of the Cloud SQL instance used in connection strings.
  "createTime": "A String", # Output only. The time when the instance was created in [RFC 3339](https://tools.ietf.org/html/rfc3339) format, for example `2012-11-15T16:19:00.094Z`.
  "currentDiskSize": "A String", # The current disk usage of the instance in bytes. This property has been deprecated. Use the "cloudsql.googleapis.com/database/disk/bytes_used" metric in Cloud Monitoring API instead. Please see [this announcement](https://groups.google.com/d/msg/google-cloud-sql-announce/I_7-F9EBhT0/BtvFtdFeAgAJ) for details.
  "databaseInstalledVersion": "A String", # Output only. Stores the current database version running on the instance including minor version such as `MYSQL_8_0_18`.
  "databaseVersion": "A String", # The database engine type and version. The `databaseVersion` field cannot be changed after instance creation.
  "diskEncryptionConfiguration": { # Disk encryption configuration for an instance. # Disk encryption configuration specific to an instance.
    "kind": "A String", # This is always `sql#diskEncryptionConfiguration`.
    "kmsKeyName": "A String", # Resource name of KMS key for disk encryption
  },
  "diskEncryptionStatus": { # Disk encryption status for an instance. # Disk encryption status specific to an instance.
    "kind": "A String", # This is always `sql#diskEncryptionStatus`.
    "kmsKeyVersionName": "A String", # KMS key version used to encrypt the Cloud SQL instance resource
  },
  "dnsName": "A String", # Output only. The dns name of the instance.
  "etag": "A String", # This field is deprecated and will be removed from a future version of the API. Use the `settings.settingsVersion` field instead.
  "failoverReplica": { # The name and status of the failover replica.
    "available": True or False, # The availability status of the failover replica. A false status indicates that the failover replica is out of sync. The primary instance can only failover to the failover replica when the status is true.
    "name": "A String", # The name of the failover replica. If specified at instance creation, a failover replica is created for the instance. The name doesn't include the project ID.
  },
  "gceZone": "A String", # The Compute Engine zone that the instance is currently serving from. This value could be different from the zone that was specified when the instance was created if the instance has failed over to its secondary zone. WARNING: Changing this might restart the instance.
  "geminiConfig": { # Gemini instance configuration. # Gemini instance configuration.
    "activeQueryEnabled": True or False, # Output only. Whether the active query is enabled.
    "entitled": True or False, # Output only. Whether Gemini is enabled.
    "flagRecommenderEnabled": True or False, # Output only. Whether the flag recommender is enabled.
    "googleVacuumMgmtEnabled": True or False, # Output only. Whether the vacuum management is enabled.
    "indexAdvisorEnabled": True or False, # Output only. Whether the index advisor is enabled.
    "oomSessionCancelEnabled": True or False, # Output only. Whether canceling the out-of-memory (OOM) session is enabled.
  },
  "instanceType": "A String", # The instance type.
  "ipAddresses": [ # The assigned IP addresses for the instance.
    { # Database instance IP mapping
      "ipAddress": "A String", # The IP address assigned.
      "timeToRetire": "A String", # The due time for this IP to be retired in [RFC 3339](https://tools.ietf.org/html/rfc3339) format, for example `2012-11-15T16:19:00.094Z`. This field is only available when the IP is scheduled to be retired.
      "type": "A String", # The type of this IP address. A `PRIMARY` address is a public address that can accept incoming connections. A `PRIVATE` address is a private address that can accept incoming connections. An `OUTGOING` address is the source address of connections originating from the instance, if supported.
    },
  ],
  "ipv6Address": "A String", # The IPv6 address assigned to the instance. (Deprecated) This property was applicable only to First Generation instances.
  "kind": "A String", # This is always `sql#instance`.
  "maintenanceVersion": "A String", # The current software version on the instance.
  "masterInstanceName": "A String", # The name of the instance which will act as primary in the replication setup.
  "maxDiskSize": "A String", # The maximum disk size of the instance in bytes.
  "name": "A String", # Name of the Cloud SQL instance. This does not include the project ID.
  "onPremisesConfiguration": { # On-premises instance configuration. # Configuration specific to on-premises instances.
    "caCertificate": "A String", # PEM representation of the trusted CA's x509 certificate.
    "clientCertificate": "A String", # PEM representation of the replica's x509 certificate.
    "clientKey": "A String", # PEM representation of the replica's private key. The corresponsing public key is encoded in the client's certificate.
    "dumpFilePath": "A String", # The dump file to create the Cloud SQL replica.
    "hostPort": "A String", # The host and port of the on-premises instance in host:port format
    "kind": "A String", # This is always `sql#onPremisesConfiguration`.
    "password": "A String", # The password for connecting to on-premises instance.
    "selectedObjects": [ # Optional. A list of objects that the user selects for replication from an external source instance.
      { # A list of objects that the user selects for replication from an external source instance.
        "database": "A String", # Required. The name of the database to migrate.
      },
    ],
    "sourceInstance": { # Reference to another Cloud SQL instance. # The reference to Cloud SQL instance if the source is Cloud SQL.
      "name": "A String", # The name of the Cloud SQL instance being referenced. This does not include the project ID.
      "project": "A String", # The project ID of the Cloud SQL instance being referenced. The default is the same project ID as the instance references it.
      "region": "A String", # The region of the Cloud SQL instance being referenced.
    },
    "sslOption": "A String", # Optional. SslOption for replica connection to the on-premises source.
    "username": "A String", # The username for connecting to on-premises instance.
  },
  "outOfDiskReport": { # This message wraps up the information written by out-of-disk detection job. # This field represents the report generated by the proactive database wellness job for OutOfDisk issues. * Writers: * the proactive database wellness job for OOD. * Readers: * the proactive database wellness job
    "sqlMinRecommendedIncreaseSizeGb": 42, # The minimum recommended increase size in GigaBytes This field is consumed by the frontend * Writers: * the proactive database wellness job for OOD. * Readers:
    "sqlOutOfDiskState": "A String", # This field represents the state generated by the proactive database wellness job for OutOfDisk issues. * Writers: * the proactive database wellness job for OOD. * Readers: * the proactive database wellness job
  },
  "primaryDnsName": "A String", # Output only. DEPRECATED: please use write_endpoint instead.
  "project": "A String", # The project ID of the project containing the Cloud SQL instance. The Google apps domain is prefixed if applicable.
  "pscServiceAttachmentLink": "A String", # Output only. The link to service attachment of PSC instance.
  "region": "A String", # The geographical region of the Cloud SQL instance. It can be one of the [regions](https://cloud.google.com/sql/docs/mysql/locations#location-r) where Cloud SQL operates: For example, `asia-east1`, `europe-west1`, and `us-central1`. The default value is `us-central1`.
  "replicaConfiguration": { # Read-replica configuration for connecting to the primary instance. # Configuration specific to failover replicas and read replicas.
    "cascadableReplica": True or False, # Optional. Specifies if a SQL Server replica is a cascadable replica. A cascadable replica is a SQL Server cross region replica that supports replica(s) under it.
    "failoverTarget": True or False, # Specifies if the replica is the failover target. If the field is set to `true` the replica will be designated as a failover replica. In case the primary instance fails, the replica instance will be promoted as the new primary instance. Only one replica can be specified as failover target, and the replica has to be in different zone with the primary instance.
    "kind": "A String", # This is always `sql#replicaConfiguration`.
    "mysqlReplicaConfiguration": { # Read-replica configuration specific to MySQL databases. # MySQL specific configuration when replicating from a MySQL on-premises primary instance. Replication configuration information such as the username, password, certificates, and keys are not stored in the instance metadata. The configuration information is used only to set up the replication connection and is stored by MySQL in a file named `master.info` in the data directory.
      "caCertificate": "A String", # PEM representation of the trusted CA's x509 certificate.
      "clientCertificate": "A String", # PEM representation of the replica's x509 certificate.
      "clientKey": "A String", # PEM representation of the replica's private key. The corresponsing public key is encoded in the client's certificate.
      "connectRetryInterval": 42, # Seconds to wait between connect retries. MySQL's default is 60 seconds.
      "dumpFilePath": "A String", # Path to a SQL dump file in Google Cloud Storage from which the replica instance is to be created. The URI is in the form gs://bucketName/fileName. Compressed gzip files (.gz) are also supported. Dumps have the binlog co-ordinates from which replication begins. This can be accomplished by setting --master-data to 1 when using mysqldump.
      "kind": "A String", # This is always `sql#mysqlReplicaConfiguration`.
      "masterHeartbeatPeriod": "A String", # Interval in milliseconds between replication heartbeats.
      "password": "A String", # The password for the replication connection.
      "sslCipher": "A String", # A list of permissible ciphers to use for SSL encryption.
      "username": "A String", # The username for the replication connection.
      "verifyServerCertificate": True or False, # Whether or not to check the primary instance's Common Name value in the certificate that it sends during the SSL handshake.
    },
  },
  "replicaNames": [ # The replicas of the instance.
    "A String",
  ],
  "replicationCluster": { # A primary instance and disaster recovery (DR) replica pair. A DR replica is a cross-region replica that you designate for failover in the event that the primary instance has regional failure. Only applicable to MySQL. # A primary instance and disaster recovery (DR) replica pair. A DR replica is a cross-region replica that you designate for failover in the event that the primary instance experiences regional failure. Only applicable to MySQL.
    "drReplica": True or False, # Output only. Read-only field that indicates whether the replica is a DR replica. This field is not set if the instance is a primary instance.
    "failoverDrReplicaName": "A String", # Optional. If the instance is a primary instance, then this field identifies the disaster recovery (DR) replica. A DR replica is an optional configuration for Enterprise Plus edition instances. If the instance is a read replica, then the field is not set. Set this field to a replica name to designate a DR replica for a primary instance. Remove the replica name to remove the DR replica designation.
    "psaWriteEndpoint": "A String", # Output only. If set, it indicates this instance has a private service access (PSA) dns endpoint that is pointing to the primary instance of the cluster. If this instance is the primary, the dns should be pointing to this instance. After Switchover or Replica failover, this DNS endpoint points to the promoted instance. This is a read-only field, returned to the user as information. This field can exist even if a standalone instance does not yet have a replica, or had a DR replica that was deleted.
  },
  "rootPassword": "A String", # Initial root password. Use only on creation. You must set root passwords before you can connect to PostgreSQL instances.
  "satisfiesPzi": True or False, # Output only. This status indicates whether the instance satisfies PZI. The status is reserved for future use.
  "satisfiesPzs": True or False, # This status indicates whether the instance satisfies PZS. The status is reserved for future use.
  "scheduledMaintenance": { # Any scheduled maintenance for this instance. # The start time of any upcoming scheduled maintenance for this instance.
    "canDefer": True or False,
    "canReschedule": True or False, # If the scheduled maintenance can be rescheduled.
    "scheduleDeadlineTime": "A String", # Maintenance cannot be rescheduled to start beyond this deadline.
    "startTime": "A String", # The start time of any upcoming scheduled maintenance for this instance.
  },
  "secondaryGceZone": "A String", # The Compute Engine zone that the failover instance is currently serving from for a regional instance. This value could be different from the zone that was specified when the instance was created if the instance has failed over to its secondary/failover zone.
  "selfLink": "A String", # The URI of this resource.
  "serverCaCert": { # SslCerts Resource # SSL configuration.
    "cert": "A String", # PEM representation.
    "certSerialNumber": "A String", # Serial number, as extracted from the certificate.
    "commonName": "A String", # User supplied name. Constrained to [a-zA-Z.-_ ]+.
    "createTime": "A String", # The time when the certificate was created in [RFC 3339](https://tools.ietf.org/html/rfc3339) format, for example `2012-11-15T16:19:00.094Z`.
    "expirationTime": "A String", # The time when the certificate expires in [RFC 3339](https://tools.ietf.org/html/rfc3339) format, for example `2012-11-15T16:19:00.094Z`.
    "instance": "A String", # Name of the database instance.
    "kind": "A String", # This is always `sql#sslCert`.
    "selfLink": "A String", # The URI of this resource.
    "sha1Fingerprint": "A String", # Sha1 Fingerprint.
  },
  "serviceAccountEmailAddress": "A String", # The service account email address assigned to the instance. \This property is read-only.
  "settings": { # Database instance settings. # The user settings.
    "activationPolicy": "A String", # The activation policy specifies when the instance is activated; it is applicable only when the instance state is RUNNABLE. Valid values: * `ALWAYS`: The instance is on, and remains so even in the absence of connection requests. * `NEVER`: The instance is off; it is not activated, even if a connection request arrives.
    "activeDirectoryConfig": { # Active Directory configuration, relevant only for Cloud SQL for SQL Server. # Active Directory configuration, relevant only for Cloud SQL for SQL Server.
      "domain": "A String", # The name of the domain (e.g., mydomain.com).
      "kind": "A String", # This is always sql#activeDirectoryConfig.
    },
    "advancedMachineFeatures": { # Specifies options for controlling advanced machine features. # Specifies advanced machine configuration for the instances relevant only for SQL Server.
      "threadsPerCore": 42, # The number of threads per physical core.
    },
    "authorizedGaeApplications": [ # The App Engine app IDs that can access this instance. (Deprecated) Applied to First Generation instances only.
      "A String",
    ],
    "availabilityType": "A String", # Availability type. Potential values: * `ZONAL`: The instance serves data from only one zone. Outages in that zone affect data accessibility. * `REGIONAL`: The instance can serve data from more than one zone in a region (it is highly available)./ For more information, see [Overview of the High Availability Configuration](https://cloud.google.com/sql/docs/mysql/high-availability).
    "backupConfiguration": { # Database instance backup configuration. # The daily backup configuration for the instance.
      "backupRetentionSettings": { # We currently only support backup retention by specifying the number of backups we will retain. # Backup retention settings.
        "retainedBackups": 42, # Depending on the value of retention_unit, this is used to determine if a backup needs to be deleted. If retention_unit is 'COUNT', we will retain this many backups.
        "retentionUnit": "A String", # The unit that 'retained_backups' represents.
      },
      "binaryLogEnabled": True or False, # (MySQL only) Whether binary log is enabled. If backup configuration is disabled, binarylog must be disabled as well.
      "enabled": True or False, # Whether this configuration is enabled.
      "kind": "A String", # This is always `sql#backupConfiguration`.
      "location": "A String", # Location of the backup
      "pointInTimeRecoveryEnabled": True or False, # Whether point in time recovery is enabled.
      "replicationLogArchivingEnabled": True or False, # Reserved for future use.
      "startTime": "A String", # Start time for the daily backup configuration in UTC timezone in the 24 hour format - `HH:MM`.
      "transactionLogRetentionDays": 42, # The number of days of transaction logs we retain for point in time restore, from 1-7.
      "transactionalLogStorageState": "A String", # Output only. This value contains the storage location of transactional logs for the database for point-in-time recovery.
    },
    "collation": "A String", # The name of server Instance collation.
    "connectorEnforcement": "A String", # Specifies if connections must use Cloud SQL connectors. Option values include the following: `NOT_REQUIRED` (Cloud SQL instances can be connected without Cloud SQL Connectors) and `REQUIRED` (Only allow connections that use Cloud SQL Connectors) Note that using REQUIRED disables all existing authorized networks. If this field is not specified when creating a new instance, NOT_REQUIRED is used. If this field is not specified when patching or updating an existing instance, it is left unchanged in the instance.
    "crashSafeReplicationEnabled": True or False, # Configuration specific to read replica instances. Indicates whether database flags for crash-safe replication are enabled. This property was only applicable to First Generation instances.
    "dataCacheConfig": { # Data cache configurations. # Configuration for data cache.
      "dataCacheEnabled": True or False, # Whether data cache is enabled for the instance.
    },
    "dataDiskSizeGb": "A String", # The size of data disk, in GB. The data disk size minimum is 10GB.
    "dataDiskType": "A String", # The type of data disk: `PD_SSD` (default) or `PD_HDD`. Not used for First Generation instances.
    "databaseFlags": [ # The database flags passed to the instance at startup.
      { # Database flags for Cloud SQL instances.
        "name": "A String", # The name of the flag. These flags are passed at instance startup, so include both server options and system variables. Flags are specified with underscores, not hyphens. For more information, see [Configuring Database Flags](https://cloud.google.com/sql/docs/mysql/flags) in the Cloud SQL documentation.
        "value": "A String", # The value of the flag. Boolean flags are set to `on` for true and `off` for false. This field must be omitted if the flag doesn't take a value.
      },
    ],
    "databaseReplicationEnabled": True or False, # Configuration specific to read replica instances. Indicates whether replication is enabled or not. WARNING: Changing this restarts the instance.
    "deletionProtectionEnabled": True or False, # Configuration to protect against accidental instance deletion.
    "denyMaintenancePeriods": [ # Deny maintenance periods
      { # Deny Maintenance Periods. This specifies a date range during which all CSA rollouts are denied.
        "endDate": "A String", # "deny maintenance period" end date. If the year of the end date is empty, the year of the start date also must be empty. In this case, it means the deny maintenance period recurs every year. The date is in format yyyy-mm-dd i.e., 2020-11-01, or mm-dd, i.e., 11-01
        "startDate": "A String", # "deny maintenance period" start date. If the year of the start date is empty, the year of the end date also must be empty. In this case, it means the deny maintenance period recurs every year. The date is in format yyyy-mm-dd i.e., 2020-11-01, or mm-dd, i.e., 11-01
        "time": "A String", # Time in UTC when the "deny maintenance period" starts on start_date and ends on end_date. The time is in format: HH:mm:SS, i.e., 00:00:00
      },
    ],
    "edition": "A String", # Optional. The edition of the instance.
    "enableDataplexIntegration": True or False, # Optional. By default, Cloud SQL instances have schema extraction disabled for Dataplex. When this parameter is set to true, schema extraction for Dataplex on Cloud SQL instances is activated.
    "enableGoogleMlIntegration": True or False, # Optional. When this parameter is set to true, Cloud SQL instances can connect to Vertex AI to pass requests for real-time predictions and insights to the AI. The default value is false. This applies only to Cloud SQL for PostgreSQL instances.
    "insightsConfig": { # Insights configuration. This specifies when Cloud SQL Insights feature is enabled and optional configuration. # Insights configuration, for now relevant only for Postgres.
      "queryInsightsEnabled": True or False, # Whether Query Insights feature is enabled.
      "queryPlansPerMinute": 42, # Number of query execution plans captured by Insights per minute for all queries combined. Default is 5.
      "queryStringLength": 42, # Maximum query length stored in bytes. Default value: 1024 bytes. Range: 256-4500 bytes. Query length more than this field value will be truncated to this value. When unset, query length will be the default value. Changing query length will restart the database.
      "recordApplicationTags": True or False, # Whether Query Insights will record application tags from query when enabled.
      "recordClientAddress": True or False, # Whether Query Insights will record client address when enabled.
    },
    "ipConfiguration": { # IP Management configuration. # The settings for IP Management. This allows to enable or disable the instance IP and manage which external networks can connect to the instance. The IPv4 address cannot be disabled for Second Generation instances.
      "allocatedIpRange": "A String", # The name of the allocated ip range for the private ip Cloud SQL instance. For example: "google-managed-services-default". If set, the instance ip will be created in the allocated range. The range name must comply with [RFC 1035](https://tools.ietf.org/html/rfc1035). Specifically, the name must be 1-63 characters long and match the regular expression `[a-z]([-a-z0-9]*[a-z0-9])?.`
      "authorizedNetworks": [ # The list of external networks that are allowed to connect to the instance using the IP. In 'CIDR' notation, also known as 'slash' notation (for example: `157.197.200.0/24`).
        { # An entry for an Access Control list.
          "expirationTime": "A String", # The time when this access control entry expires in [RFC 3339](https://tools.ietf.org/html/rfc3339) format, for example `2012-11-15T16:19:00.094Z`.
          "kind": "A String", # This is always `sql#aclEntry`.
          "name": "A String", # Optional. A label to identify this entry.
          "value": "A String", # The allowlisted value for the access control list.
        },
      ],
      "enablePrivatePathForGoogleCloudServices": True or False, # Controls connectivity to private IP instances from Google services, such as BigQuery.
      "ipv4Enabled": True or False, # Whether the instance is assigned a public IP address or not.
      "privateNetwork": "A String", # The resource link for the VPC network from which the Cloud SQL instance is accessible for private IP. For example, `/projects/myProject/global/networks/default`. This setting can be updated, but it cannot be removed after it is set.
      "pscConfig": { # PSC settings for a Cloud SQL instance. # PSC settings for this instance.
        "allowedConsumerProjects": [ # Optional. The list of consumer projects that are allow-listed for PSC connections to this instance. This instance can be connected to with PSC from any network in these projects. Each consumer project in this list may be represented by a project number (numeric) or by a project id (alphanumeric).
          "A String",
        ],
        "pscAutoConnections": [ # Optional. The list of settings for requested Private Service Connect consumer endpoints that can be used to connect to this Cloud SQL instance.
          { # Settings for an automatically-setup Private Service Connect consumer endpoint that is used to connect to a Cloud SQL instance.
            "consumerNetwork": "A String", # The consumer network of this consumer endpoint. This must be a resource path that includes both the host project and the network name. For example, `projects/project1/global/networks/network1`. The consumer host project of this network might be different from the consumer service project.
            "consumerNetworkStatus": "A String", # The connection policy status of the consumer network.
            "consumerProject": "A String", # Optional. The project ID of the consumer service project of this consumer endpoint. This is only applicable if `consumer_network` is a shared VPC network.
            "ipAddress": "A String", # The IP address of the consumer endpoint.
            "status": "A String", # The connection status of the consumer endpoint.
          },
        ],
        "pscEnabled": True or False, # Whether PSC connectivity is enabled for this instance.
      },
      "requireSsl": True or False, # Use `ssl_mode` instead. Whether SSL/TLS connections over IP are enforced. If set to false, then allow both non-SSL/non-TLS and SSL/TLS connections. For SSL/TLS connections, the client certificate won't be verified. If set to true, then only allow connections encrypted with SSL/TLS and with valid client certificates. If you want to enforce SSL/TLS without enforcing the requirement for valid client certificates, then use the `ssl_mode` flag instead of the legacy `require_ssl` flag.
      "serverCaMode": "A String", # Specify what type of CA is used for the server certificate.
      "sslMode": "A String", # Specify how SSL/TLS is enforced in database connections. If you must use the `require_ssl` flag for backward compatibility, then only the following value pairs are valid: For PostgreSQL and MySQL: * `ssl_mode=ALLOW_UNENCRYPTED_AND_ENCRYPTED` and `require_ssl=false` * `ssl_mode=ENCRYPTED_ONLY` and `require_ssl=false` * `ssl_mode=TRUSTED_CLIENT_CERTIFICATE_REQUIRED` and `require_ssl=true` For SQL Server: * `ssl_mode=ALLOW_UNENCRYPTED_AND_ENCRYPTED` and `require_ssl=false` * `ssl_mode=ENCRYPTED_ONLY` and `require_ssl=true` The value of `ssl_mode` has priority over the value of `require_ssl`. For example, for the pair `ssl_mode=ENCRYPTED_ONLY` and `require_ssl=false`, `ssl_mode=ENCRYPTED_ONLY` means accept only SSL connections, while `require_ssl=false` means accept both non-SSL and SSL connections. In this case, MySQL and PostgreSQL databases respect `ssl_mode` and accept only SSL connections.
    },
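For illustration, a minimal sketch of assembling the `ipConfiguration` object above client-side before placing it in a settings body. The network label and CIDR values are placeholders, not values from this API:

```python
# Sketch: build an ipConfiguration dict using the field names documented
# above. Entry names and CIDRs here are illustrative placeholders.
def build_ip_configuration(authorized_cidrs, private_network=None):
    """Assemble an ipConfiguration dict for a settings body."""
    config = {
        "ipv4Enabled": True,
        # ENCRYPTED_ONLY: accept only SSL/TLS connections (see sslMode above).
        "sslMode": "ENCRYPTED_ONLY",
        "authorizedNetworks": [
            {"kind": "sql#aclEntry", "name": name, "value": cidr}
            for name, cidr in authorized_cidrs
        ],
    }
    if private_network:
        # Cannot be removed once set (see privateNetwork above).
        config["privateNetwork"] = private_network
    return config

cfg = build_ip_configuration([("office", "203.0.113.0/24")])
```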
    "kind": "A String", # This is always `sql#settings`.
    "locationPreference": { # Preferred location. This specifies where a Cloud SQL instance is located. Note that if the preferred location is not available, the instance will be located as close as possible within the region. Only one location may be specified. # The location preference settings. This allows the instance to be located as near as possible to either an App Engine app or Compute Engine zone for better performance. App Engine co-location was only applicable to First Generation instances.
      "followGaeApplication": "A String", # The App Engine application to follow; it must be in the same region as the Cloud SQL instance. WARNING: Changing this might restart the instance.
      "kind": "A String", # This is always `sql#locationPreference`.
      "secondaryZone": "A String", # The preferred Compute Engine zone for the secondary/failover (for example: us-central1-a, us-central1-b, etc.). To disable this field, set it to 'no_secondary_zone'.
      "zone": "A String", # The preferred Compute Engine zone (for example: us-central1-a, us-central1-b, etc.). WARNING: Changing this might restart the instance.
    },
    "maintenanceWindow": { # Maintenance window. This specifies when a Cloud SQL instance is restarted for system maintenance purposes. # The maintenance window for this instance. This specifies when the instance can be restarted for maintenance purposes.
      "day": 42, # Day of week - `MONDAY`, `TUESDAY`, `WEDNESDAY`, `THURSDAY`, `FRIDAY`, `SATURDAY`, or `SUNDAY`. Specify in the UTC time zone. Returned in output as an integer, 1 to 7, where `1` equals Monday.
      "hour": 42, # Hour of day - 0 to 23. Specify in the UTC time zone.
      "kind": "A String", # This is always `sql#maintenanceWindow`.
      "updateTrack": "A String", # Maintenance timing settings: `canary`, `stable`, or `week5`. For more information, see [About maintenance on Cloud SQL instances](https://cloud.google.com/sql/docs/mysql/maintenance).
    },
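The `maintenanceWindow` object above takes `day` as an integer 1-7 (1 = Monday) even though the documentation names the days; a small sketch of building it from a day name, under that interpretation:

```python
# Sketch: build a maintenanceWindow dict. Per the schema above, `day` is
# returned as an integer 1-7 where 1 equals Monday, and `hour` is 0-23 UTC.
DAYS = ["MONDAY", "TUESDAY", "WEDNESDAY", "THURSDAY", "FRIDAY", "SATURDAY", "SUNDAY"]

def build_maintenance_window(day_name, hour_utc, update_track="stable"):
    if day_name not in DAYS:
        raise ValueError(f"unknown day: {day_name}")
    if not 0 <= hour_utc <= 23:
        raise ValueError("hour must be 0-23 (UTC)")
    return {
        "kind": "sql#maintenanceWindow",
        "day": DAYS.index(day_name) + 1,  # 1 = Monday ... 7 = Sunday
        "hour": hour_utc,
        "updateTrack": update_track,      # `canary`, `stable`, or `week5`
    }

mw = build_maintenance_window("SATURDAY", 3)
```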
    "passwordValidationPolicy": { # Database instance local user password validation policy # The local user password validation policy of the instance.
      "complexity": "A String", # The complexity of the password.
      "disallowCompromisedCredentials": True or False, # This field is deprecated and will be removed in a future version of the API.
      "disallowUsernameSubstring": True or False, # Disallow username as a part of the password.
      "enablePasswordPolicy": True or False, # Whether the password policy is enabled or not.
      "minLength": 42, # Minimum number of characters allowed.
      "passwordChangeInterval": "A String", # Minimum interval after which the password can be changed. This flag is only supported for PostgreSQL.
      "reuseInterval": 42, # Number of previous passwords that cannot be reused.
    },
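To make the semantics of two of the policy fields above concrete, a client-side pre-check mirroring `minLength` and `disallowUsernameSubstring`. This is only an illustrative sketch; the server-side policy remains authoritative:

```python
# Sketch: pre-validate a candidate password against a passwordValidationPolicy
# dict shaped like the schema above. Covers only two of the fields.
def password_ok(policy, username, password):
    if not policy.get("enablePasswordPolicy"):
        return True
    if len(password) < policy.get("minLength", 0):
        return False
    if policy.get("disallowUsernameSubstring") and username.lower() in password.lower():
        return False
    return True

policy = {"enablePasswordPolicy": True, "minLength": 12, "disallowUsernameSubstring": True}
```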
    "pricingPlan": "A String", # The pricing plan for this instance. This can be either `PER_USE` or `PACKAGE`. Only `PER_USE` is supported for Second Generation instances.
    "replicationType": "A String", # The type of replication this instance uses. This can be either `ASYNCHRONOUS` or `SYNCHRONOUS`. (Deprecated) This property was only applicable to First Generation instances.
    "settingsVersion": "A String", # The version of instance settings. This is a required field for update method to make sure concurrent updates are handled properly. During update, use the most recent settingsVersion value for this instance and do not try to update this value.
    "sqlServerAuditConfig": { # SQL Server specific audit configuration. # SQL Server specific audit configuration.
      "bucket": "A String", # The name of the destination bucket (e.g., gs://mybucket).
      "kind": "A String", # This is always `sql#sqlServerAuditConfig`.
      "retentionInterval": "A String", # How long to keep generated audit files.
      "uploadInterval": "A String", # How often to upload generated audit files.
    },
    "storageAutoResize": True or False, # Configuration to increase storage size automatically. The default value is true.
    "storageAutoResizeLimit": "A String", # The maximum size to which storage capacity can be automatically increased. The default value is 0, which specifies that there is no limit.
    "tier": "A String", # The tier (or machine type) for this instance, for example `db-custom-1-3840`. WARNING: Changing this restarts the instance.
    "timeZone": "A String", # Server timezone, relevant only for Cloud SQL for SQL Server.
    "userLabels": { # User-provided labels, represented as a dictionary where each label is a single key value pair.
      "a_key": "A String",
    },
  },
  "sqlNetworkArchitecture": "A String", # The SQL network architecture for the instance.
  "state": "A String", # The current serving state of the Cloud SQL instance.
  "suspensionReason": [ # If the instance state is SUSPENDED, the reason for the suspension.
    "A String",
  ],
  "switchTransactionLogsToCloudStorageEnabled": True or False, # Input only. Whether Cloud SQL is enabled to switch storing point-in-time recovery log files from a data disk to Cloud Storage.
  "upgradableDatabaseVersions": [ # Output only. All database versions that are available for upgrade.
    { # An available database version. It can be a major or a minor version.
      "displayName": "A String", # The database version's display name.
      "majorVersion": "A String", # The version's major version name.
      "name": "A String", # The database version name. For MySQL 8.0, this string provides the database major and minor version.
    },
  ],
  "writeEndpoint": "A String", # Output only. The dns name of the primary instance in a replication group.
}

  x__xgafv: string, V1 error format.
    Allowed values
      1 - v1 error format
      2 - v2 error format

Returns:
  An object of the form:

    { # An Operation resource. For successful operations that return an Operation resource, only the fields relevant to the operation are populated in the resource.
  "acquireSsrsLeaseContext": { # Acquire SSRS lease context. # The context for acquire SSRS lease operation, if applicable.
    "duration": "A String", # Lease duration needed for the SSRS setup.
    "reportDatabase": "A String", # The report database to be used for the SSRS setup.
    "serviceLogin": "A String", # The username to be used as the service login to connect to the report database for SSRS setup.
    "setupLogin": "A String", # The username to be used as the setup login to connect to the database server for SSRS setup.
  },
  "apiWarning": { # An Admin API warning message. # An Admin API warning message.
    "code": "A String", # Code to uniquely identify the warning type.
    "message": "A String", # The warning message.
    "region": "A String", # The region name for REGION_UNREACHABLE warning.
  },
  "backupContext": { # Backup context. # The context for backup operation, if applicable.
    "backupId": "A String", # The identifier of the backup.
    "kind": "A String", # This is always `sql#backupContext`.
  },
  "endTime": "A String", # The time this operation finished in UTC timezone in [RFC 3339](https://tools.ietf.org/html/rfc3339) format, for example `2012-11-15T16:19:00.094Z`.
  "error": { # Database instance operation errors list wrapper. # If errors occurred during processing of this operation, this field will be populated.
    "errors": [ # The list of errors encountered while processing this operation.
      { # Database instance operation error.
        "code": "A String", # Identifies the specific error that occurred.
        "kind": "A String", # This is always `sql#operationError`.
        "message": "A String", # Additional information about the error encountered.
      },
    ],
    "kind": "A String", # This is always `sql#operationErrors`.
  },
  "exportContext": { # Database instance export context. # The context for export operation, if applicable.
    "bakExportOptions": { # Options for exporting BAK files (SQL Server-only)
      "bakType": "A String", # Type of this BAK file export: `FULL` or `DIFF`. SQL Server only.
      "copyOnly": True or False, # Deprecated: `copy_only` is deprecated. Use `differential_base` instead.
      "differentialBase": True or False, # Whether or not the backup can be used as a differential base. A `copy_only` backup cannot serve as a differential base.
      "exportLogEndTime": "A String", # Optional. The end timestamp when transaction log will be included in the export operation. [RFC 3339](https://tools.ietf.org/html/rfc3339) format (for example, `2023-10-01T16:19:00.094`) in UTC. When omitted, all available logs until current time will be included. Only applied to Cloud SQL for SQL Server.
      "exportLogStartTime": "A String", # Optional. The begin timestamp when transaction log will be included in the export operation. [RFC 3339](https://tools.ietf.org/html/rfc3339) format (for example, `2023-10-01T16:19:00.094`) in UTC. When omitted, all available logs from the beginning of retention period will be included. Only applied to Cloud SQL for SQL Server.
      "stripeCount": 42, # Option for specifying how many stripes to use for the export. If blank, and the value of the striped field is true, the number of stripes is automatically chosen.
      "striped": True or False, # Whether or not the export should be striped.
    },
    "csvExportOptions": { # Options for exporting data as CSV. `MySQL` and `PostgreSQL` instances only.
      "escapeCharacter": "A String", # Specifies the character that should appear before a data character that needs to be escaped.
      "fieldsTerminatedBy": "A String", # Specifies the character that separates columns within each row (line) of the file.
      "linesTerminatedBy": "A String", # This is used to separate lines. If a line does not contain all fields, the rest of the columns are set to their default values.
      "quoteCharacter": "A String", # Specifies the quoting character to be used when a data value is quoted.
      "selectQuery": "A String", # The select query used to extract the data.
    },
    "databases": [ # Databases to be exported. `MySQL instances:` If `fileType` is `SQL` and no database is specified, all databases are exported, except for the `mysql` system database. If `fileType` is `CSV`, you can specify one database, either by using this property or by using the `csvExportOptions.selectQuery` property, which takes precedence over this property. `PostgreSQL instances:` You must specify one database to be exported. If `fileType` is `CSV`, this database must match the one specified in the `csvExportOptions.selectQuery` property. `SQL Server instances:` You must specify one database to be exported, and the `fileType` must be `BAK`.
      "A String",
    ],
    "fileType": "A String", # The file type for the specified uri.
    "kind": "A String", # This is always `sql#exportContext`.
    "offload": True or False, # Option for export offload.
    "sqlExportOptions": { # Options for exporting data as SQL statements.
      "mysqlExportOptions": { # Options for exporting from MySQL.
        "masterData": 42, # Option to include SQL statement required to set up replication. If set to `1`, the dump file includes a CHANGE MASTER TO statement with the binary log coordinates, and --set-gtid-purged is set to ON. If set to `2`, the CHANGE MASTER TO statement is written as a SQL comment and has no effect. If set to any value other than `1`, --set-gtid-purged is set to OFF.
      },
      "parallel": True or False, # Optional. Whether or not the export should be parallel.
      "postgresExportOptions": { # Options for exporting from a Cloud SQL for PostgreSQL instance.
        "clean": True or False, # Optional. Use this option to include DROP SQL statements. These statements are used to delete database objects before running the import operation.
        "ifExists": True or False, # Optional. Option to include an IF EXISTS SQL statement with each DROP statement produced by clean.
      },
      "schemaOnly": True or False, # Export only schemas.
      "tables": [ # Tables to export, or that were exported, from the specified database. If you specify tables, specify one and only one database. For PostgreSQL instances, you can specify only one table.
        "A String",
      ],
      "threads": 42, # Optional. The number of threads to use for parallel export.
    },
    "uri": "A String", # The path to the file in Google Cloud Storage where the export will be stored. The URI is in the form `gs://bucketName/fileName`. If the file already exists, the request succeeds, but the operation fails. If `fileType` is `SQL` and the filename ends with .gz, the contents are compressed.
  },
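As a sketch, here is how the `exportContext` above might be assembled for a CSV export of a single database. Bucket, object, and database names are placeholders; note the constraint documented above that a CSV export specifies exactly one database:

```python
# Sketch: build an exportContext dict for a CSV export, following the
# field descriptions above. All names below are illustrative placeholders.
def build_csv_export_context(bucket, object_name, database, select_query):
    return {
        "kind": "sql#exportContext",
        "fileType": "CSV",
        # If the object already exists, the request succeeds but the
        # operation fails (see `uri` above).
        "uri": f"gs://{bucket}/{object_name}",
        # For CSV, exactly one database may be specified.
        "databases": [database],
        "csvExportOptions": {"selectQuery": select_query},
    }

ctx = build_csv_export_context("my-bucket", "users.csv", "appdb", "SELECT * FROM users")
```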
  "importContext": { # Database instance import context. # The context for import operation, if applicable.
    "bakImportOptions": { # Import parameters specific to SQL Server .BAK files
      "bakType": "A String", # Type of the bak content, FULL or DIFF.
      "encryptionOptions": {
        "certPath": "A String", # Path to the Certificate (.cer) in Cloud Storage, in the form `gs://bucketName/fileName`. The instance must have write permissions to the bucket and read access to the file.
        "pvkPassword": "A String", # Password that encrypts the private key
        "pvkPath": "A String", # Path to the Certificate Private Key (.pvk) in Cloud Storage, in the form `gs://bucketName/fileName`. The instance must have write permissions to the bucket and read access to the file.
      },
      "noRecovery": True or False, # Whether or not the backup import will restore the database with the NORECOVERY option. Applies only to Cloud SQL for SQL Server.
      "recoveryOnly": True or False, # Whether or not the import request will just bring the database online without downloading BAK content. Only one of `no_recovery` and `recovery_only` can be true; otherwise an error is returned. Applies only to Cloud SQL for SQL Server.
      "stopAt": "A String", # Optional. The timestamp when the import should stop. This timestamp is in the [RFC 3339](https://tools.ietf.org/html/rfc3339) format (for example, `2023-10-01T16:19:00.094`). This field is equivalent to the STOPAT keyword and applies to Cloud SQL for SQL Server only.
      "stopAtMark": "A String", # Optional. The marked transaction where the import should stop. This field is equivalent to the STOPATMARK keyword and applies to Cloud SQL for SQL Server only.
      "striped": True or False, # Whether or not the backup set being restored is striped. Applies only to Cloud SQL for SQL Server.
    },
    "csvImportOptions": { # Options for importing data as CSV.
      "columns": [ # The columns to which CSV data is imported. If not specified, all columns of the database table are loaded with CSV data.
        "A String",
      ],
      "escapeCharacter": "A String", # Specifies the character that should appear before a data character that needs to be escaped.
      "fieldsTerminatedBy": "A String", # Specifies the character that separates columns within each row (line) of the file.
      "linesTerminatedBy": "A String", # This is used to separate lines. If a line does not contain all fields, the rest of the columns are set to their default values.
      "quoteCharacter": "A String", # Specifies the quoting character to be used when a data value is quoted.
      "table": "A String", # The table to which CSV data is imported.
    },
    "database": "A String", # The target database for the import. If `fileType` is `SQL`, this field is required only if the import file does not specify a database, and is overridden by any database specification in the import file. If `fileType` is `CSV`, one database must be specified.
    "fileType": "A String", # The file type for the specified uri. * `SQL`: The file contains SQL statements. * `CSV`: The file contains CSV data. * `BAK`: The file contains backup data for a SQL Server instance.
    "importUser": "A String", # The PostgreSQL user for this import operation. PostgreSQL instances only.
    "kind": "A String", # This is always `sql#importContext`.
    "sqlImportOptions": { # Optional. Options for importing data from SQL statements.
      "parallel": True or False, # Optional. Whether or not the import should be parallel.
      "postgresImportOptions": { # Optional. Options for importing from a Cloud SQL for PostgreSQL instance.
        "clean": True or False, # Optional. The --clean flag for the pg_restore utility. This flag applies only if you enabled Cloud SQL to import files in parallel.
        "ifExists": True or False, # Optional. The --if-exists flag for the pg_restore utility. This flag applies only if you enabled Cloud SQL to import files in parallel.
      },
      "threads": 42, # Optional. The number of threads to use for parallel import.
    },
    "uri": "A String", # Path to the import file in Cloud Storage, in the form `gs://bucketName/fileName`. Compressed gzip files (.gz) are supported when `fileType` is `SQL`. The instance must have write permissions to the bucket and read access to the file.
  },
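A corresponding sketch for the `importContext` above, importing a SQL dump from Cloud Storage. Per the `database` field description, the target database is only required when the dump file does not name one itself; bucket and object names are placeholders:

```python
# Sketch: build an importContext dict for a SQL import, per the schema above.
def build_sql_import_context(bucket, object_name, database=None):
    ctx = {
        "kind": "sql#importContext",
        "fileType": "SQL",
        # Gzip-compressed files (.gz) are supported when fileType is SQL.
        "uri": f"gs://{bucket}/{object_name}",
    }
    # Only needed when the dump file does not specify a database; any
    # database statement inside the file takes precedence (see above).
    if database:
        ctx["database"] = database
    return ctx

ctx = build_sql_import_context("my-bucket", "dump.sql.gz")
```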
  "insertTime": "A String", # The time this operation was enqueued in UTC timezone in [RFC 3339](https://tools.ietf.org/html/rfc3339) format, for example `2012-11-15T16:19:00.094Z`.
  "kind": "A String", # This is always `sql#operation`.
  "name": "A String", # An identifier that uniquely identifies the operation. You can use this identifier to retrieve the Operations resource that has information about the operation.
  "operationType": "A String", # The type of the operation. Valid values are: * `CREATE` * `DELETE` * `UPDATE` * `RESTART` * `IMPORT` * `EXPORT` * `BACKUP_VOLUME` * `RESTORE_VOLUME` * `CREATE_USER` * `DELETE_USER` * `CREATE_DATABASE` * `DELETE_DATABASE`
  "selfLink": "A String", # The URI of this resource.
  "startTime": "A String", # The time this operation actually started in UTC timezone in [RFC 3339](https://tools.ietf.org/html/rfc3339) format, for example `2012-11-15T16:19:00.094Z`.
  "status": "A String", # The status of an operation.
  "targetId": "A String", # Name of the database instance related to this operation.
  "targetLink": "A String",
  "targetProject": "A String", # The project ID of the target instance related to this operation.
  "user": "A String", # The email address of the user who initiated this operation.
}
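Callers typically poll the returned Operation until `status` reaches `DONE`, then check the `error.errors` list documented above. A minimal sketch; the `fetch` callable is a stand-in for an `operations().get(...).execute()` call, kept abstract here so the polling logic is self-contained:

```python
import time

# Sketch: poll an Operation resource until it completes. The terminal-state
# handling follows the `status` and `error.errors` fields documented above.
def wait_for_operation(fetch, poll_interval=0.0, max_polls=100):
    for _ in range(max_polls):
        op = fetch()  # stand-in for re-fetching the Operation by name
        if op.get("status") == "DONE":
            errors = op.get("error", {}).get("errors", [])
            if errors:
                raise RuntimeError(f"operation failed: {errors[0].get('message')}")
            return op
        time.sleep(poll_interval)
    raise TimeoutError("operation did not finish")
```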