Cloud Storage
Cloud SQL
Cloud SQL High Availability
The HA configuration, sometimes called a cluster, provides data redundancy. A Cloud SQL instance configured for HA is also called a regional instance and is located in a primary and secondary zone within the configured region. Within a regional instance, the configuration is made up of a primary instance and a standby instance. Through synchronous replication to each zone’s persistent disk, all writes made to the primary instance are also made to the standby instance. In the event of an instance or zone failure, this configuration reduces downtime, and your data continues to be available to client applications.
Note: The standby instance cannot be used for read queries. This differs from the Cloud SQL for MySQL legacy HA configuration.

Cloud Spanner

Cloud Spanner - Schema Design
When choosing a primary key, be careful not to accidentally create hotspots in your database. One cause of hotspots is having a column whose value monotonically increases as the first key part, because this results in all inserts occurring at the end of your key space. This pattern is undesirable because Cloud Spanner divides data among servers by key ranges, which means all your inserts will be directed at a single server that ends up doing all the work.
A common technique for spreading the load across multiple servers is to create a column that contains the hash of the actual unique key, then use the hash column (or the hash column and the unique key columns together) as the primary key. This pattern helps avoid hotspots, because new rows are spread more evenly across the key space.
You can also use a Universally Unique Identifier (UUID) as the primary key. Version 4 UUID is recommended, because it uses random values in the bit sequence.
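As a minimal sketch of both techniques (plain Python, not the Spanner client API; the `shard_key` helper, the shard count, and the sample key value are illustrative assumptions):

```python
import hashlib
import uuid

def shard_key(natural_key: str, num_shards: int = 16) -> str:
    """Prefix a monotonically increasing key with a hash-derived shard.

    Spreading writes across num_shards distinct prefixes lets Spanner
    split the key range across servers, instead of every insert landing
    at the end of the key space.
    """
    digest = hashlib.sha256(natural_key.encode("utf-8")).digest()
    shard = digest[0] % num_shards  # deterministic shard in [0, num_shards)
    return f"{shard:02d}-{natural_key}"

# A version 4 UUID also avoids hotspots, because its random bits place
# new keys uniformly across the key space.
row_id = str(uuid.uuid4())

print(shard_key("2024-06-01T12:00:00Z-order-1001"))
```

Either the hash prefix alone or the prefix plus the original key columns can serve as the primary key; the prefix only needs to be deterministic so the same row always maps to the same shard.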
Cloud Spanner - Secondary Indexes
In a Cloud Spanner database, Spanner automatically creates an index for each table’s primary key.
You can also create secondary indexes for other columns. Adding a secondary index on a column makes it more efficient to look up data in that column.
For example, if a lookup is done within a read-write transaction, the more efficient lookup also avoids holding locks on the entire table, which allows concurrent inserts and updates to the table for rows outside of the lookup range.
In addition to the benefits they bring to lookups, secondary indexes can also help Spanner execute scans more efficiently, enabling index scans rather than full table scans.
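To build intuition for what a secondary index buys you, here is a toy in-memory sketch (plain Python, not the Spanner API; the table and column names are hypothetical): the index maps a column value to primary keys, so a lookup fetches only the matching rows instead of scanning the whole table.

```python
from collections import defaultdict

# Toy "table": primary key -> row (hypothetical Singers table).
table = {
    1: {"SingerId": 1, "FirstName": "Marc", "LastName": "Richards"},
    2: {"SingerId": 2, "FirstName": "Catalina", "LastName": "Smith"},
    3: {"SingerId": 3, "FirstName": "Alice", "LastName": "Trentor"},
}

# Secondary index on LastName: column value -> list of primary keys.
last_name_index = defaultdict(list)
for pk, row in table.items():
    last_name_index[row["LastName"]].append(pk)

def lookup_by_last_name(last_name):
    # Index scan: only the matching primary keys are touched,
    # rather than a full table scan over every row.
    return [table[pk] for pk in last_name_index.get(last_name, [])]

print(lookup_by_last_name("Smith"))
```

In Spanner itself the same effect comes from a `CREATE INDEX` DDL statement on the column, after which the query planner can choose an index scan.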
Cloud Spanner - Transactions
A transaction in Cloud Spanner is a set of reads and writes that execute atomically at a single logical point in time across columns, rows, and tables in a database.
Cloud Spanner supports these transaction modes:
Locking read-write: the only transaction type that supports writing data; these transactions rely on pessimistic locking and, if necessary, two-phase commit.
Read-only: provides guaranteed consistency across several reads but does not allow writes; read-only transactions do not take locks.
Partitioned DML: designed for bulk updates and deletes, such as periodic cleanup or backfilling a new column.
Reads outside of transactions
Cloud Spanner allows you to determine how current the data should be when you read data by offering two types of reads:
Strong read: a read at a current timestamp that is guaranteed to see all data committed up until the start of the read.
Stale read: a read at a timestamp in the past; useful when lower read latency matters more than reading the freshest data.
Cloud Firestore
Cloud Firestore - Usage
Cloud Firestore - Data Model

Cloud Firestore - Indexes
An index is defined on a list of properties of a given entity kind, with a corresponding order (ascending or descending) for each property.
REF: https://cloud.google.com/datastore/docs/concepts/indexes
When the same property is repeated multiple times, Firestore in Datastore mode can detect exploding indexes and suggest an alternative index. However, in all other circumstances, a Datastore mode database will generate an exploding index. In this case, you can circumvent the exploding index by manually configuring an index in your index configuration file:
https://cloud.google.com/datastore/docs/concepts/indexes#index_limits
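A manual index definition goes in the database's index configuration file (index.yaml); the kind and property names below are hypothetical placeholders:

```yaml
indexes:
- kind: Task
  properties:
  - name: tags
  - name: created
    direction: desc
```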
Cloud Firestore - Managed Exports
With the managed export and import service, you can recover from accidental deletion of data and export data for offline processing. You can export all entities or just specific kinds of entities. Likewise, you can import all data from an export or only specific kinds.
BigQuery supports loading data from Datastore exports created using the Datastore managed import and export service. You can use the managed import and export service to export Datastore entities into a Cloud Storage bucket. You can then load the export into BigQuery as a table.
To run exports on a schedule, we recommend using Cloud Functions and Cloud Scheduler. Create a Cloud Function that initiates exports and use Cloud Scheduler to run your function.
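As a sketch of that pattern, the function below builds the JSON body for the Datastore Admin API export call (`projects:export`); the project ID, bucket, and kinds are hypothetical placeholders, and the authorized POST itself is left as a comment since it requires credentials:

```python
# Sketch of a Cloud Function body that initiates a managed export.
# PROJECT_ID, the bucket, and the kinds are hypothetical placeholders.

PROJECT_ID = "my-project"
EXPORT_BUCKET = "gs://my-project-datastore-exports"

def build_export_request(kinds=None, namespace_ids=None):
    """Build the JSON body for the Datastore Admin projects:export call."""
    body = {"outputUrlPrefix": EXPORT_BUCKET}
    entity_filter = {}
    if kinds:
        entity_filter["kinds"] = kinds
    if namespace_ids:
        entity_filter["namespaceIds"] = namespace_ids
    if entity_filter:  # omit the filter entirely to export all entities
        body["entityFilter"] = entity_filter
    return body

# In the real function you would POST this body, with OAuth credentials,
# to https://datastore.googleapis.com/v1/projects/{PROJECT_ID}:export
# and have Cloud Scheduler invoke the function on a cron schedule.
print(build_export_request(kinds=["Task"]))
```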
REF: https://cloud.google.com/datastore/docs/export-import-entities
Cloud Memorystore
Comparing Storage Options

Storage Transfer Service
Google Cloud offers several options for transferring data into Cloud Storage:
Storage Transfer Service: Moving large amounts of data is seldom as straightforward as issuing a single command. You have to deal with issues such as scheduling periodic data transfers, synchronizing files between source and sink, or moving files selectively based on filters. Storage Transfer Service provides a robust mechanism to accomplish these tasks.
gsutil: For one-time or manually initiated transfers, you might consider using gsutil, an open source command-line tool that is available for Windows, Linux, and macOS. It supports multi-threaded and multi-process transfers, parallel composite uploads, retries, and resumability.
Transfer Appliance: Depending on your network bandwidth, if you want to migrate large volumes of data to the cloud for analysis, you might find it less time consuming to perform the migration offline by using the Transfer Appliance.
