Your retail company wants to predict customer churn using historical purchase data stored in BigQuery. The dataset includes customer demographics, purchase history, and a label indicating whether the customer churned. You want to build a machine learning model to identify customers at risk of churning. You need to create and train a logistic regression model that predicts customer churn, using the customer_data table with the churned column as the target label. Which BigQuery ML query should you use?
A. [query not reproduced in source]
B. [query not reproduced in source]
C. [query not reproduced in source]
D. [query not reproduced in source]
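The four query options are not reproduced above. For reference, BigQuery ML trains a logistic regression model with a CREATE MODEL statement that sets model_type to 'logistic_reg' and names the label column in input_label_cols. A sketch of what the correct option should resemble (the dataset and model names here are placeholders, not from the source):

```sql
-- Train a logistic regression model on the customer_data table.
-- 'churned' is declared as the label; the remaining selected
-- columns are used as features.
CREATE OR REPLACE MODEL `mydataset.churn_model`
OPTIONS (
  model_type = 'logistic_reg',
  input_label_cols = ['churned']
) AS
SELECT *
FROM `mydataset.customer_data`;
```

Once trained, the model scores customers with ML.PREDICT, e.g. `SELECT * FROM ML.PREDICT(MODEL \`mydataset.churn_model\`, TABLE \`mydataset.customer_data\`)`.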
Another team in your organization is requesting access to a BigQuery dataset. You need to share the dataset with the team while minimizing the risk of unauthorized copying of data. You also want to create a reusable framework in case you need to share this data with other teams in the future. What should you do?
A.
Create authorized views in the team’s Google Cloud project that is only accessible by the team.
B.
Create a private exchange using Analytics Hub with data egress restriction, and grant access to the team members.
C.
Enable domain restricted sharing on the project. Grant the team members the BigQuery Data Viewer IAM role on the dataset.
D.
Export the dataset to a Cloud Storage bucket in the team’s Google Cloud project that is only accessible by the team.
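For context on the mechanisms in these options: an authorized view (option A) is an ordinary view placed in a dataset the other team can read, which is then authorized on the source dataset so the team never gets direct table access, while Analytics Hub (option B) packages datasets into a reusable exchange with listings and optional egress restrictions. A minimal sketch of the view half of option A (project, dataset, and column names are placeholders; the authorization step itself is performed in the console, with bq, or via the API rather than in SQL):

```sql
-- View in a dataset the requesting team can read; the view is
-- later authorized on source_dataset so it can query the table
-- on the team's behalf.
CREATE VIEW `shared_project.team_views.customer_summary` AS
SELECT customer_id, region, total_purchases
FROM `source_project.source_dataset.customer_data`;
```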
Your company has developed a website that allows users to upload and share video files. These files are most frequently accessed and shared when they are initially uploaded. Over time, the files are accessed and shared less frequently, although some old video files may remain very popular.
You need to design a storage system that is simple and cost-effective. What should you do?
A.
Create a single-region bucket with Autoclass enabled.
B.
Create a single-region bucket. Configure a Cloud Scheduler job that runs every 24 hours and changes the storage class based on upload date.
C.
Create a single-region bucket with custom Object Lifecycle Management policies based on upload date.
D.
Create a single-region bucket with Archive as the default storage class.
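For context on options B and C: an Object Lifecycle Management policy is a JSON configuration attached to the bucket, applied once, with no scheduled job needed. A sketch with illustrative age thresholds (the rule values are assumptions, not from the source):

```json
{
  "lifecycle": {
    "rule": [
      {
        "action": {"type": "SetStorageClass", "storageClass": "NEARLINE"},
        "condition": {"age": 30}
      },
      {
        "action": {"type": "SetStorageClass", "storageClass": "COLDLINE"},
        "condition": {"age": 365}
      }
    ]
  }
}
```

Such a policy can be applied with `gsutil lifecycle set policy.json gs://BUCKET`. Note that age-based rules only move objects to colder classes over time; they never promote a still-popular old file back to a hot class, which is the access-pattern-driven behavior Autoclass (option A) provides automatically.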
You recently inherited a task for managing Dataflow streaming pipelines in your organization and noticed that you had not been granted the necessary access. You need to request a Google-provided IAM role so you can restart the pipelines. You need to follow the principle of least privilege. What should you do?
A.
Request the Dataflow Developer role.
B.
Request the Dataflow Viewer role.
C.
Request the Dataflow Worker role.
D.
Request the Dataflow Admin role.
You need to create a new data pipeline. You want a serverless solution that meets the following requirements:
• Data is streamed from Pub/Sub and is processed in real-time.
• Data is transformed before being stored.
• Data is stored in a location that will allow it to be analyzed with SQL using Looker.

Which Google Cloud services should you recommend for the pipeline?
A.
1. Dataproc Serverless 2. Bigtable
B.
1. Cloud Composer 2. Cloud SQL for MySQL
C.
1. BigQuery 2. Analytics Hub
D.
1. Dataflow 2. BigQuery