From the course: Build a No-Code ETL Pipeline with Google BigQuery
How data load will work - BigQuery Tutorial
- [Instructor] Let's look at how our pipeline will manage the data load, meaning taking our data and putting it into BigQuery. This is a general overview, so don't worry about the details. We'll go through each step in detail in the following videos. The first step is to download the dataset from Kaggle. We will be using a comprehensive stock market dataset that is updated daily. Once the data is downloaded, we will store it in Google Cloud Storage. This is a cloud storage service. It's like your computer's file system, but in the cloud. Next, we will create an empty table in BigQuery with the correct schema for our data, and this table will be set up to receive our data regularly and automatically. This is like building the house in which our data will live. Finally, we will create a data transfer configuration using BigQuery Data Transfer Service. This configuration will handle moving our data from Google Cloud Storage into BigQuery in an automated and scalable manner. And that's…
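The four steps described above can be sketched with Google Cloud's command-line tools. This is a minimal illustration, not the course's own commands: the Kaggle dataset slug, bucket name, dataset/table names, and schema are all placeholder assumptions, and the real schema would need to match the downloaded files.

```shell
# Step 1: download the dataset from Kaggle (placeholder slug).
kaggle datasets download -d some-user/stock-market-dataset --unzip

# Step 2: copy the files into Google Cloud Storage (placeholder bucket).
gsutil cp ./*.csv gs://my-stock-bucket/daily/

# Step 3: create an empty BigQuery table with a schema (placeholder
# dataset, table, and columns).
bq mk --table my_dataset.stock_prices \
  symbol:STRING,trade_date:DATE,open:FLOAT,close:FLOAT,volume:INTEGER

# Step 4: create a BigQuery Data Transfer Service configuration that
# loads new files from Cloud Storage on a recurring schedule.
bq mk --transfer_config \
  --data_source=google_cloud_storage \
  --target_dataset=my_dataset \
  --display_name="daily-stock-load" \
  --params='{"data_path_template":"gs://my-stock-bucket/daily/*.csv","destination_table_name_template":"stock_prices","file_format":"CSV","skip_leading_rows":"1"}'
```

Once the transfer config exists, the Data Transfer Service runs it automatically on its schedule, which is what makes the load step hands-off after the initial setup.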
Contents
- How data load will work (1m 40s)
- Introduction to data (4m 44s)
- What is Google Cloud Storage? (2m 42s)
- Put data in Google Cloud Storage (3m 35s)
- Create table in BigQuery (4m 35s)
- Introduction to BigQuery Data Transfer Service (1m 43s)
- How we will manage data (6m 4s)
- Use Transfer Service to ingest data (6m 40s)
- Schedule transfers with Transfer Service (3m 19s)
- Identify data transfer issues (6m 18s)
- Common issues with data transfer (5m 21s)