You can use the Bulk Record Sets feature in the Quickbase channel to transfer large numbers of records more efficiently. Batching the records results in fewer total calls, rather than one call per record.
Step runs are counted regardless of success or failure. That is, one step run is billed per record that the step processes, even if the overall operation fails. Quickbase currently cannot validate data and records before performing step runs, so if you're processing large sets of data, we recommend testing your pipeline with a smaller set first. Because Bulk Upsert is included under the Quickbase channel, it doesn't consume step runs.
Building a bulk record operation
Follow these steps to create a bulk record operation:
- Prepare Bulk Record Upsert
  - Define the schema you want to import into. This is the destination of your upsert:
    - Table - Select the table you want to use as the destination.
    - Fields - Define the fields you want to import into.
    - Merge field - Required if you are updating records.
  - You will also need a Query step, because it provides the source data for the Add a Bulk Upsert Row step.
- Add a Bulk Upsert Row
  Note: This step does not make an API call. Instead, records are temporarily stored for the next step.
  - Bulk Record - This is completed with a reference to the table you selected in the first step and the remaining fields from that table. Define the field values for the record. This step is where you map fields between your source data (the Query step) and your destination (the Prepare Bulk Record Upsert step).
- Commit Upsert
  This final step takes the temporarily stored records from Add a Bulk Upsert Row and upserts them into the table identified in the Prepare Bulk Record Upsert step.
Note: Prepare Bulk Record Upsert and Commit Upsert are intended to run only once, outside the loop where you are adding records. Commit Upsert pushes all records added during the pipeline run into the Quickbase table.
Note: The first step above identifies the destination. The Query step finds the source data, and the Prepare Bulk Record Upsert step is your destination, which is why it includes the merge field. In other words, it specifies the destination table where new records are added and selects the fields that your records will update.
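The three steps above can be sketched as a small Python model. This is not Quickbase's actual API; the class, method, and field names are illustrative, and the point is the pattern: define the destination once, buffer one row per source record inside the loop, then commit a single batched payload outside the loop.

```python
# Hypothetical sketch of the Prepare / Add Row / Commit pattern.
# All names here are illustrative, not real Quickbase Pipelines calls.

class BulkUpsert:
    def __init__(self, table, fields, merge_field=None):
        # Prepare Bulk Record Upsert: define the destination schema once.
        self.table = table
        self.fields = fields
        self.merge_field = merge_field
        self.rows = []  # records are temporarily stored here

    def add_row(self, record):
        # Add a Bulk Upsert Row: no call is made yet; the row is buffered.
        self.rows.append({f: record.get(f) for f in self.fields})

    def commit(self):
        # Commit Upsert: one batched payload instead of one call per record.
        payload = {"to": self.table, "data": list(self.rows)}
        if self.merge_field:
            payload["merge_field"] = self.merge_field
        self.rows = []  # buffer is cleared after the commit
        return payload

upsert = BulkUpsert("Customers", ["Name", "Email"], merge_field="Email")
source_records = [
    {"Name": "Ada", "Email": "ada@example.com"},
    {"Name": "Grace", "Email": "grace@example.com"},
]
for rec in source_records:
    upsert.add_row(rec)      # inside the loop: one buffered row per record
payload = upsert.commit()    # outside the loop: commit once
```

Note how the commit happens once, after the loop, which mirrors why Prepare Bulk Record Upsert and Commit Upsert belong outside the loop in a real pipeline.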
Use case examples
The following examples show how to use bulk record sets.
Importing from a CSV
If your data already exists as a CSV file, either at a URL or from a previous step (through the file transfer URL), you can import records into your Quickbase table in a single step.
To import from a CSV:
- Table - Select the table you want to use.
- Merge field - Use Merge field if you are updating records. If you don't want to update an existing record, you may leave the field blank or use a value that doesn't exist in the table.
- CSV URL - Enter the URL that contains your CSV-formatted data. This can be an unauthenticated URL, or a file transfer URL from a previous step that handles files (such as Box or Dropbox).
- Row Separator - The character used to separate your data into rows.
- First Row is List of Field Names - Select Yes if your data contains a header row.
- Header Row - This defines the structure of the CSV you are providing. If you have a header row in your CSV data, you can copy it here.
- Field to map to column * - These options display only after you define your Header Row. Use them to select the Quickbase field that each column maps to.
- To skip columns - Leave the option blank (don't map the column to a Quickbase field).
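The header-row mapping above can be sketched in a few lines of Python. The CSV text and field names are made up for illustration; the sketch shows how a header row defines the columns, how each column is mapped to a destination field, and how an unmapped column is simply skipped.

```python
import csv
import io

# Hypothetical CSV data with a header row; the "notes" column will be skipped.
csv_text = (
    "name,email,notes\n"
    "Ada,ada@example.com,x\n"
    "Grace,grace@example.com,y\n"
)

# "Field to map to column": map CSV columns to destination fields.
# Leaving "notes" out of the mapping skips that column entirely.
column_to_field = {"name": "Full Name", "email": "Email"}

# First row is the list of field names, so DictReader uses it as the header.
reader = csv.DictReader(io.StringIO(csv_text))
records = [
    {field: row[col] for col, field in column_to_field.items()}
    for row in reader
]
```

Each resulting record carries only the mapped fields, which is the same effect as leaving a column blank in the pipeline's mapping options.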
Child-table records
In this use case, a user wanted to:
- On a trigger, find a set of child-table records connected to a trigger-field value.
- From those child-table records, add new records to a child table of the trigger record.
The implementation was:
- On a trigger, set the destination table and fields to be populated
- Search for the records meeting the criteria
- For each record found, create a record in the destination table and set the field values that will be used to create the new records.
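The implementation above can be sketched as a search followed by a buffered batch. The table and field names are hypothetical; the sketch shows the shape of the logic: filter the child records by the trigger value, buffer one new record per match, and commit them as a single batch.

```python
# Hypothetical child-table data; "parent_id" stands in for the trigger-field value.
child_records = [
    {"parent_id": 7, "task": "Call"},
    {"parent_id": 7, "task": "Email"},
    {"parent_id": 9, "task": "Ship"},
]

trigger_value = 7

# Search for the records meeting the criteria.
matches = [r for r in child_records if r["parent_id"] == trigger_value]

# For each record found, buffer a new record for the destination table.
buffered = [
    {"Related Parent": trigger_value, "Task Name": r["task"]}
    for r in matches
]

# Commit once: a single batched upsert instead of one call per record.
batch = {"to": "Destination Table", "data": buffered}
```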
Employee utilization
In this use case, we determine weekly employee utilization. Employees track their time in two ways: customer-facing (actions) and internal (tasks). These are tracked in slightly different ways, and the data we use to create our weekly attestation comes from two different tables.
We use bulk upsert to merge our two tables into one (Utilization), so we can work from a single source of reporting. We scheduled a pipeline to run every hour, creating a bulk upsert from our Actions table, importing that data, and merging/updating anything that already exists. The same pipeline then creates an upsert from the Tasks table.