Amazon S3 (Amazon Simple Storage Service) allows you to store and retrieve any amount of data, at any time, from anywhere on the web. It gives any developer access to the same highly scalable, reliable, fast, and inexpensive data storage infrastructure that Amazon uses to run its own global network of websites.
See the Channel Catalog to see which Plans have access to this channel.
Note: The AWS channel is only available if you are on the Enterprise Plan.
Visit https://aws.amazon.com/ to learn more.
How to Connect
- On the My pipelines page, select Create Pipeline.
- Search for or select a step, and then select it to add it to the pipeline.
When you add a step to a pipeline, it is added to the canvas of the pipeline designer.
- Expand the Connection section of the step, and add the required information.
For more information about connections, see How to connect to a channel.
Connect to Amazon S3
- Expand Amazon S3 in the list of channels on the right side of the page and click Connect to Amazon S3.
- In the pop-up window, click Connect to Amazon S3.
- You will be asked to enter your AWS Access Key ID, AWS Server Secret Key, and AWS Region as required fields; then click Sign in.
To get your AWS Access Key ID and AWS Server Secret Key:
- Go to your Amazon profile. On the upper right side of the page, click your username dropdown and then select My Security Credentials.
- Select Access keys (access key ID and secret access key).
- To get your Server Public Key, copy the Access Key ID.
- To get your Server Secret Key, click Create New Access Key and follow the instructions.
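Once you have the key pair, you can optionally verify it (and your region code) before entering it in Pipelines. This is a minimal sketch using the boto3 SDK, which is separate from Pipelines; the key values shown are placeholders.

```python
# Optional: verify an AWS key pair and region before using them in Pipelines.
# Requires the boto3 SDK (pip install boto3), which is separate from Pipelines.
import boto3

s3 = boto3.client(
    "s3",
    aws_access_key_id="AKIA...",   # placeholder: your AWS Access Key ID
    aws_secret_access_key="...",   # placeholder: your AWS Secret Access Key
    region_name="us-east-1",       # placeholder: the region code you will use
)

# If the credentials are valid, this prints the buckets the key can access.
for bucket in s3.list_buckets()["Buckets"]:
    print(bucket["Name"])
```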
Regions
A region isn’t specified on the key itself; instead, you specify it through the connection. To find the region codes, click Global in the top left of the AWS console, and then enter your chosen code in the Pipelines connection credentials.
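If you are unsure which code to use, the S3 region codes can also be listed programmatically. A small sketch, again assuming the boto3 SDK:

```python
import boto3

# Prints the region codes S3 is available in (us-east-1, eu-west-2, and so on);
# enter one of these codes in the Pipelines connection.
for region in boto3.session.Session().get_available_regions("s3"):
    print(region)
```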
For more information about regions, see Regions and Zones.
How to reconnect
You may need to reconnect your account to a channel. Reasons include (but are not limited to):
- If you need to connect a different account.
- Authorization updates, such as a changed password.
- Editing the access rights that Pipelines has to the channel.
To reconnect:
- Select a pipeline that already has this channel in it.
- Open a step that contains this channel.
- Under Account, select Connect (or Reconnect) and follow the process above in How to Connect.
Steps
The steps you can use with Amazon S3 fall into one category: Objects.

| Type | Name | Description |
| --- | --- | --- |
| Action | Upload an Object | Uploads an object into an S3 bucket. The limit for an object is 100 MB. |
| Action | Delete an Object | Deletes the selected object. An object look-up should be done in the previous step. |
| Action | Look Up an Object | Returns a single object for the selected account. It is used when you wish to download or transfer a file. Look Up an Object searches for a single object given the name of the bucket containing it and the object’s name as a key. |
| Query | Search Objects | Searches the selected account for files and returns a list. Search Objects accepts many parameters: id, key, type, bucket, updated at, size, content type, browser url, file transfer handle, or (advanced) expression handle. The query returns a list of objects that satisfy the search conditions. |
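To make the table above concrete, here is a rough boto3 equivalent of each step. Pipelines performs these operations through its own connector, so this is only an illustrative sketch; the bucket and key names are placeholders.

```python
import boto3

s3 = boto3.client("s3")  # assumes credentials are configured as shown earlier

# Upload an Object: put a local file into a bucket (Pipelines caps this at 100 MB).
s3.upload_file("report.csv", "my-bucket", "report.csv")

# Look Up an Object: fetch a single object by bucket and key.
obj = s3.get_object(Bucket="my-bucket", Key="report.csv")

# Search Objects: list objects and filter, for example by key prefix.
listing = s3.list_objects_v2(Bucket="my-bucket", Prefix="report")
for item in listing.get("Contents", []):
    print(item["Key"], item["Size"])

# Delete an Object: remove the object found by the look-up.
s3.delete_object(Bucket="my-bucket", Key="report.csv")
```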
Limits
- Currently, we do not support working with buckets from different regions.
- Individual Amazon S3 objects can range in size from a minimum of 0 bytes to a maximum of 100 MB, which is also the largest object that can be uploaded; a simple pre-upload size check is sketched below.
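If you stage files with a script before a pipeline picks them up, you can check the 100 MB ceiling ahead of time. A minimal sketch with a placeholder path:

```python
import os

MAX_OBJECT_BYTES = 100 * 1024 * 1024  # the 100 MB per-object limit noted above

path = "logdata.csv"  # placeholder: the file you plan to upload
size = os.path.getsize(path)
if size > MAX_OBJECT_BYTES:
    raise ValueError(f"{path} is {size} bytes, over the 100 MB object limit")
```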
Use Cases
Quickbase file transfer to Amazon S3 Bucket
It would be convenient, every time we create a record containing a file, to transfer that file to our Amazon S3 bucket. Let’s create a pipeline that checks whether the created record contains a file. If the file exists, it is transferred to our Amazon S3 bucket; otherwise, the pipeline does nothing. In the following example, we show you step by step how to create this pipeline.
- Drag and drop the Record Created trigger from the Quickbase channel and attach it to the pipeline.
  - Account - from your existing Quickbase profile, select the auth token that is connected to the Quickbase tables you want to work with.
  - Table - select an existing table from your Quickbase profile whose records contain files.
  - Fields for Subsequent Steps - select the table fields.
- Select Insert a condition. After this step, you should have these fields configured.
- Click Add conditions → Record, select File → File transfer handle, and set the condition to is set.
- Connect to your Amazon S3 account. Type your account ID under the “Account” heading, and then type the name of the bucket you want to use under the “Bucket” heading. Then drag the “File transfer handle” and “File name” bubbles under the “URL” and “File name” headings, respectively.
- Leave the Else clause empty.
- After triggering the pipeline, check your Activity log for the run.
When you create a record containing a file, the expected result is that the file is transferred to the Amazon S3 bucket.
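For comparison, the same transfer can be sketched outside Pipelines: fetch the record’s file over HTTPS and put it into the bucket. This assumes the requests and boto3 libraries, and a placeholder URL, since Pipelines resolves its “File transfer handle” internally.

```python
import boto3
import requests

# Placeholder: an HTTPS URL that serves the record's attachment. Pipelines'
# "File transfer handle" is internal, so a plain URL stands in for it here.
file_url = "https://example.com/record-attachment.pdf"

response = requests.get(file_url, timeout=30)
response.raise_for_status()

s3 = boto3.client("s3")  # assumes credentials are configured
s3.put_object(Bucket="my-bucket", Key="record-attachment.pdf", Body=response.content)
```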
Transfer CSV file from AWS S3 to Quickbase table
Let’s create a pipeline that gets a CSV file from AWS S3 and uploads it into a Quickbase table. In the following example, we show you step by step how to create this pipeline. The test.csv file contains three columns: firstname, lastname, and email. This could be any kind of file (Excel, etc.), but in this case it is a CSV file.
- Connect to your Amazon S3 account. Drag and drop the Look Up an Object action from the Amazon S3 channel and attach it to the pipeline. Choose your account ID under the “Account” heading, and then select the bucket you want to use under the “Bucket” heading. Under the “Query” heading, select ‘Key’ and ‘equals’, and then fill in the name of the CSV file you would like to transfer to a Quickbase table. Do not forget to include the file extension when specifying the name.
- Connect to your Quickbase account. Drag and drop the Import with CSV action from the Quickbase channel and attach it to the pipeline. Under the “Account” and “Table“ headings, select your Quickbase account and the target table where the data will be imported. Under “Merge Field”, select “Record ID#“. Then drag the “File transfer handle” bubble under the “CSV URL” heading. Select the appropriate fields in the Quickbase table to match the columns in the test.csv file in the “Field to map to column“ sections.
- After triggering the pipeline, check your Activity log for the run.
When you run the pipeline, the expected result is that the information from the CSV file is transferred to the selected Quickbase table.
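If you want to inspect the CSV contents before (or after) the import, you can read the object directly from S3. A sketch assuming boto3 and placeholder bucket and key names:

```python
import csv
import io

import boto3

s3 = boto3.client("s3")
obj = s3.get_object(Bucket="my-bucket", Key="test.csv")  # placeholder names

# test.csv has firstname, lastname, and email columns, as described above.
reader = csv.DictReader(io.StringIO(obj["Body"].read().decode("utf-8")))
for row in reader:
    print(row["firstname"], row["lastname"], row["email"])
```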
Dynamically select the file using Jinja
Let’s say you are building an AWS S3 pipeline and want to dynamically select the file whose name suffix corresponds to today's date (YYYYMMDD), using Jinja. In this example, we have three CSV files in the bucket and want our scheduled pipeline to find the file that matches today's date:
- logdata_20211109.csv
- logdata_20211108.csv
- logdata_20211107.csv
We use this Jinja expression in our query field:
logdata_{{time.today|date_ymd('')}}.csv
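For reference, {{time.today|date_ymd('')}} renders today's date as YYYYMMDD with no separator. The equivalent computation in plain Python (the Jinja filter itself is provided by Pipelines):

```python
from datetime import date

# Builds the same key as logdata_{{time.today|date_ymd('')}}.csv:
# today's date formatted as YYYYMMDD with no separator.
key = f"logdata_{date.today():%Y%m%d}.csv"
print(key)  # e.g. logdata_20211109.csv
```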
Log data from S3 to a Quickbase app
The following video walks through moving S3 log data to an existing Quickbase app: