How to Connect
Amazon S3 (Amazon Simple Storage Service) allows you to store and retrieve any amount of data, at any time, from anywhere on the web. It gives any developer access to the same highly scalable, reliable, fast, and inexpensive data storage infrastructure that Amazon uses to run its own global network of websites.
Note: The AWS channel is only available if you are on the Enterprise Plan.
Visit https://aws.amazon.com/ to learn more.
- On the My pipelines page, click Create a pipeline. Enter the name, description, and tag fields and click the Create Pipelines button. The pipeline wizard displays:
- From the wizard, you choose the connections for the first steps of your pipeline. When you choose a type, you add the channels and steps for your first steps; you can always add more steps later. A Triggered pipeline starts in real time in response to a specific event. A Scheduled pipeline starts according to a schedule. A Manual pipeline starts only when you run it manually. Once you've completed the wizard, you'll be prompted for connection details, if necessary. If you are an experienced builder, click the Start from scratch button, then on the right side of the page choose All to list all available channels.
- Expand Amazon S3 in the list of channels on the right side of the page and click Connect to Amazon S3.
- In the pop-up window, click Connect to Amazon S3.
- You will be asked to enter your AWS Access Key ID, AWS Secret Access Key, and AWS Region as required fields, then click Sign in.
To get your AWS Access Key ID and AWS Secret Access Key:
Go to your Amazon profile. On the upper right side of the page, click your username dropdown and then select My Security Credentials.
Select Access keys (access key ID and secret access key).
To get your Access Key ID, copy the value shown in the Access Key ID column.
To get your Secret Access Key, click Create New Access Key and follow the instructions.
A region isn't specified on a key itself; instead, you specify it through the connection. To find the region codes, click Global in the top left of the AWS Console to list the available regions, then enter your chosen region code in the Pipelines credentials:
For more information about regions, see Regions and Zones.
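The access key ID, secret access key, and region you enter here are what Pipelines uses to sign its requests to S3 (AWS Signature Version 4). As an illustration of why all three values matter, here is a minimal sketch of SigV4 signing-key derivation using only the Python standard library. The secret key and date below are the sample values from AWS's own SigV4 documentation, not real credentials:

```python
import hashlib
import hmac


def _hmac_sha256(key: bytes, msg: str) -> bytes:
    """One HMAC-SHA256 round, as used at each step of key derivation."""
    return hmac.new(key, msg.encode("utf-8"), hashlib.sha256).digest()


def derive_signing_key(secret_key: str, date: str, region: str, service: str) -> bytes:
    """Derive the AWS SigV4 signing key: date -> region -> service -> aws4_request."""
    k_date = _hmac_sha256(("AWS4" + secret_key).encode("utf-8"), date)
    k_region = _hmac_sha256(k_date, region)
    k_service = _hmac_sha256(k_region, service)
    return _hmac_sha256(k_service, "aws4_request")


# Sample values from the AWS SigV4 documentation (not real credentials).
key = derive_signing_key(
    "wJalrXUtnFEMI/K7MDENG+bPxRfiCYEXAMPLEKEY", "20150830", "us-east-1", "iam"
)
print(key.hex())
```

Note that the region is baked into the signing key itself, which is one reason a connection is tied to a single region.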
How to reconnect the Amazon S3 channel
You may need to reconnect your account to a channel. Reasons include (but are not limited to):
Connecting a different account.
Authorization updates, such as a changed password.
Editing the access rights that Pipelines has to the channel.
- Select a pipeline that already has Amazon S3 in it.
- Open a step containing Amazon S3.
- Under Account, select Connect (or reconnect) and follow the process above in How to connect.
The steps you can use with Amazon S3 fall into one category: Objects
| Action | Description |
| --- | --- |
| Upload an Object | Uploads an object into an S3 bucket. |
| Delete an Object | Deletes the selected object. The object should be looked up in a previous step. |
| Look Up an Object | Returns a single object from the selected account. It is used when you wish to download or transfer the file. Look Up an Object searches for a single object given the name of the bucket containing it and the object's name as a key. |
| Search for Objects | Searches the selected account for files and returns a list. The search accepts many parameters: id, key, type, bucket, updated at, size, content type, browser url, file transfer handle, or (advanced) expression handle. The query returns a list of objects that satisfy the search conditions. |
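To make the four actions concrete, here is a toy in-memory sketch in plain Python. No AWS calls are made; the class and method names are illustrative only, not part of Pipelines or the S3 API:

```python
class ToyBucket:
    """In-memory stand-in for an S3 bucket, keyed by object name."""

    def __init__(self, name):
        self.name = name
        self._objects = {}

    def upload_object(self, key, data):
        """Upload an Object: store data under the given key."""
        self._objects[key] = data

    def look_up_object(self, key):
        """Look Up an Object: return the single object for a key, if present."""
        return self._objects.get(key)

    def delete_object(self, key):
        """Delete an Object: remove it (look it up in a previous step first)."""
        self._objects.pop(key, None)

    def search_objects(self, suffix=""):
        """Search for Objects: return keys satisfying a condition (here, a suffix)."""
        return [k for k in self._objects if k.endswith(suffix)]


bucket = ToyBucket("demo-bucket")
bucket.upload_object("report.csv", b"firstname,lastname,email\n")
print(bucket.search_objects(".csv"))                    # ['report.csv']
print(bucket.look_up_object("report.csv") is not None)  # True
bucket.delete_object("report.csv")
print(bucket.search_objects(".csv"))                    # []
```

In a real pipeline, Delete an Object similarly relies on a preceding Look Up an Object step to identify the target.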
- Currently we do not support working with buckets from different regions.
- Individual Amazon S3 objects can range in size from a minimum of 0 bytes to a maximum of 5 TB. The largest object that can be uploaded in a single PUT is 5 GB.
- Bucket ownership is not transferable to another account.
- When you create a bucket, you choose its name and the AWS Region to create it in. After you create a bucket, you can't change its name or Region.
- By default, you can create up to 100 buckets in each of your AWS accounts. If you need additional buckets, you can increase your account bucket limit to a maximum of 1,000 buckets by submitting a service limit increase.
- There is no limit to the number of objects that you can store in a bucket.
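The size limits above can be turned into a quick pre-flight check before uploading. A minimal sketch (the function name is ours; 5 GB is the single-PUT limit and 5 TB the overall object limit, both approximated here in binary units):

```python
MAX_SINGLE_PUT = 5 * 1024**3   # ~5 GB: largest object uploadable in one PUT
MAX_OBJECT_SIZE = 5 * 1024**4  # ~5 TB: largest S3 object overall


def upload_strategy(size_bytes):
    """Decide how an object of the given size can be uploaded to S3."""
    if size_bytes < 0 or size_bytes > MAX_OBJECT_SIZE:
        return "rejected"        # outside the 0 byte to 5 TB range
    if size_bytes <= MAX_SINGLE_PUT:
        return "single PUT"
    return "multipart upload"    # over 5 GB must be split into parts


print(upload_strategy(100 * 1024**2))  # small file -> single PUT
print(upload_strategy(6 * 1024**3))    # over 5 GB -> multipart upload
```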
Quickbase file transfer to Amazon S3 Bucket
It would be convenient, every time we create a record containing a file, to transfer that file to our Amazon S3 bucket. Let's create a pipeline that checks whether the created record contains a file. If the file exists, it is transferred to our Amazon S3 bucket; otherwise the pipeline does nothing. The following example shows, step by step, how to create this pipeline.
Drag and drop the Record Created trigger from the Quickbase channel and attach it to the pipeline.
Account - select the auth token from your existing Quickbase profile that is connected to the Quickbase tables you want to work with
Table - select an existing table from your Quickbase profile whose records contain files
Fields for Subsequent Steps - select the table fields
Select Insert a condition
After this step you should have these fields:
Click on Add conditions → Record and select File->File transfer handle
Set the condition to is set
Connect to your Amazon S3 account. Type your account ID under the Account heading and enter the name of the bucket you want to use under the Bucket heading. Then drag the File transfer handle and File name bubbles under the URL and File name headings, respectively.
Leave the Else clause empty
After triggering the pipeline, your Activity log should show:
When you create a record containing a file, the expected result is to transfer that file to the Amazon S3 Bucket.
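The branching in this pipeline boils down to a single check on the triggering record. A minimal sketch of that logic (the field and function names are illustrative, not the Pipelines API):

```python
def step_for_record(record):
    """Mirror the pipeline's condition: transfer only when a file handle is set."""
    # "File transfer handle" is set only when the new record contains a file.
    if record.get("file_transfer_handle"):
        return "upload to S3"  # the If branch sends the file to the bucket
    return "do nothing"        # the empty Else branch


print(step_for_record({"file_transfer_handle": "qb://files/123", "file_name": "a.pdf"}))
print(step_for_record({"file_transfer_handle": None}))
```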
Transfer CSV file from AWS S3 to Quickbase table
Let's create a pipeline that gets a CSV file from AWS S3 and uploads it into a Quickbase table. The following example shows, step by step, how to create this pipeline. The test.csv file contains three columns: firstname, lastname, and email.
Connect to your Amazon S3 account. Drag and drop the Look Up an Object action from the Amazon S3 channel and attach it to the pipeline. Choose your account ID under the Account heading and select the bucket you want to use under the Bucket heading. Under the Query heading select Key and equals, then enter the name of the CSV file you would like to transfer to a Quickbase table. Don't forget to include the file extension when specifying the name.
Connect to your Quickbase account. Drag and drop the Import with CSV Action from the Quickbase channel and attach it to the pipeline. Under the “Account” and “Table“ headings select your QB Account, and respectively the target table in QB where the data will be imported. Under the “Merge Field” select “Record ID#“. After that drag the “File transfer handle” bubble under the “CSV URL” heading.
Select the appropriate fields in the QB table to match the columns in the test.csv file in the “Field to map to column“ sections.
After triggering the pipeline, your Activity log should show:
When you run the pipeline, the expected result is to transfer the information from that csv file to the selected QB table.
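Locally, you can sanity-check that your test.csv has the columns the import step expects to map. A small sketch using Python's csv module (the sample rows below are made up for illustration):

```python
import csv
import io

# Stand-in for the contents of test.csv fetched from the bucket.
sample = io.StringIO(
    "firstname,lastname,email\n"
    "Ada,Lovelace,ada@example.com\n"
    "Alan,Turing,alan@example.com\n"
)

reader = csv.DictReader(sample)
rows = list(reader)
print(reader.fieldnames)  # the columns the Quickbase import step will map
print(len(rows))          # number of records to import
```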
Dynamically select the file using jinja
Let's say you are building an AWS S3 pipeline and want to use Jinja to dynamically select the file whose name has a suffix corresponding to today's date (YYYYMMDD). In this example, we have three CSV files in the bucket and want our scheduled pipeline to find the file that matches today's date.
We use this Jinja expression in the query field:
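Whatever the exact Jinja helpers available in your query field, the underlying logic is plain string formatting of today's date. Here is a Python sketch of the same idea; the prefix and extension are assumptions for illustration:

```python
from datetime import date


def key_for(day, prefix="report_", ext=".csv"):
    """Build the object key whose suffix is the date in YYYYMMDD form."""
    return f"{prefix}{day.strftime('%Y%m%d')}{ext}"


# For a scheduled run you would pass date.today(); a fixed date keeps this reproducible.
print(key_for(date(2024, 5, 3)))  # report_20240503.csv
```

The resulting string is what the Look Up an Object query compares against each object's key.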
Log data from S3 to a Quickbase app
The following video walks through moving S3 log data to an existing Quickbase app: