Purpose
The HTTP channel allows you to connect to APIs that work over HTTP. These may be APIs we don't support natively, or APIs we provide dedicated channels for that are missing specific, rarely used endpoints.
Generic fields
There is some basic information you should provide:
- Account name—Unique name to distinguish this account from all other accounts you may use with the channel.
  The unique identifier of your account is constructed from the account name, authentication type, and base URL. This lets you create different accounts for the same API but with different credentials by choosing a specific account name for each of them. If you try creating a new account using an existing name, base URL, and authentication type, it updates the existing account instead.
- Base URL—Allows the account users to connect only to endpoints derived from this base URL, so your data and credentials won't leak to any other service. The base URL should be a valid URL containing at least the scheme and host, but it may also contain a path and even query params.
  The endpoint paths you define in your steps may contain whole URLs as well. In this case, the scheme and host must be identical to the base URL's scheme and host, and the path must at least begin with the path set in the base URL. If both contain query params, they are combined, and the path's query params override the base URL's params with the same key.
  This way you can provide meaningful defaults through the base URL's query params and override them only when needed for a specific endpoint (see the sketch after this list).
- HTTP Headers—A list of key-value pairs that are converted to proper HTTP headers with each request made with the given account. Use this to provide additional information through headers if your API requires it. It is convenient to put such headers in the connection, as the headers set here are sent with every request. For security reasons, the headers set in the connection cannot be overridden by the step, so if some of your endpoints expect a different value for a header, you should set it in the step instead.
  Although rarely used, a header with an empty value is a valid construct, so you may provide only a key with an empty value and Pipelines will accept it.
  You can also specify a header with the same key multiple times. As per the standard, the values are joined together, separated by a comma, like this: Header-Key: value1, value2, value…
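For illustration, here is a minimal sketch (in Python, with hypothetical URLs) of how a base URL and a step path are combined under the rules above. It is not the actual Pipelines implementation, just the merging logic expressed in code.

```python
# Minimal sketch of the URL-merging rules described above (hypothetical URLs).
from urllib.parse import parse_qsl, urlencode, urlsplit, urlunsplit

base_url = "https://api.example.com/v2?format=json&page_size=50"
step_path = "https://api.example.com/v2/orders?page_size=100"  # a full URL is allowed

base, step = urlsplit(base_url), urlsplit(step_path)

# A full URL in a step must keep the base scheme/host and extend the base path.
assert step.scheme == base.scheme and step.netloc == base.netloc
assert step.path.startswith(base.path)

# Query params are combined; the path's params win on key collisions.
params = dict(parse_qsl(base.query))
params.update(parse_qsl(step.query))

print(urlunsplit((base.scheme, base.netloc, step.path, urlencode(params), "")))
# https://api.example.com/v2/orders?format=json&page_size=100
```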
Connection
Most APIs require authentication. In the connection dialog, you can provide the authentication type as well as the needed credentials, and Pipelines handles the authentication for you.
As with all other channels, we store your credentials in a secure way.
The channel supports the following authentication types:
- No Authentication
- Basic Authentication
- Bearer Token
- API Key
- JWT
- OAuth 2.0
No Authentication
Select this if your API doesn't require authentication. You should still create an account, providing at least the base URL for the API and, optionally, any HTTP headers your API may require.
Basic Authentication
The basic access authentication is a simple authentication method that uses a username and password.
The produced header looks like this:
Authorization: Basic <credentials>
where <credentials> is the Base64 encoding of the username and password joined by a single colon.
You only need to provide your username and password; Quickbase handles the formatting automatically.
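For illustration, a short sketch of how the <credentials> value is derived (hypothetical username and password):

```python
# Minimal sketch of building the Basic auth header value (hypothetical credentials).
import base64

username, password = "alice", "s3cret"
credentials = base64.b64encode(f"{username}:{password}".encode()).decode()

print(f"Authorization: Basic {credentials}")
# Authorization: Basic YWxpY2U6czNjcmV0
```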
Bearer Token
The bearer token authentication's header looks like this:
Authorization: Bearer <token>
where <token> is the bearer token you provide in the connection dialog, without any transformations.
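As an illustration only, a request carrying this header looks roughly like the following sketch (hypothetical token and endpoint):

```python
# Minimal sketch: the stored bearer token is attached verbatim to each request
# (hypothetical token and endpoint).
import requests

token = "my-bearer-token"  # the value entered in the connection dialog
response = requests.get(
    "https://api.example.com/v2/orders",
    headers={"Authorization": f"Bearer {token}"},  # no transformation applied
    timeout=10,
)
print(response.status_code)
```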
API Key
Use this option if your API requires a custom header for authorization, or expects the authorization key in the query parameters of the URL.
The resulting header looks like this, without any transformations:
<API Key Name>: <API Key Value>
By default, the provided name and value pair is converted to a header, but if your API requires otherwise, there is an option to attach it as a query parameter to the request URL. The API Key query param cannot be overridden by the path or by the query parameters field in the step; if you try, the pipeline returns a validation error and won't make a request to the remote endpoint.
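For illustration, a short sketch of the two placements (hypothetical key name, value, and endpoint); the header form is the default, the query-parameter form is the opt-in alternative:

```python
# Minimal sketch of the two API Key placements (hypothetical values).
import requests

api_key_name, api_key_value = "X-Api-Key", "0123456789abcdef"

# Default: the name/value pair is sent as a request header.
requests.get("https://api.example.com/v2/orders",
             headers={api_key_name: api_key_value}, timeout=10)

# Alternative: the pair is attached to the URL as a query parameter, producing
# https://api.example.com/v2/orders?X-Api-Key=0123456789abcdef
requests.get("https://api.example.com/v2/orders",
             params={api_key_name: api_key_value}, timeout=10)
```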
JWT
The JWT authentication method is similar to the Bearer token auth, but in this case Pipelines generates the JSON Web Token for you, based on the provided parameters.
The final result is a header like the following:
Authorization: Bearer <token>
where <token> is the generated JWT.
You provide parameters that allow Pipelines to generate the JWT for you, including the following (a sketch of how they fit together follows the list):
- Signing algorithm—The algorithm used to sign the JWT. We support both symmetric and asymmetric algorithms; which one you use depends on the requirements of your API. It is valid to create an unsigned JWT, so this field is optional, though not recommended.
- Signing key—When a signing algorithm is selected, this field is required. The format depends on the type of algorithm you selected above. For asymmetric algorithms, Pipelines expects a private key in PEM format; for symmetric algorithms, the key should be just a sufficiently long string.
- Additional JWT headers—Provide additional JWT headers as a JSON object. These may be headers like:
  - jku—URL referring to a resource for a set of JSON-encoded public keys, one of which corresponds to the key used to digitally sign the JWS
  - jwk—The public key that corresponds to the key used to digitally sign the JWS
  - kid—Key ID, a hint indicating which key was used to secure the JWS
  Headers like these are defined in the standards for JWT (RFC 7519: JSON Web Token), JWS (RFC 7515: JSON Web Signature), and JWE (RFC 7516: JSON Web Encryption). The additional headers may also be used to replicate some of the claims in an unencrypted form, which an application can use to determine how to process the JWT before it is decrypted. Detailed information about this usage can be found in RFC 7519: JSON Web Token (JWT).
- JWT Claims—You can provide JWT claims as a JSON object. Refer to RFC 7519: JSON Web Token (JWT) for the allowed claims, or check the requirements of your API. Keep in mind that Pipelines adds the Expiration Time (exp) and Issued At (iat) claims for you if you fill in the corresponding fields in the form.
- Use Issued At (iat) claim?—If this option is checked, Pipelines adds the iat claim to the payload for you. The token is generated prior to each request, so iat points to a time shortly before the request itself.
- Expiration time (exp) claim—Sets the expiration time of the generated token. The default is 5 minutes. If you put 0 in the field, an exp claim is not added and the token never expires (not recommended). Otherwise, you may enter the number of seconds until expiration, like 3600 for an hour, or a string like 2m / 4h / 2h30m / 1d3h / 1d3h20m.
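The sketch below (Python, using the PyJWT library, with a hypothetical key and claims) shows roughly how these fields combine into the final token; it is not the actual Pipelines implementation.

```python
# Minimal sketch of assembling a JWT from the fields above (hypothetical values).
import time
import jwt  # PyJWT

signing_algorithm = "HS256"                  # a symmetric algorithm
signing_key = "a-sufficiently-long-shared-secret"
additional_headers = {"kid": "my-key-id"}    # optional additional JWT headers
claims = {"iss": "my-pipeline", "aud": "https://api.example.com"}  # JWT Claims field

now = int(time.time())
claims["iat"] = now          # added when "Use Issued At (iat) claim?" is checked
claims["exp"] = now + 300    # expiration, the default of 5 minutes (300 seconds)

token = jwt.encode(claims, signing_key, algorithm=signing_algorithm,
                   headers=additional_headers)

print(f"Authorization: Bearer {token}")
```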
OAuth 2.0
Pipelines guides you through the whole flow, prompting you to authenticate against your identity provider (when using the authorization code grant), and stores the access token and, when present, the refresh token. Pipelines authenticates each request with the stored access token and, if a refresh token is present, refreshes the access token when needed.
For this to work, you need to provide the account with the following information (a sketch of the token request follows the list):
- Authorization grant type—One of the Client credentials and Authorization code grants. Which one you use depends on the requirements of the API you're trying to connect to.
- Authorization Endpoint—Needed only for the authorization code grant. Pipelines forwards you to this endpoint, where you need to authenticate and grant access to Pipelines.
- Access Token Endpoint—The endpoint Pipelines uses to obtain an access token and, optionally, a refresh token.
- Client identifier—Register Pipelines as a client at your API's authorization server and put the generated client identifier here, so Pipelines can identify itself properly against your identity provider. The callback URL to add to the registered client is: https://www.pipelines.quickbase.com/authorize
- Client Secret—Along with the client identifier, a client secret is also generated. Copy it into this field so Pipelines can identify itself against your identity provider. This field may not be required by your API.
- Scope—Scope of the access request. The value should be a list of space-delimited, case-sensitive strings defined by your authorization server. This field is optional.
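For reference, a client-credentials token request generally looks like the sketch below (hypothetical endpoint, client, and scope, following RFC 6749); the exact request Pipelines issues may differ in details such as how the client credentials are transmitted.

```python
# Minimal sketch of a typical OAuth 2.0 client-credentials token request
# (hypothetical endpoint and credentials).
import requests

response = requests.post(
    "https://auth.example.com/oauth/token",   # Access Token Endpoint
    data={
        "grant_type": "client_credentials",
        "client_id": "my-registered-client-id",
        "client_secret": "my-client-secret",
        "scope": "orders:read orders:write",  # optional, space-delimited
    },
    timeout=10,
)
tokens = response.json()
access_token = tokens["access_token"]         # sent as: Authorization: Bearer <access_token>
refresh_token = tokens.get("refresh_token")   # may be absent for this grant
```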
Make request step
Use the Make a request step to make a request to a chosen API endpoint. The step provides you with the following options:
HTTP Method
This should be one of GET / POST / PUT / PATCH / DELETE / OPTIONS / HEAD. If nothing is provided, GET is set by default. Your API's documentation specifies which method a given endpoint requires. If the API follows the REST conventions, GET is used to obtain a resource, POST is used to create a resource, and PUT and PATCH are used to update an existing resource. PUT expects you to provide all properties, including ones that are not altered, while PATCH allows you to send only the changes made. DELETE is used to delete a resource. OPTIONS and HEAD are rarely used in this context.
Path
This is the path to your endpoint. It may be relative to the base URL set in the connection, or a full URL. If you provide a full URL, it must conform to the base URL: the scheme and host must be identical, and the path must be identical to or prefixed by the path in the base URL. If query params are present in the path, they are joined with those from the base URL, overriding params with the same name.
HTTP Headers
This is a list of key-value pairs that are transformed into HTTP headers and attached to the request. You cannot override the headers set in the connection; the headers defined here are added to those defined in the connection and are sent only for this specific step.
As in the connection, you may set headers with empty values, as well as multiple values for the same header. Additionally, you may use Jinja expressions in the step's HTTP headers field.
Query Params
This is a list of key-value pairs that Pipelines attaches as query parameters to the request's URL. The step query parameters take precedence over those in the path and the base URL: they replace parameters with the same key, and everything else is added to the already present params. You may use Jinja here as well.
Expected payload type
Quickbase natively supports JSON and YAML payloads in the response and parses them for you. If the response contains textual data that isn't in either of those formats, select TEXT instead; the result is returned as-is without any additional parsing. If nothing is selected, JSON is assumed.
Schema sample type
Use this field to advise Pipelines about the response's schema. This field is not present if you've chosen TEXT above. You can either provide an example response (sample) or directly use a JSON/YAML schema from the documentation of your API. The value selected here tells Pipelines how to treat the input in the next field.
The format of the schema/sample doesn't influence the expected payload type. Some APIs may return different formats based on configuration, so it is perfectly fine to provide a YAML schema here and then receive a response in JSON.
To enhance usability, Pipelines uses a non-restrictive schema. This is particularly helpful in the Quick Reference, where all schema fields are listed with their types, enabling their use in subsequent steps.
Pipelines does not enforce strict adherence to the schema. This flexibility allows you to work with responses that may include additional properties or return data types that differ from those defined in your sample.
The default expectation, if you don’t select a value, is JSON sample.
Schema sample
This is the actual schema or sample if the expected payload type is not TEXT. Pipelines parses it for you and creates a best-guess structured schema of the expected response. Even if a field in the response is not present in the resulting schema, or has a different type, Pipelines continues to work.
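For illustration, here is a hypothetical example of the two forms this field accepts: a sample response and an equivalent JSON schema. Either could describe the same endpoint, depending on the schema sample type selected above.

```python
# Hypothetical example of the two ways to describe the same expected response.

# 1) A sample response (JSON): field names and types can be inferred from it.
sample_response = """
{
  "id": 42,
  "status": "shipped",
  "items": [{"sku": "A-100", "qty": 2}]
}
"""

# 2) An equivalent JSON schema, e.g. taken from the API's documentation.
json_schema = """
{
  "type": "object",
  "properties": {
    "id": {"type": "integer"},
    "status": {"type": "string"},
    "items": {
      "type": "array",
      "items": {
        "type": "object",
        "properties": {"sku": {"type": "string"}, "qty": {"type": "integer"}}
      }
    }
  }
}
"""
```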
Validate response payload type
If the expected payload type is not TEXT, the response payload type is validated by default. If this option is checked or left empty, Pipelines raises an error when the response is not valid JSON or YAML. Otherwise, execution proceeds even if the response is not a valid object.
This validates only the format and not the content. Violations of the schema won't result in an error, even if this option is selected.
Body
If the request method is one of POST / PUT / PATCH / DELETE, you may provide a request body as well. The contents of the body are up to you and the requirements of the API you are connecting to. Although it is permitted to attach a body to a DELETE request, it is not recommended and should be done only if explicitly required by the API.
Errors
The error handling policy is specified for each step individually and applies only to the execution of that step.
There are three options for the step's error handling mechanism:
- Automatic—This is the default built-in error handling.
- Custom—You may list only those statuses that should be excluded from the standard error handling and handled separately. The list contains all standard status codes in the 4xx and 5xx ranges. If your API uses any unassigned status codes in those ranges, you can add them manually as custom errors.
  When the API returns one of the statuses selected in this field, it is treated as success: the pipeline does not terminate and proceeds to the next step the same way it does with a successful status. You may introduce logic that uses the status code and message to decide how to handle the result.
- None—Choosing this option completely turns off the automatic behavior and allows your pipeline to continue no matter what status code your API returns; every code is regarded as success.
These options change the behavior only based on the response returned from your API. Even if you have chosen None, the pipeline will still fail if there is another internal error.
There are certain types of errors that are always handled by the built-in mechanism, e.g. connectivity, timeout, payload size, and payload type errors. For these errors there is no way to apply custom error handling inside the pipeline. The pipeline will terminate if the response has a status within the 4xx range and will retry the request on responses in the 5xx range, as well as for the 429 status (Too Many Requests).
Limits
The Make request step works only with text responses, not with binary data.
The maximum response content size is 1 MB.
The maximum request content size is 1 MB.
Timeouts are 10 seconds for connection and 5 minutes for read.
Setting the following headers manually is not allowed: Transfer-Encoding, Content-Encoding, or Content-Length.
Additional Fields
Encoding
If your request is POST / PUT / PATCH / DELETE, this is the encoding of the request’s body. UTF-8 is set by default.