Integration Flows

The "flow" is the primitive concept in just about every integration technology. A flow contains a series of steps, usually composed of pre-built functional components, that do the job of retrieving some data from System A, transforming it, and pushing it to System B. Certainly flows can be more complicated, but this is the basic premise.

Common names for this concept include:

  • Data Flow
  • Flow
  • Workflow
  • Sequence
  • Recipe

Regardless of what an integration platform calls it, the concept is fairly universal. We'll use the terms "flow" and "integration flow" as general terms in this documentation.

Integration Flow Basics

This section describes the general flow concepts that exist in basically every integration platform.

Components and Integration Flows

Trigger

An integration flow is a series of actions that, at a minimum, usually involves interacting with external APIs and transforming data. For those actions to execute, however, something must tell them to run. Flows typically include one or more triggers that define when the steps should run.

The two most common trigger types are event-based (usually the receipt of a webhook message) and scheduled (the flow runs automatically on a configured interval).
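To make the two trigger types concrete, here is a minimal sketch using only Node.js built-ins. The runFlow function, the port, and the interval are hypothetical stand-ins for whatever trigger mechanism your platform actually provides.

    import { createServer } from "node:http";

    // Hypothetical flow entry point: receives a data payload and runs the flow's steps.
    async function runFlow(payload: unknown): Promise<void> {
      console.log("Flow triggered with payload:", payload);
    }

    // Event-based trigger: an HTTP endpoint that runs the flow whenever a webhook message arrives.
    createServer((req, res) => {
      let body = "";
      req.on("data", (chunk) => (body += chunk));
      req.on("end", async () => {
        await runFlow(JSON.parse(body || "{}"));
        res.writeHead(200).end("ok");
      });
    }).listen(8080);

    // Scheduled trigger: run the flow automatically every 15 minutes.
    setInterval(() => runFlow({ source: "schedule" }), 15 * 60 * 1000);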

Flow Steps

Flows aren't usually one block of executable code that does the entire job. They are typically composed of various steps, often implemented with reusable and configurable components. These components are specialized bits of code that do one or a few things (e.g. talk to REST APIs, transform to XML, or split records).

The choreography of these steps in a particular order is what implements the flow's business logic.
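As a rough illustration (not any particular platform's API), a reusable component can be modeled as a small configurable function, and a flow as an ordered list of configured steps. The Payload and Step types and the mapField component below are hypothetical.

    // A payload is arbitrary structured data; a step takes a payload and returns a new one.
    type Payload = Record<string, unknown>;
    type Step = (payload: Payload) => Payload | Promise<Payload>;

    // A hypothetical configurable component: its configuration (which field to rename)
    // is supplied when the step is added to a flow.
    function mapField(from: string, to: string): Step {
      return (payload) => {
        const { [from]: value, ...rest } = payload;
        return { ...rest, [to]: value };
      };
    }

    // The order of the configured steps is the flow's business logic.
    const steps: Step[] = [mapField("fullName", "name")];

    async function execute(initial: Payload): Promise<Payload> {
      let payload = initial;
      for (const step of steps) {
        payload = await step(payload);
      }
      return payload;
    }

In a real platform the components are typically pre-built and configured through a UI rather than written by hand, but the overall shape is the same.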

Data Payload

When a flow is triggered, the flow "does its job" by passing a data payload through configured component steps according to the logic dictated by the flow. Each step manipulates the data payload in some way, sometimes replacing it completely. Consider the following example:

  • Step 1: Receive a webhook message (which becomes the data payload) and inject some metadata into it.
  • Step 2: Filter out data payloads that don't meet certain criteria; payloads that fail the check do not continue through the flow.
  • Step 3: Transform the data payload into the object model for the target system.
  • Step 4: Perform an HTTP POST to push the data payload to the target system. The API response replaces the payload that came into this step.
  • Step 5: Execute code that logs the response code.

Each step manipulates the data payload in a specific way, and collectively, they implement an end-to-end integration flow.
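The five steps above might look roughly like the following sketch. The field names, filter criteria, and target URL are all hypothetical, and in practice most of these steps would be pre-built, configurable components rather than hand-written functions.

    type Payload = Record<string, unknown>;

    // Step 1: the webhook body becomes the payload; inject some metadata.
    function injectMetadata(webhookBody: Payload): Payload {
      return { ...webhookBody, receivedAt: new Date().toISOString() };
    }

    // Step 2: filter out payloads that don't meet the criteria.
    function passesFilter(payload: Payload): boolean {
      return payload["status"] === "active";
    }

    // Step 3: transform the payload into the target system's object model.
    function transformForTarget(payload: Payload): Payload {
      return { ExternalId: payload["id"], Name: payload["fullName"] };
    }

    // Step 4: POST to the target system; the API response replaces the payload.
    // (Uses the global fetch available in Node 18+ and browsers.)
    async function pushToTarget(payload: Payload): Promise<Payload> {
      const response = await fetch("https://api.example.com/customers", {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify(payload),
      });
      return { statusCode: response.status, body: await response.json() };
    }

    // Step 5: log the response code.
    function logResponse(payload: Payload): void {
      console.log("Target system responded with", payload["statusCode"]);
    }

    async function runFlow(webhookBody: Payload): Promise<void> {
      let payload = injectMetadata(webhookBody); // Step 1
      if (!passesFilter(payload)) return;        // Step 2: stop here if filtered out
      payload = transformForTarget(payload);     // Step 3
      payload = await pushToTarget(payload);     // Step 4
      logResponse(payload);                      // Step 5
    }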

Integration Flows in a Software Product

It's a general best practice to use some kind of integration platform to power your native software integrations. However, most integration platforms are built for users to produce one bespoke integration at a time. When embedding integrations into a software product, the requirements are fundamentally more complex.

In a software product integration, it's not enough to deploy one instance of a flow one time for one user. (The exception is if you are building something highly bespoke, often for an enterprise customer.)

Instead, you must think about building a reusable integration that scales to many users. Your objective is to deploy the same data flow (or nearly the same data flow) to every customer who wants to enable that feature. For example, if your integration is built to push Customer records to Salesforce CRM, you want to do that exactly the same way for every one of the dozens, hundreds, or thousands of users who activate this feature.
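One way to picture this is a single shared flow definition parameterized by per-customer configuration. The sketch below assumes hypothetical connection details and posts to Salesforce's REST API; the object type, API version, and field names are illustrative only.

    interface CustomerConfig {
      customerId: string;
      instanceUrl: string;   // per-customer Salesforce instance
      accessToken: string;   // per-customer credentials
    }

    type Payload = Record<string, unknown>;

    // The shared flow: identical logic for every customer who enables the feature.
    async function pushCustomerRecord(config: CustomerConfig, record: Payload): Promise<number> {
      const response = await fetch(`${config.instanceUrl}/services/data/v58.0/sobjects/Contact`, {
        method: "POST",
        headers: {
          Authorization: `Bearer ${config.accessToken}`,
          "Content-Type": "application/json",
        },
        body: JSON.stringify(record),
      });
      return response.status;
    }

    // Deploying the feature means running the same flow with each customer's configuration.
    const customers: CustomerConfig[] = [
      { customerId: "acme", instanceUrl: "https://acme.example.my.salesforce.com", accessToken: "..." },
      { customerId: "globex", instanceUrl: "https://globex.example.my.salesforce.com", accessToken: "..." },
    ];

    for (const customer of customers) {
      void pushCustomerRecord(customer, { LastName: "Example", Email: "user@example.com" });
    }

The flow logic lives in one place; only the configuration varies per customer, which is what lets the same integration scale to many users.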