Destinations
Flowlyze supports two distinct modes for managing data destinations, providing high flexibility in integrating with external systems and in defining processing strategies:
- Active (push): Flowlyze sends data directly to the destination and guarantees delivery.
- Passive (pull): Flowlyze makes data available through a queue, and an external system retrieves it autonomously.
The choice between the two modes depends on integration needs and the responsibility you want to assign to each system.
Active Management
In active management, Flowlyze sends data directly to the configured destination and ensures correct delivery.
The system monitors each send, handles any retry attempts, and applies confirmation mechanisms to ensure the data actually reaches the target. In this mode, Flowlyze acts as a component responsible for transport reliability, reducing complexity for external systems.
Passive Management
In passive management, Flowlyze acts as a data producer and makes data available through a queue or similar mechanism. Messages are enqueued and held until an external system processes them autonomously, following a model similar to the publisher-subscriber pattern.
The consuming system reads messages and acknowledges each one, notifying Flowlyze of successful receipt; this acknowledgment ensures a reliable flow and correct management of message state.
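To make the pull model concrete, the sketch below shows what a minimal consumer loop can look like. It assumes a hypothetical HTTP facade over the Flowlyze queue: the endpoint paths (`/next`, `/ack`) and field names are illustrative, not the documented Flowlyze API.

```python
import requests

# Hypothetical queue endpoint; paths and field names are illustrative only.
QUEUE_URL = "https://flowlyze.example.com/api/queues/orders"

def handle(record: dict) -> None:
    """The consumer's own processing logic (stub for this sketch)."""
    print(record)

while True:
    # Pull the next pending message (hypothetical endpoint).
    resp = requests.get(f"{QUEUE_URL}/next", timeout=30)
    if resp.status_code == 204:  # no pending messages: stop polling
        break
    message = resp.json()
    handle(message["msg"])
    # Acknowledge receipt so Flowlyze can mark the message as consumed.
    requests.post(f"{QUEUE_URL}/{message['msgId']}/ack", timeout=30)
```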
HTTP Destination
Flowlyze's HTTP destination allows sending data to external systems via HTTP requests, one for each message in the queue or one per message block. The request is built based on configured parameters (method, endpoint, headers, query string) while the body matches the JSON payload derived from the single record.
The operation outcome is determined exclusively by the status code returned by the destination system: any value between 200 and 299 is considered a success. All other codes (3xx, 4xx, 5xx) are treated as errors and trigger the configured retry policies.
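In code terms, the outcome rule reduces to a single range check; the helper below is a sketch of that classification, not Flowlyze internals:

```python
def is_success(status_code: int) -> bool:
    """2xx means success; everything else feeds the retry policy."""
    return 200 <= status_code <= 299

assert is_success(201) and not is_success(302) and not is_success(503)
```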
Configurable Parameters
The HTTP destination provides the following parameters, all modifiable:
- URL: destination endpoint of the call.
- Method: HTTP method used (e.g. POST, PUT, PATCH, DELETE).
- Headers: set of custom headers to include in the request.
- Query Params: query parameters added to the URL.
All string-type fields support the use of variables, both global and local, allowing dynamic configuration based on the operational context.
Authentication
The HTTP destination supports the same authentication protocols available for the HTTP source (see dedicated section), allowing consistent alignment of security modes.
Planned Features (TBD)
Planned enhancements will extend the flexibility and transformation capabilities of the HTTP destination:
- Using message fields as dynamic templates: Applying message values to URL, query, and header parameters, with direct substitution during request generation.
- Body rewriting: Ability to define a custom body without going through low-code components.
- Advanced response interpretation: Analysis of the response body content to determine the operation outcome, going beyond the model based exclusively on status code.
- Batch message sending: Sending multiple entities in a single call, with definition of the aggregated data model and ability to interpret the response to evaluate the outcome of each message, similar to the Custom Adapter behavior but without the need for a predefined protocol.
RDBMS Destination
The RDBMS destination allows Flowlyze to connect to any relational database accessible publicly via the Internet, without dependencies on a specific vendor. It is designed to implement data synchronization, migration, consolidation, or bidirectional integration scenarios between systems.
Connection Parameters
To establish the database connection, Flowlyze requires the following parameters:
- Server URL
- Port
- Username
- Password
The connection correctness can be verified directly through the test function available in the configuration screen.
Writing Query Definition
Flowlyze offers granular management of SQL operations through three types of queries that cover the entire processing cycle. The three phases (PREPARE → INSERT → FINALIZE) allow implementing complex logic:
- PREPARE: environment preparation (temporary tables, cleanup)
- INSERT: data writing for each record
- FINALIZE: consolidation and final operations (upsert, stored procedures)
INSERT STATEMENT
Executed for each row of the dataset. Allows using a parametric SQL statement, where record fields are accessible via:
- `@fieldname` for top-level fields
- `@fieldname__subfield` for nested objects
Generally this statement is used to execute INSERT, but it can be any valid SQL command.
PREPARE STATEMENT
Executed before processing a batch.
Allows preparatory operations such as:
- creating temporary tables,
- cleaning or initializing data structures,
- generating intermediate environments for writing.
FINALIZE STATEMENT
Executed after inserting all rows of the batch.
Typically used for:
- consolidating data into final tables,
- executing upserts,
- calling stored procedures,
- performing recalculations or normalization operations.
Example 1 — Writing in Insert or Update Mode
This example illustrates a customer synchronization flow where a JSON message is transformed into a row of the relational table **CUSTOMER**. Flowlyze executes the query for each record in the queue, mapping message fields to table fields. The goal is to demonstrate how a simple integration can still support more complex logic such as upsert (insert or update).
Data Structure (Mermaid diagram)
In this model:
- The customer is represented by a single `CUSTOMER` table.
- Address data is flattened into dedicated columns (`ADDRESSSTREET`, `ADDRESSCITY`, etc.).
- All demographic and preference information is centralized in a single record to simplify management.
Intent and Motivations of the Example
The goal of the example is to show how a single JSON message:
```json
{
"customer_id": "CUST-482917",
"name": "Jordan Mitchell",
"date_of_birth": "1987-03-12",
"email": "jordan.mitchell@examplemail.com",
"phone": "(415) 555-2894",
"address": {
"street": "742 Market Street",
"unit": "Apt 5B",
"city": "San Francisco",
"state": "CA",
"zip": "94103"
},
"account_status": "Active",
"signup_date": "2023-08-19",
"preferred_contact_method": "Email",
"marketing_opt_in": true,
"notes": "Customer prefers weekend delivery windows."
}
```
is converted into a row of the CUSTOMER table via the following SQL:
```sql
INSERT INTO CUSTOMER (
CUSTOMERID,
NAME,
DATEOFBIRTH,
EMAIL,
PHONE,
ADDRESSSTREET,
ADDRESSUNIT,
ADDRESSCITY,
ADDRESSSTATE,
ADDRESSZIP,
ACCOUNTSTATUS,
SIGNUPDATE,
PREFERREDCONTACTMETHOD,
MARKETINGOPTIN,
NOTES
) VALUES (
@customer_id,
@name,
@date_of_birth,
@email,
@phone,
@address__street,
@address__unit,
@address__city,
@address__state,
@address__zip,
@account_status,
@signup_date,
@preferred_contact_method,
@marketing_opt_in,
@notes
);
```
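As written, the statement is a plain `INSERT`; to obtain the insert-or-update behavior the example is named after, it can be extended with an upsert clause. Below is a sketch in MySQL dialect, assuming `CUSTOMERID` is the primary (or a unique) key; only a few columns are repeated for brevity:

```sql
-- Sketch: the same INSERT made idempotent with a MySQL upsert clause.
INSERT INTO CUSTOMER (CUSTOMERID, NAME, EMAIL /* , ... remaining columns */)
VALUES (@customer_id, @name, @email /* , ... */)
ON DUPLICATE KEY UPDATE
  NAME  = VALUES(NAME),
  EMAIL = VALUES(EMAIL)
  /* , ... remaining columns */;
```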
Key Mapping Points:
- Message fields are mapped 1:1 to table columns.
- Nested values are referenced via double-underscore notation (`@address__street`).
- The "flat" model allows representing all the most relevant data in a single entity.
To access nested fields in JSON, use the double-underscore notation `@object__field`. For example, `@address__street` accesses `address.street` in the JSON message.
Example 2 — Writing in Transactional Mode
This example shows an order synchronization flow where a single order message is decomposed into multiple relational entities (customer, addresses, order header, order lines) and written in a coherent and transactional manner. The goal is to ensure that all data related to an order is inserted/updated together, or all discarded in case of error.
All operations are executed within a single database transaction. If any step fails, the entire transaction is rolled back, ensuring data consistency.
Data Structure
How to Read the Model:
- `CUSTOMER` represents the customer placing the order (demographic data and contact preferences).
- `ORDER_HEADER` is the order header (date, status, payments, amounts).
- `ORDER_ITEM` represents individual order lines (purchased items).
- `ADDRESS` contains addresses linked to the order (shipping and billing), associated with both the customer and the order.
The `TMP_CUSTOMER`, `TMP_ORDER_HEADER`, `TMP_ORDER_ITEM`, and `TMP_ADDRESS` tables seen in the SQL are the temporary versions of these same entities: they serve as an intermediate staging area, used to build a coherent dataset before consolidation into the main tables (`CUSTOMER`, `ORDER_HEADER`, `ORDER_ITEM`, `ADDRESS`).
Order Example (Input Message)
```json
{
"order_id": "ORD-984532",
"order_date": "2025-02-11T14:23:00Z",
"customer": {
"customer_id": "CUST-482917",
"first_name": "Jordan",
"last_name": "Mitchell",
"email": "jordan.mitchell@examplemail.com",
"phone": "+1-415-555-2894",
"alt_phone": "+1-415-555-7331",
"contact_preferences": {
"preferred_method": "Email",
"allow_sms": true,
"allow_calls": false
}
},
"items": [
{
"item_id": "SKU-10482",
"description": "Wireless Bluetooth Headphones",
"quantity": 1,
"unit_price": 89.99
},
{
"item_id": "SKU-55720",
"description": "USB-C Charging Cable 2m",
"quantity": 2,
"unit_price": 12.49
},
{
"item_id": "SKU-88410",
"description": "Laptop Sleeve 13-inch",
"quantity": 1,
"unit_price": 24.99
}
],
"shipping_address": {
"street": "742 Market Street",
"unit": "Apt 5B",
"city": "San Francisco",
"state": "CA",
"zip": "94103"
},
"billing_address": {
"street": "742 Market Street",
"unit": "Apt 5B",
"city": "San Francisco",
"state": "CA",
"zip": "94103"
},
"payment_method": "Credit Card",
"payment_status": "Paid",
"order_status": "Processing",
"currency": "USD",
"shipping_cost": 7.99,
"total_amount": 147.96,
"notes": "Leave the package at the building concierge."
}
```
Intent and Motivations of the Example (SQL Reading)
The provided SQL implements a multi-step transactional write, aligned with the PREPARE / INSERT / FINALIZE model of the RDBMS destination.
Temporary Table Creation (PREPARE Phase)
```sql
CREATE TEMPORARY TABLE TMP_ADDRESS (...);
CREATE TEMPORARY TABLE TMP_CUSTOMER (...);
CREATE TEMPORARY TABLE TMP_ORDER_HEADER (...);
CREATE TEMPORARY TABLE TMP_ORDER_ITEM (...);
```
Motivations:
- create a transactional staging area, isolated from the final data model;
- allow loading and validating order data (including foreign key constraints between TMP_*) before consolidation;
- keep the batch "self-contained": each order (or groups of orders) is loaded into a coherent set of temporary tables.
In Flowlyze terms, this part belongs to the PREPARE STATEMENT phase.
Writing to Temporary Tables (INSERT STATEMENT Phase)
The INSERT INTO TMP_* statements directly map the JSON to temporary tables.
Shipping and Billing Addresses
```sql
INSERT INTO TMP_ADDRESS (...) VALUES (
  @shipping_address__id,
  @shipping_address__street,
  ...
  @customer__customer_id,
  @order_id
);

INSERT INTO TMP_ADDRESS (...) VALUES (
  @billing_address__id,
  @billing_address__street,
  ...
  @customer__customer_id,
  @order_id
);
```
- Two sets of data (shipping and billing) are extracted from the message.
- Both addresses are linked to the same `CUSTOMER_ID` and `ORDER_ID`.
- Placeholders (e.g. `@shipping_address__street`) represent message fields bound to the corresponding column.
Customer
```sql
INSERT INTO TMP_CUSTOMER (...) VALUES (
  @customer__customer_id,
  @customer__first_name,
  @customer__last_name,
  @customer__email,
  @customer__phone,
  @customer__alt_phone,
  @customer__contact_preferences__preferred_method,
  @customer__contact_preferences__allow_sms,
  @customer__contact_preferences__allow_calls
);
```
- The `customer` object is extracted and normalized into a row of `TMP_CUSTOMER`.
- Contact preferences are saved in dedicated columns (`PREFERRED_METHOD`, `ALLOW_SMS`, `ALLOW_CALLS`).
Order Header
```sql
INSERT INTO TMP_ORDER_HEADER (...) VALUES (
  @order_id,
  @order_date,
  @customer__customer_id,
  @payment_method,
  @payment_status,
  @order_status,
  @currency,
  @shipping_cost,
  @total_amount,
  @notes
);
```
- Links the order to the customer (`CUSTOMER_ID`) and contains all payment and status information.
Order Lines (Loop on Items)
- For each element in the `items` array, an `INSERT` into `TMP_ORDER_ITEM` is generated.
- The `{{#each items}}` construct indicates an iterative template: Flowlyze generates as many `INSERT` statements as there are items in the order (sketched below).
- All lines are linked to the same `ORDER_ID`, preserving the 1–N relationship between order and lines.
This part constitutes the INSERT STATEMENT phase of the RDBMS destination, executed for each message.
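A sketch of what such an iterative template can look like; the column names of `TMP_ORDER_ITEM` and the placeholder scoping inside the loop are assumptions, since the example only names the `{{#each items}}` construct:

```sql
-- Illustrative template: one INSERT is generated per element of the items array
-- (column names and in-loop placeholder scoping are assumed).
{{#each items}}
INSERT INTO TMP_ORDER_ITEM (ORDER_ID, ITEM_ID, DESCRIPTION, QUANTITY, UNIT_PRICE)
VALUES (@order_id, @item_id, @description, @quantity, @unit_price);
{{/each}}
```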
Consolidation into Final Tables (FINALIZE STATEMENT Phase)
After populating the temporary tables, the SQL executes consolidation to the final tables (`CUSTOMER`, `ORDER_HEADER`, `ADDRESS`, `ORDER_ITEM`) using `INSERT ... SELECT ... ON DUPLICATE KEY UPDATE`.
Example for CUSTOMER:
```sql
INSERT INTO CUSTOMER (...columns...)
SELECT
  c.CUSTOMER_ID AS EXT_ID,
  c.CUSTOMER_ID,
  ...
FROM TMP_CUSTOMER c
ON DUPLICATE KEY UPDATE
  CUSTOMER_ID = VALUES(CUSTOMER_ID),
  FIRST_NAME = VALUES(FIRST_NAME),
  ...
;
```
The same schema applies to:
- `ORDER_HEADER` (orders table)
- `ADDRESS` (addresses linked to customer/order)
- `ORDER_ITEM` (order lines)
Key Motivations:
- Idempotent Upsert
  - If the record does not exist → `INSERT`.
  - If the record exists (same PK / unique key) → `UPDATE`.
  - This allows re-running the flow on the same order without creating duplicates, aligning the database with the most recent state of the source system.
- Transactional Atomicity
  - All operations (writing to `TMP_*`, consolidation to final tables) can be executed within a single database transaction.
  - If any step fails (e.g. constraint error, inconsistent data), the entire transaction can be rolled back, keeping data in a consistent state.
- Separation between Logical Modeling and Data Source
  - The JSON message has an "event-oriented" structure.
  - The database has a normalized relational structure.
  - Temporary tables serve as a bridge: they allow transformations, validations, enrichments, and constraint management before the final upsert.
- Portability across RDBMS
  - The `ON DUPLICATE KEY UPDATE` syntax is specific to MySQL, but the insert-or-update logic is replicable on other databases: `ON CONFLICT DO UPDATE` in PostgreSQL, `MERGE INTO ... WHEN MATCHED / NOT MATCHED` in SQL Server (MSSQL) and Oracle.
  - In Flowlyze, the PREPARE / INSERT / FINALIZE structure remains the same; only the SQL dialect used in FINALIZE changes to adapt to the RDBMS.
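For reference, a sketch of the same `CUSTOMER` consolidation in PostgreSQL dialect; the conflict target `EXT_ID` is assumed to be the table's unique key:

```sql
-- PostgreSQL equivalent of the MySQL upsert above (EXT_ID assumed unique).
INSERT INTO CUSTOMER (EXT_ID, CUSTOMER_ID, FIRST_NAME /* , ... */)
SELECT c.CUSTOMER_ID, c.CUSTOMER_ID, c.FIRST_NAME /* , ... */
FROM TMP_CUSTOMER c
ON CONFLICT (EXT_ID) DO UPDATE SET
  CUSTOMER_ID = EXCLUDED.CUSTOMER_ID,
  FIRST_NAME  = EXCLUDED.FIRST_NAME
  /* , ... */;
```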
Flat File
The Flat File destination allows exporting a dataset to one or more files, according to logic completely decoupled from the file format, exactly as happens for the Flat File source.
The flow defines where to write (location) and how to format the data (format), while Flowlyze handles serialization and transfer to the destination system.
Base Parameters
The Flat File destination exposes the following main parameters:
- Directory: Remote path where files will be created (FTP/SFTP/FTPS directory).
- Filename: Name of the file to generate. Supports placeholders and flow variables, allowing dynamic construction of the name (e.g. `orders_{{date}}.csv`).
- Location Type: Indicates the transfer channel used (e.g. SFTP, FTP…).
- Granularity: Defines the writing granularity level, i.e. how many files will be generated. The selection sets a suffix that is inserted before the extension in the Filename:
  - Batch: one file per execution batch (single data flow execution)
  - Daily: one file per day
  - RequestId: one file per group of processed messages (group size configured in the BufferSize options)
  - Single With Time: one file per single message, with a time-based suffix
  - Single: no predefined granularity; a file is created with the Filename without adding suffixes.
- Connection: Access parameters to the destination server:
  - host
  - port
  - username
  - password
Location Type
Currently the destination supports all FTP protocol variants, with object storage targets planned (TBD):
- FTP
- SFTP
- FTPS
- Amazon S3 (TBD)
- Azure Blob Storage (TBD)
Format Type
The currently available formats are:
- CSV
- JSON
Below are the specific configurations for each.
Format Settings — CSV
Available parameters:
- Has headers: Indicates whether the file must include the column header row.
- Culture: Culture used for representing numbers and dates (e.g. `it-IT` → comma as decimal separator, `en-US` → decimal point).
- Delimiter: Column delimiter (`,`, `;`, `\t`, …).
- Quote: Character used to enclose text fields. Use `\0` to not apply any quoting.
Format Settings — JSON
Available parameters:
- Array field name: Indicates the name of the field that will contain the array.
  - If set, the JSON is written as an object, e.g. `{ "data": [ {...}, {...} ] }`.
  - If empty, the array is written directly at the root: `[ {...}, {...} ]`.
Mapping (CSV only)
The Flat File destination allows defining a column mapping for the CSV file. Through mapping it is possible to precisely describe how each field of the file will be generated.
For each column it is possible to specify:
- Column name: The name shown in the header (if `has headers = true`).
- Default value: Used if the field is not present in the message.
- Field type: `string`, `int`, `decimal`, `datetime` (affects formatting and the applied culture).
- Field format: Examples: `#.00` to format decimals with two digits, `yyyy-MM-dd` for dates.
- Read format: Expected input format, useful when converting string values to dates or structured numbers during writing.
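As a concrete illustration, exporting the customer message from Example 1 with a hypothetical three-column mapping (has headers = true, delimiter `;`, `SIGNUPDATE` typed as `datetime` with field format `yyyy-MM-dd`) would produce a file like:

```csv
CUSTOMERID;NAME;SIGNUPDATE
CUST-482917;Jordan Mitchell;2023-08-19
```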
Custom Adapter
Beyond standard connectors (source and destination), Flowlyze can be extended through external Apps called Custom Adapters.
There are no constraints on the technology used: the only requirement is support for the HTTP protocol, following the rules defined by the communication protocol.
In the context of the protocol, the role of System Integrator is performed by Flowlyze, which sends HTTP requests to the Custom Adapter and processes responses.
Architecture and Integration Flows
Execution Modes
The Custom Adapter provides two operational modes:
Synchronous Mode
- Flowlyze sends data in batches (e.g. 1000 rows at a time) via API.
- The Custom Adapter processes the batch within the single HTTP call and returns a standard payload that, for each message, reports:
  - the processing outcome,
  - any errors and metadata.
- The call is subject to a timeout and limits on maximum duration (typically 30 seconds).
This mode is suitable for fast, idempotent integrations with limited volumes per single batch.
Asynchronous Mode
- Flowlyze sends data in batches as in the synchronous case, but the Custom Adapter can process them in a deferred manner (background jobs, internal queues, etc.).
- In the initial response, the Custom Adapter signals that processing is asynchronous.
- Flowlyze performs polling on the destination system (via the `progress` action) until job completion.
This mode is ideal for heavier or slower processing, where it is not realistic to complete everything within the timeout of a single HTTP request.
The synchronous mode has a typical timeout of 30 seconds. If processing requires more time, use the asynchronous mode.
Synchronous Communication Protocol
Every external system that wants to receive data from Flowlyze via Custom Adapter must expose one or more HTTP endpoints.
All endpoints share the same basic body structure:
```json
{
  "action": "execute",
  "payload": { ... }
}
```
- `action`: request type. The expected values are:
  - `"execute"` → starts message processing,
  - `"progress"` → (used in the asynchronous flow) requests progress status.
- `payload`: specific content of the request, which varies based on the action.
Execution Endpoint
`POST <domain>/system-integrator/handler1` (action `"execute"`)
This endpoint is called by Flowlyze to deliver a batch of messages to process.
Request
```json
{
  "action": "execute",
  "payload": {
    "requestId": "<guid>",
    "messages": [
      {
        "msgId": "<guid>",
        "msg": { "...": "..." },
        "meta": { "...": "..." }
      }
    ],
    "meta": { "...": "..." }
  }
}
```
Field meanings:
- `payload.requestId`: Unique identifier of the request (batch) sent by Flowlyze.
- `payload.messages`: Array of messages to process.
- `payload.messages[i].msgId`: Unique identifier of the single message within the batch.
- `payload.messages[i].msg`: Actual message payload (data record to process).
- `payload.messages[i].meta`: Metadata referring to the single message (context, specific configurations, etc.).
- `payload.meta`: Metadata shared by the entire batch (e.g. connection strings, credentials, downstream endpoints, configuration parameters).
Temporal and Outcome Constraints
- The external system must process the request within 30 seconds of sending.
- HTTP 200 Requirement:
  - If the system responds with a status other than `200`, the entire request is considered failed, and all messages in the batch result as unprocessed.
  - The retry policies configured in Flowlyze will handle resending the batch or the unprocessed messages, according to configuration.
Synchronous Response
If processing is handled in synchronous mode, the Custom Adapter responds with:
```json
{
  "requestId": "<requestId>",
  "isAsync": false,
  "messages": [
    {
      "msgId": "<guid>",
      "status": "success",
      "date": "<date>",
      "meta": { }
    }
  ]
}
```
- `requestId`: Must match the one received in the request.
- `isAsync`: `false` indicates that processing was handled entirely within this call.
- `messages`: Array of outcomes for each sent message.
- `messages[i].msgId`: Must correspond to a `msgId` previously sent by Flowlyze.
- `messages[i].status`: Processing status of the single message:
  - `"success"` → message processed correctly,
  - `"error"` → processing failed,
  - `"in_progress"` → used only in asynchronous scenarios, when processing is not yet complete.
- `messages[i].date`: End-of-processing date/time in ISO 8601 format.
- `messages[i].meta`: Any metadata returned by the external system (e.g. created resource ID, error messages, summary log).
In case of processing error, messages with `status = "error"` will be handled by Flowlyze according to configured retry policies (retries, dead-letter, logging, etc.).
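A minimal sketch of a synchronous Custom Adapter endpoint, assuming Python with Flask; the handler path follows the example convention above, and `process_record` stands in for the real integration logic:

```python
from datetime import datetime, timezone
from flask import Flask, jsonify, request

app = Flask(__name__)

def process_record(msg: dict) -> None:
    """Placeholder for the real integration logic (e.g. write to an ERP)."""
    ...

@app.post("/system-integrator/handler1")
def handler1():
    body = request.get_json()
    if body.get("action") != "execute":
        return jsonify({"error": "unsupported action"}), 400

    payload = body["payload"]
    outcomes = []
    for message in payload["messages"]:
        try:
            process_record(message["msg"])
            status, meta = "success", {}
        except Exception as exc:  # a per-message failure, not a whole-batch one
            status, meta = "error", {"error": str(exc)}
        outcomes.append({
            "msgId": message["msgId"],
            "status": status,
            "date": datetime.now(timezone.utc).isoformat(),
            "meta": meta,
        })

    # Any non-200 HTTP status would mark the whole batch as unprocessed.
    return jsonify({
        "requestId": payload["requestId"],
        "isAsync": False,
        "messages": outcomes,
    }), 200
```

Note how per-message failures are reported in the body with `status = "error"` while the HTTP status stays 200, so only the failed messages fall back to the retry policies rather than the whole batch.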
Asynchronous Communication Protocol
In asynchronous mode:
- Flowlyze sends an `action = "execute"` request identical to the synchronous case.
- The Custom Adapter:
  - responds with `HTTP 200`,
  - sets `isAsync: true`,
  - can mark messages as `"in_progress"` until processing is complete.
- Flowlyze makes subsequent calls with `action = "progress"` to update the status of messages associated with a specific `requestId`.
- The Custom Adapter returns the updated status (`success`/`error`) for each `msgId` progressively.
The body structure remains aligned with the contract described previously (same requestId, same messages list with updated status).
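A sketch of the asynchronous variant under the same Flask assumption: `execute` defers the batch to a background thread and answers immediately with `isAsync: true`, while `progress` reports the per-message status accumulated so far. The in-memory job store and threading model are illustrative only, and the `progress` payload is assumed to carry the `requestId`:

```python
import threading
from datetime import datetime, timezone
from flask import Flask, jsonify, request

app = Flask(__name__)
jobs: dict[str, dict[str, str]] = {}  # requestId -> {msgId: status}

def work(request_id: str, messages: list[dict]) -> None:
    for m in messages:
        # ... real (slow) processing here ...
        jobs[request_id][m["msgId"]] = "success"

@app.post("/system-integrator/handler1")
def handler1():
    body = request.get_json()
    payload = body["payload"]
    request_id = payload["requestId"]

    if body["action"] == "execute":
        # Mark every message as in_progress and defer the real work.
        jobs[request_id] = {m["msgId"]: "in_progress" for m in payload["messages"]}
        threading.Thread(target=work, args=(request_id, payload["messages"])).start()

    # Both "execute" and "progress" answer with the current per-message status.
    now = datetime.now(timezone.utc).isoformat()
    return jsonify({
        "requestId": request_id,
        "isAsync": True,
        "messages": [
            {"msgId": msg_id, "status": status, "date": now, "meta": {}}
            for msg_id, status in jobs.get(request_id, {}).items()
        ],
    }), 200
```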
Manifest and Endpoint Discovery
To allow Flowlyze to dynamically discover which handlers are exposed by a Custom Adapter, the external system must publish a manifest at a well-known endpoint.
Discovery Endpoint
`GET <domain>/.well-known/system-integrator.json`
Responds with a JSON of the form:
```json
{
  "version": 1,
  "handlers": {
    "handler1": "<domain>/system-integrator/handler1",
    "handler2": "<domain>/system-integrator/handler2"
  }
}
```
- `version`: Manifest version, used for future compatibility.
- `handlers`: Map of handler name → handler endpoint. Each handler represents an entry point for processing data from a different subscriber/flow. In this way, a single Custom Adapter can serve multiple Flowlyze flows with dedicated logic.
- `handlers.<handler>`: URL of the endpoint to which Flowlyze sends `execute` / `progress` requests relative to that specific handler.
Constraints:
- The discovery endpoint MUST always be exposed exactly at `/.well-known/system-integrator.json`.
- URLs of other endpoints (handlers) are free, at the discretion of the external system; the example with `/system-integrator/handler1` and `/system-integrator/handler2` represents only a recommended convention.
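A matching sketch of the discovery endpoint (Flask assumed; the absolute handler URL uses a placeholder domain for this example):

```python
from flask import Flask, jsonify

app = Flask(__name__)

@app.get("/.well-known/system-integrator.json")
def manifest():
    # The well-known path is mandatory; handler URLs are free-form.
    return jsonify({
        "version": 1,
        "handlers": {
            "handler1": "https://adapter.example.com/system-integrator/handler1",
        },
    })
```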
Libraries
See the Platforms\Custom section for libraries and project templates.
Null
The Null destination is a dummy destination used when a real destination system is not necessary — or not yet available.
Its only purpose is to receive data and log it, without performing any processing, transformation, or sending to external systems.
This destination is useful in various scenarios, for example:
- development or testing phase of a flow, when the real destination is not ready;
- verification of correct mapping and message serialization;
- diagnosis of anomalous behaviors upstream of the flow, isolating the problem.
Flowlyze preserves the message in internal logs, allowing confirmation of correct receipt and easy analysis of the content. No other processing is executed.
The Null destination is particularly useful during development and debugging, allowing you to verify that data arrives correctly at the destination point without having to configure an external system.