dydoconnect
Management Summary
dydoconnect takes data from existing systems and converts it for a modern output management system. The data is clearly structured and easy to read, so the downstream output management system can access it directly and consistently. The data can be enriched during conversion; for example, the company letterhead bar can be stored as a valid configuration for each client. The transformation itself is configured, not programmed. This conversion significantly reduces the development time per document, and no adaptation of your existing data-supplying systems is necessary.
dydoconnect is cloud-ready, but can also be operated on-premises.
Technical Overview
The dydoconnect software consists of various nodes and interfaces that control the workflow and enable data processing. The nodes can be roughly divided into three categories:
- Input adapters,
- Output adapters, and
- Processors.
Input adapter nodes are used to start the workflow. They receive the data from existing systems and ensure that it is available in a format suitable for further processing steps. These nodes can, for example, receive data from a database, a file system or an external API.
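For illustration only: dydoconnect's internal APIs are not documented here, but a minimal Go sketch can show what an input adapter does conceptually. The `Message` type, the `/in` route, and the channel handoff are all hypothetical names invented for this example.

```go
package main

import (
	"fmt"
	"io"
	"net/http"
)

// Message is a hypothetical internal envelope; dydoconnect's
// actual message format is not shown in this document.
type Message struct {
	Attributes map[string]string
	Body       []byte
}

func main() {
	out := make(chan Message, 16)

	// A minimal HTTP input node: every incoming request becomes a message
	// that is handed to the rest of the workflow.
	http.HandleFunc("/in", func(w http.ResponseWriter, r *http.Request) {
		body, _ := io.ReadAll(r.Body)
		out <- Message{
			Attributes: map[string]string{"source": "http", "path": r.URL.Path},
			Body:       body,
		}
		w.WriteHeader(http.StatusAccepted)
	})

	// Stand-in for the downstream workflow.
	go func() {
		for msg := range out {
			fmt.Printf("received %d bytes from %s\n", len(msg.Body), msg.Attributes["source"])
		}
	}()

	http.ListenAndServe(":8080", nil)
}
```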
Output adapter nodes, on the other hand, mark the end of the workflow. They are responsible for converting the data into a format that can be accepted by the target systems. These nodes ensure that the data can be processed consistently and directly by the downstream systems. For example, they can write data to a database, send it to an external system or transform it into a specific file structure.
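As another hypothetical sketch, a file-system output node would end the workflow by persisting the message body under a configured directory. `Message` and `fileOutput` are illustrative names, not dydoconnect APIs.

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// Message mirrors the hypothetical envelope from the input example.
type Message struct {
	Attributes map[string]string
	Body       []byte
}

// fileOutput is a sketch of a file-system output node: it writes the
// message body to the target directory, ending the workflow.
func fileOutput(dir string, msg Message) error {
	name := msg.Attributes["filename"]
	if name == "" {
		name = "message.dat"
	}
	return os.WriteFile(filepath.Join(dir, name), msg.Body, 0o644)
}

func main() {
	msg := Message{
		Attributes: map[string]string{"filename": "invoice-42.xml"},
		Body:       []byte("<invoice/>"),
	}
	if err := fileOutput(os.TempDir(), msg); err != nil {
		fmt.Println("write failed:", err)
	}
}
```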
In addition to input adapter and output adapter nodes, processors can be integrated into the workflow to modify the messages passed between nodes. Processor nodes are intermediate steps that can perform various transformations and manipulations on the data. For example, they can filter, transform, aggregate or link data with other data sources. These processors make it possible to adapt the data to the specific requirements of the target systems and to change it on its way through the workflow.
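The enrichment example from the management summary (storing a letterhead configuration per client) makes this concrete. The sketch below assumes a processor is simply a function from message to message; all names are hypothetical.

```go
package main

import "fmt"

type Message struct {
	Attributes map[string]string
	Body       []byte
}

// Processor is a hypothetical intermediate step: it receives a message
// and returns a (possibly modified) message, or nil to filter it out.
type Processor func(Message) *Message

// addLetterhead enriches each message with a client-specific letterhead
// configuration, in the spirit of the "Change attributes" node.
func addLetterhead(letterheads map[string]string) Processor {
	return func(m Message) *Message {
		if lh, ok := letterheads[m.Attributes["client"]]; ok {
			m.Attributes["letterhead"] = lh
		}
		return &m
	}
}

func main() {
	p := addLetterhead(map[string]string{"acme": "acme-letterhead-v2"})
	msg := Message{Attributes: map[string]string{"client": "acme"}}
	if out := p(msg); out != nil {
		fmt.Println(out.Attributes["letterhead"]) // acme-letterhead-v2
	}
}
```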
In summary, the workflow in dydoconnect consists of an input node that starts the workflow and receives data from existing systems, followed by a series of processor nodes that can modify the data, and finally an output node that ends the workflow and transforms the data accordingly for the target systems. Depending on individual requirements and configurations, further nodes and interfaces can be integrated into the workflow to ensure efficient data processing and transformation.
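Putting the three node categories together, a workflow is conceptually a chain: read messages from the input, pass each one through the processors in order, and hand the survivors to the output. The `runWorkflow` helper below is a hypothetical sketch of that shape, not dydoconnect code.

```go
package main

import (
	"fmt"
	"strings"
)

type Message struct {
	Attributes map[string]string
	Body       []byte
}

type Processor func(Message) *Message

// runWorkflow chains an input channel through a series of processors
// into an output function: input → processors → output.
func runWorkflow(in <-chan Message, procs []Processor, output func(Message)) {
	for msg := range in {
		m := &msg
		for _, p := range procs {
			if m = p(*m); m == nil {
				break // message was filtered out mid-pipeline
			}
		}
		if m != nil {
			output(*m)
		}
	}
}

func main() {
	in := make(chan Message, 2)
	in <- Message{Body: []byte("hello")}
	in <- Message{Body: []byte("world")}
	close(in)

	upper := func(m Message) *Message {
		m.Body = []byte(strings.ToUpper(string(m.Body)))
		return &m
	}
	runWorkflow(in, []Processor{upper}, func(m Message) {
		fmt.Println(string(m.Body)) // HELLO, WORLD
	})
}
```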
Nodes
Input
- AMQP – receives Advanced Message Queuing Protocol (AMQP) messages
- File system – listens for changes in the file system
- HTTP – processes incoming HTTP requests
- TCP – reads binary data streams from incoming TCP connections (see the sketch after this list)
- S3 – subscribes to buckets and reacts to new files
- WebDAV – pulls files from a WebDAV server
- SQL – retrieves data from a database
- FTP – pulls files from an FTP(S) server
- IMAP – pulls data from a mail server
- Kafka – pulls messages sent by Kafka producers
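As a rough illustration of the TCP input node referenced above, the following standalone Go program accepts connections and reads each binary stream; the port and the handling are placeholders, not dydoconnect configuration.

```go
package main

import (
	"fmt"
	"io"
	"net"
)

// A minimal stand-in for the TCP input node: each incoming connection's
// binary stream is read in full and would then enter the workflow.
func main() {
	ln, err := net.Listen("tcp", ":9000")
	if err != nil {
		panic(err)
	}
	for {
		conn, err := ln.Accept()
		if err != nil {
			continue
		}
		go func(c net.Conn) {
			defer c.Close()
			data, _ := io.ReadAll(c) // read the full binary stream
			fmt.Printf("received %d bytes for processing\n", len(data))
		}(conn)
	}
}
```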
Output
- AMQP – sends AMQP messages
- File system – saves files in the file system
- HTTP – sends HTTP requests
- WebDAV – uploads files to a WebDAV server
- M/Text – creates and prints M/Text documents
- Logger – outputs message content to the console
- Mail (SMTP) – sends e-mails (see the sketch after this list)
- S3 – writes data to an S3 store
- SQL – writes data to databases
- FTP – writes data to an FTP server
- Kafka – writes messages to a Kafka stream
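The job of the Mail (SMTP) node can be illustrated with Go's standard net/smtp package; the host, credentials, and addresses below are placeholders, not dydoconnect configuration.

```go
package main

import (
	"log"
	"net/smtp"
)

// A minimal stand-in for a mail output node: hand a finished
// message to an SMTP server for delivery.
func main() {
	auth := smtp.PlainAuth("", "sender@example.com", "app-password", "mail.example.com")
	msg := []byte("To: recipient@example.com\r\n" +
		"Subject: Document ready\r\n" +
		"\r\n" +
		"Your document has been generated.\r\n")
	err := smtp.SendMail("mail.example.com:587", auth,
		"sender@example.com", []string{"recipient@example.com"}, msg)
	if err != nil {
		log.Fatal(err)
	}
}
```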
Processor
- Templating – evaluates the content and variables of a Go template (see the sketch after this list)
- Transformation – transforms files into a JSON or XML format using a predefined schema
- Change attributes – changes the attributes of a message
- JavaScript – enables the execution of JavaScript code
- Base64encoder – encodes and decodes message content as Base64
- Mapper – converts a message into a map structure
- Balancer – configures and distributes the incoming data traffic
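The Templating node evaluates Go templates. Go's standard text/template package shows what such an evaluation looks like; the template text and field names below are invented for the example, not dydoconnect variables.

```go
package main

import (
	"os"
	"text/template"
)

// Evaluate a Go template against message data: placeholders like
// {{.Name}} are replaced with values from the data map.
func main() {
	tmpl := template.Must(template.New("letter").Parse(
		"Dear {{.Name}},\nyour contract {{.Contract}} has been renewed.\n"))
	data := map[string]string{"Name": "Jane Doe", "Contract": "C-1034"}
	if err := tmpl.Execute(os.Stdout, data); err != nil {
		panic(err)
	}
}
```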