Get started

Define a target

Target Job

In Fyrefuse, sources and targets are dual components: they are defined and managed in exactly the same way. The only difference is their role, a source is the starting point of the pipeline, while a target is its end point.

As with sources, the right-hand configuration drawer automatically adjusts the available options to the technology of the selected entity/table, ensuring a fully customized setup.

Batch mode

In batch mode, Spark writes the entire processed dataset in a single operation, making it suitable for ETL/ELT workflows, analytics, and bulk data updates.
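Under the hood, a batch target corresponds to a standard Spark batch write. The following is a minimal PySpark sketch, not Fyrefuse's actual generated code; the paths, format, and app name are illustrative assumptions:

```python
from pyspark.sql import SparkSession

# Hypothetical session and paths for illustration only
spark = SparkSession.builder.appName("batch-target-example").getOrCreate()

df = spark.read.parquet("/data/processed/orders")  # processed dataset (assumed path)

# Write the entire dataset to the target in a single operation
(df.write
   .format("parquet")       # target technology; could be jdbc, delta, etc.
   .mode("overwrite")       # bulk update: replace the target contents
   .save("/data/targets/orders"))
```

The `mode` option controls how an existing target is handled (`overwrite`, `append`, `ignore`, or `errorifexists`), which is typically the main decision in ETL/ELT bulk loads.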

Stream mode

With Structured Streaming enabled, Spark continuously writes incoming data to the target, ensuring low-latency updates for real-time processing.

The same declarative approach used for batch targets applies to streaming targets, simplifying development and deployment.
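For comparison, a streaming target maps to Spark's Structured Streaming write API, which mirrors the batch API declaratively. This is a minimal sketch under assumed Kafka broker, topic, and path settings, not Fyrefuse-generated code:

```python
from pyspark.sql import SparkSession

# Hypothetical session and connection details for illustration only
spark = SparkSession.builder.appName("stream-target-example").getOrCreate()

events = (spark.readStream
          .format("kafka")
          .option("kafka.bootstrap.servers", "broker:9092")  # assumed broker
          .option("subscribe", "events")                     # assumed topic
          .load())

# Continuously write incoming micro-batches to the target
query = (events.writeStream
         .format("parquet")
         .option("path", "/data/targets/events")
         .option("checkpointLocation", "/checkpoints/events")  # required for fault tolerance
         .outputMode("append")
         .start())

query.awaitTermination()
```

Note that only the entry point changes (`writeStream` instead of `write`, `start` instead of `save`); the format and options are declared the same way, which is what keeps batch and streaming targets symmetrical.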