Thank you,
Is there any way to configure this via Pipelines only, without destinations?
The best approach for us would be one we can deploy without restarting the workers.
You could configure two (or more) Routes with the same filter. All but the last Route should have the Final flag disabled, and the last Route should have Final enabled. This way you can copy the same data stream to multiple destinations without having to change the destination configuration frequently.
A trick here is to also append `... && Math.random() < 0.1`
to your Route filter to sample down the events that match and get forwarded to that destination. You could also look into the Sample, Dynamic Sampling, Suppression, and Drop functions if you want to reduce the volume in the Pipeline.
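As a rough illustration, a Route layout like that could look roughly like the sketch below in a worker group's `routes.yml`. The Route IDs, filter, pipeline, and destination names are made up, and field names may differ by Cribl Stream version, so verify against your own config or the UI:

```yaml
# Sketch only: two Routes sharing one filter so the same stream fans out
# to two destinations. All names below are illustrative.
routes:
  - id: copy_to_s3
    # Same base filter, plus ~10% random sampling toward this destination
    filter: sourcetype=='syslog' && Math.random() < 0.1
    pipeline: passthru
    output: s3_archive
    final: false          # Final disabled: events continue to later Routes
  - id: send_to_splunk
    filter: sourcetype=='syslog'
    pipeline: passthru
    output: splunk_prod
    final: true           # Final enabled: events stop matching here
```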
Thank you for your thoughts and input.
We'll test the last suggestion from @bdalpe, which seems to be the best way for us.
If you are looking to have data come into Cribl and then send it out in multiple formats to various destinations, you are most likely looking for a post-processing Pipeline. That way you can process the data in a consistent way upstream and then format it appropriately for each downstream system.
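A rough sketch of that idea, assuming the post-processing Pipeline is attached on each destination (the output names, types, fields, and pipeline IDs below are illustrative, so check your version's `outputs.yml` or the destination settings in the UI):

```yaml
# Sketch only: one shared processing Pipeline upstream, then a per-destination
# post-processing Pipeline to shape the data for each downstream system.
outputs:
  splunk_prod:
    type: splunk
    host: splunk.example.com
    port: 9997
    pipeline: shape_for_splunk   # post-processing Pipeline for this destination (assumed field)
  s3_archive:
    type: s3
    bucket: my-archive-bucket
    region: us-east-1
    pipeline: shape_for_s3       # different formatting for the same data (assumed field)
```

This keeps the main processing Pipeline destination-agnostic, while each destination gets its own formatting step.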
docs.cribl.io: How do Pipelines work, settings and types