One of the neatest features of the product is the live preview. This works great for relatively small events.
If I have a massive event, with a pretty intensive pipeline to boot, things time out and it really is a PITA to debug my code. How do people work through such a workflow?
I usually increase the timeout from 10 seconds to 60 seconds and "play" the pipeline, but I wanted to know if people had some other nifty hacks.
Thank you! b1scu1t
Best answer by Jon Rust
I have a pipeline here that might help you out. The idea is to use aggregation functions before and after your pipeline fires to get volume and count metrics.
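If it helps to see what that report gives you, here is a minimal Python sketch of the idea. The pipeline callable and the sample events are hypothetical stand-ins, since the real aggregation runs inside Stream:

```python
# Minimal sketch of the before/after report, assuming events are raw strings
# and `pipeline` is a hypothetical stand-in for the Pipeline/Pack under test.

def stats(events):
    """Volume and count metrics, like the pre- and post-pipeline aggregations."""
    return {"events": len(events), "bytes": sum(len(e) for e in events)}

def report(events, pipeline):
    before = stats(events)            # aggregate before the pipeline fires
    after = stats(pipeline(events))   # aggregate after the pipeline fires
    pct = 100 * (1 - after["bytes"] / before["bytes"]) if before["bytes"] else 0.0
    return {"before": before, "after": after, "bytes_reduced_pct": round(pct, 1)}

# Stand-in pipeline: drop DEBUG events and trim trailing whitespace
events = ["DEBUG noisy line   ", "ERROR disk full ", "INFO all good  "]
print(report(events, lambda evs: [e.rstrip() for e in evs if not e.startswith("DEBUG")]))
```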
From the README linked above:
1. Create a Collector source pointing to an NFS share or object store with your sample file.
   - Alternatively, use a raw TCP source and feed the file to the port using netcat (or the Python sketch after this list).
2. Be sure you have an Event Breaker assigned to the source that works for your test data.
3. Add this Pipeline to your Worker Group.
4. Change the Chain function to point to the Pipeline or Pack you want to test.
5. Under Routing > Quick Connect, connect your source to the devnull destination, and select the cribl_inline_redux_report Pipeline.
   - Optionally, deliver to your analytics tool instead.
6. Navigate back to the source page and prepare to make a Full Run on your Collector.
   - Or prepare to fire off netcat with your file.
7. In a new window or tab, start a capture with your source as the filter.
   - To help with the capture filter, you can add a field to the Collector definition, e.g. _TEST_FIELD => 1, and filter on _TEST_FIELD == 1.
8. With the capture running, go back to the tab with the Collector source and run it (or fire off your netcat).
9. The capture should show you two events: one with the original stats, and one with the processed stats.
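For the raw TCP route, anything that can stream bytes to the port works. Here is a minimal Python equivalent of the netcat step; the host, port, and file name are placeholders for your own Worker's TCP listener and sample file:

```python
import socket

# Placeholders: point these at your raw TCP source and your sample file
HOST, PORT = "worker.example.com", 10060
SAMPLE = "sample.log"

with open(SAMPLE, "rb") as f, socket.create_connection((HOST, PORT)) as s:
    s.sendfile(f)   # streams the whole file to the socket

# netcat equivalent: nc worker.example.com 10060 < sample.log
```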