As the title says, I'm looking to create and update a lookup table based on incoming events, somewhat similar to the outputlookup command in Splunk. Is there any way to update or create a table based on fields from events passing through a pipeline?
You can use the Redis function to update Redis-backed lookup data from within a pipeline, using live events.
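The pattern behind this is worth spelling out: each event upserts a key/value pair into a shared store, and later events can be enriched from it. A minimal sketch in Python, where a plain dict stands in for the Redis hash (with redis-py you would swap the dict operations for `r.hset(...)` / `r.hget(...)`; the field names `host` and `owner` are made up for illustration):

```python
# Sketch of the "update a lookup from live events" pattern that the
# Redis function enables inside a pipeline. A dict stands in for a
# Redis hash here so the example is self-contained.

lookup = {}  # stands in for a Redis hash, e.g. key "lookup:host_owner"

def process_event(event):
    """Upsert the lookup from the event, then enrich the event from it."""
    host = event.get("host")
    owner = event.get("owner")
    if host and owner:
        # Event carries both fields: update the lookup.
        # With redis-py: r.hset("lookup:host_owner", host, owner)
        lookup[host] = owner
    elif host:
        # Event is missing the owner field: enrich from the lookup.
        # With redis-py: r.hget("lookup:host_owner", host)
        cached = lookup.get(host)
        if cached:
            event["owner"] = cached
    return event

process_event({"host": "web01", "owner": "alice"})   # writes a lookup entry
enriched = process_event({"host": "web01"})          # reads it back
```

Because the state lives in Redis rather than in a CSV file on disk, every worker process sees the same, continuously updated lookup.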
This isn't currently possible with CSV-based lookups.
We've created a product with custom functions that read and write lookup tables in external data stores (MongoDB, Oracle, MySQL, MSSQL, Postgres, DB2, and others), making the data available across workers. Ping me if this aligns with what you're looking for.
You can also configure a script under "Sources" to collect the data you need and write the results to /opt/cribl/data/lookups. This is handy when you have a file server hosting reference data, or an API endpoint that serves the file or data ready to be consumed.

Example script that fetches an endpoint and writes to the lookup directory:
wget URL-to-Data -O /opt/cribl/data/lookups/filename.csv
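A slightly more defensive variant of that one-liner downloads to a temporary file first and only replaces the lookup when the fetch succeeds and is non-empty, so a failed download never leaves behind an empty or partial CSV. This is a sketch using curl; the URL and destination path are placeholders, as in the wget example above:

```shell
#!/bin/sh
# Refresh a CSV lookup from a URL, replacing the existing file only on success.
# Usage: refresh_lookup <url> <dest-path>
refresh_lookup() {
    url="$1"
    dest="$2"
    tmp="$(mktemp)" || return 1
    # -f: fail on server errors, -s: silent, -S: still print error messages
    if curl -fsS "$url" -o "$tmp" && [ -s "$tmp" ]; then
        mv "$tmp" "$dest"   # replace in one step; readers never see a partial file
    else
        rm -f "$tmp"
        echo "download failed, keeping existing $dest" >&2
        return 1
    fi
}

# Example (same placeholders as the wget one-liner):
# refresh_lookup "URL-to-Data" /opt/cribl/data/lookups/filename.csv
```

Run it on a schedule (for example from cron) to keep the lookup current.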