This message originated from Cribl Community Slack.
We set up a Cribl TCP destination to send logs to a Cribl TCP source in another Worker Group. We are sending a high-volume source (~1 TB/day) and using Always On source persistent queuing. The Cribl TCP destination does not have persistent queuing enabled and is set to drop events when backpressure signals are received from the destination (the Cribl TCP source), since using PQ on both sides seemed redundant and we don't want to place any additional load on the sending Worker Group.

However, with this configuration we are seeing events dropped at the destination, and it looks like the Cribl TCP source is constantly exerting backpressure, even in Always On mode with plenty of disk available for queuing. Any ideas why this would happen? I would expect the queue disk to fill up for a Worker Process before the source would exert backpressure.

I read this (https://docs.cribl.io/stream/persistent-queues-sources#report-to-sender) and it seems like it could be related to the "Buffer size limit" setting, which defaults to 1000, but even after raising it to 1,000,000 the problem persisted.

Cribl TCP source PQ settings:
[image.png: screenshot of the Cribl TCP source PQ settings]
Solved
Best answer by matthew.filler985
Yeah - I want to say it's an "undocumented" feature that high-volume datasets should be using cribl_http instead. You can also tweak/tune those HTTPS settings for your environment.
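As a rough illustration of the suggested switch, the sending Worker Group's destination would be changed from Cribl TCP to Cribl HTTP. The fragment below is only a sketch: the output name, URL, port, and some field names are assumptions for illustration, so verify the exact schema against the Cribl HTTP destination docs for your Stream version before using it.

```yaml
# outputs.yml sketch - illustrative only; field names, URL, and port are
# assumptions, not verified against a specific Cribl Stream version.
outputs:
  to-other-worker-group:          # hypothetical output id
    type: cribl_http              # Cribl HTTP in place of cribl_tcp
    loadBalanced: true
    urls:
      - url: https://receiver-workers.example.com:10200  # hypothetical Cribl HTTP source endpoint
    onBackpressure: block         # block instead of drop once the receiver can keep up
```

The idea is the same topology as before (sender Worker Group → receiver Worker Group), just over the HTTP-based Cribl protocol, which also exposes request/timeout settings you can tune for high-volume traffic.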
