Solved

What Drives The Stream Home Charts

  • May 6, 2026
  • 8 replies
  • 0 views

This message originated from Cribl Community Slack.

Hey all. I'm relatively new to Stream, so I might be missing some fundamental knowledge here, but I'm working through some data input consistency issues... Can someone explain why I'm seeing drops in the Stream Home charts, but not when looking at cribl_metrics and total.in_bytes? What drives the Stream Home charts? I had a similar issue where my workers showed 0 free memory and no CPU load average for a 90-minute window (and my route sparklines were flat), yet I have logs for that time period. I need to be sure that I haven't lost data, and to produce an accurate picture of what happened and why it looked like things went offline or significantly dipped.

Links for this message:
Screenshot 2026-05-05 at 10.30.47 AM.png
Screenshot 2026-05-05 at 10.30.59 AM.png

Best answer by Brandon McCombs

The Leader relies on metrics sent by the worker nodes. So, were there any comms issues between the nodes and the Leader at the time of the gap? Do the metrics on the workers also show gaps? Don't teleport; instead, hit the API of the workers directly with your browser to view the same Monitoring page (it isn't available via teleport). Then check the logs on some of the workers during that time to see what was happening for the worker processes.
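The advice above boils down to comparing what the workers actually recorded against what the Leader renders. As a rough illustration (this is generic gap-checking logic, not a Cribl API; the sampling interval and the way you export the timestamps will depend on your deployment), a small script can scan a metrics time series for missing intervals:

```python
def find_gaps(timestamps, expected_interval, tolerance=1.5):
    """Return (prev, curr) pairs where the spacing between consecutive
    metric samples exceeds expected_interval * tolerance, i.e. where
    samples appear to be missing."""
    gaps = []
    for prev, curr in zip(timestamps, timestamps[1:]):
        if curr - prev > expected_interval * tolerance:
            gaps.append((prev, curr))
    return gaps

# Hypothetical samples taken every 60s, with one ~10-minute hole
# (the kind of window the Home charts showed as a dip).
samples = [0, 60, 120, 180, 780, 840, 900]
print(find_gaps(samples, expected_interval=60))  # → [(180, 780)]
```

If this reports no gaps on the workers while the Leader's charts still show a dip, that points at the Leader-side rendering or metric ingestion rather than lost data.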

8 replies

  • Author
  • Participating Frequently
  • May 6, 2026
And.... that dip is now gone on the Home dashboard. My dips from yesterday (mem/CPU) are still there though. I'm guessing some process backfills those charts...? It would seem today's "dip" is maybe not related to yesterday's, if the dip from yesterday is still there.

Links for this message:
image.png


  • Author
  • Participating Frequently
  • May 6, 2026
The only non-info stuff in cribl_internal_logs around the 7:40am dip is two events (error while flushing / request failed) around this error:

Error: socket hang up
    at Socket.socketOnEnd (node:_http_client:542:25)
    at Socket.emit (node:events:530:35)
    at Socket.emit (node:domain:489:12)
    at endReadableNT (node:internal/streams/readable:1698:12)
    at process.processTicksAndRejections (node:internal/process/task_queues:90:21)

  • Author
  • Participating Frequently
  • May 6, 2026
followed by this about 2 mins later, not sure if related:

..."cid":"api","channel":"LDClient","level":"warn","message":"Received I/O error ([object Object]) for streaming request - will retry"}

  • Author
  • Participating Frequently
  • May 6, 2026
There aren't any gaps in cribl_metrics or cribl_internal_logs for that time range. And about an hour ago, I refreshed the Home dashboard and the dip came back... then I refreshed again and it went away.

So it's possibly just a rendering issue for some reason. If you'd like us to investigate further, please open a case; we can track it better that way.

  • Author
  • Participating Frequently
  • May 6, 2026
Sounds good. It seems unrelated to the issue I found last night, which I'm handling separately. Thank you, Brandon!