Solved

Azure Event Hub Stream Workers Not Resuming Data Ingestion After Restart

  • March 19, 2026
  • 2 replies
  • 3 views

This message originated from Cribl Community Slack.

This may be a silly question, but regarding the Azure Event Hub source, let's say I have to restart my Stream workers or otherwise take them down briefly for maintenance. Once they come back up, they'll just resume the data ingestion where they left off, right? Do I need to do anything "special" to make sure it keeps track of where it needs to pick back up reading data?

Best answer by Brandon McCombs

Kafka uses per-partition offsets that are stored by the brokers and committed by consumers. When a consumer resumes, the broker tells it the last committed offset for each partition in the topic, and it picks up from there. That committed offset may not be the last point the consumer actually read, so some duplication can occur on resume, but nothing is skipped. This is all built into the Kafka protocol, which the Azure Event Hub source uses.
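To make the resume behavior concrete, here is a toy model in Python (an illustration only, not Cribl or Kafka API code): the broker remembers the last committed offset for a partition, a restarted consumer resumes from that offset, and any records read but not yet committed before the restart are re-delivered, which is why you get at-least-once delivery rather than data loss.

```python
# Toy model of Kafka-style offset commits (illustration only, not the real API).
class Broker:
    """Holds one partition's log and its last committed offset."""
    def __init__(self, records):
        self.records = records
        self.committed = 0  # last committed offset for this partition

    def commit(self, offset):
        self.committed = offset


class Consumer:
    """On start (or restart) it resumes from the broker's committed offset."""
    def __init__(self, broker):
        self.broker = broker
        self.position = broker.committed

    def poll(self, n):
        batch = self.broker.records[self.position:self.position + n]
        self.position += len(batch)
        return batch

    def commit(self):
        self.broker.commit(self.position)


broker = Broker(["a", "b", "c", "d"])

c1 = Consumer(broker)
c1.poll(2)    # reads "a", "b"
c1.commit()   # broker now records committed offset 2
c1.poll(2)    # reads "c", "d" -- then the worker goes down before committing

c2 = Consumer(broker)  # restarted worker resumes at committed offset 2
print(c2.poll(2))      # re-reads ["c", "d"]: duplication, but no loss
```

The same logic is why the original answer says duplication "can occur": anything read after the last commit is replayed after a restart.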

2 replies


  • Author
  • New Participant
  • March 19, 2026
Ah, ok, that makes sense and sounds good. That works for me. :slightly_smiling_face: Thank you for the info.