This message originated from Cribl Community Slack.
This may be a silly question, but regarding the Azure Event Hub source, let's say I have to restart my Stream workers or otherwise take them down briefly for maintenance. Once they come back up, they'll just resume the data ingestion where they left off, right? Do I need to do anything "special" to make sure it keeps track of where it needs to pick back up reading data?
Azure Event Hub Stream Workers Not Resuming Data Ingestion After Restart
Best answer by Brandon McCombs
Kafka tracks consumption with offsets that are stored by the brokers and committed by consumers. When a consumer resumes, the broker tells it the last committed offset for each partition in the topic, and the consumer picks up from there. Note that the last committed offset may be earlier than the last record the consumer actually read, so some records can be re-read and duplication can occur, but data isn't lost. This is all built into the Kafka protocol, which the Azure Event Hub source uses, so no special action is needed on your part.
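To illustrate the resume behavior described above, here is a minimal, hypothetical sketch (not Cribl or Kafka client code) of committed-offset semantics: the consumer commits periodically rather than after every record, so a restarted consumer resumes from the last committed offset and may re-read a few records.

```python
class Broker:
    """Stores the last committed offset per partition, as Kafka brokers do."""
    def __init__(self):
        self.committed = {}  # partition -> offset of the next record to read

    def commit(self, partition, offset):
        self.committed[partition] = offset

    def last_committed(self, partition):
        return self.committed.get(partition, 0)


class Consumer:
    def __init__(self, broker, partition, commit_every=5):
        self.broker = broker
        self.partition = partition
        self.commit_every = commit_every
        # On (re)start, resume from the broker's last committed offset.
        self.position = broker.last_committed(partition)

    def poll(self, log):
        """Read one record; commit periodically, not after every record."""
        record = log[self.position]
        self.position += 1
        if self.position % self.commit_every == 0:
            self.broker.commit(self.partition, self.position)
        return record


log = list(range(100))           # the partition's record log
broker = Broker()
consumer = Consumer(broker, partition=0)

for _ in range(13):              # read 13 records; commits happen at offsets 5 and 10
    consumer.poll(log)

# Simulated restart: a new consumer resumes from the last *committed* offset.
restarted = Consumer(broker, partition=0)
print(restarted.position)        # 10, not 13 -> records 10-12 get re-read (duplicates)
```

The same trade-off applies to a Stream worker restart: a short replay of already-read records is possible, but ingestion continues from the committed position without data loss.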
