I have verified that I can list the directories using the AWS CLI with the following command:
aws s3 ls s3://cisco-managed-us-east-2/my_directory_prefix/
I can pull the logs to my local directory using the following command:
aws s3 sync s3://cisco-managed-us-east-2/my_directory_prefix/dnslogs/ C:\umbrella_logs\
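For context on what the collector has to reproduce: both `aws s3 ls` and `aws s3 sync` boil down to paginated ListObjectsV2 requests scoped to a prefix (plus GetObject for sync). A minimal sketch of the list parameters involved, for comparing the working CLI run against the collector config (`list_params` is an illustrative helper, not part of any SDK):

```python
def list_params(bucket, prefix, max_keys=1000):
    # Parameters of an S3 ListObjectsV2 request scoped to one prefix,
    # which is what prefix-limited credentials typically allow.
    return {"Bucket": bucket, "Prefix": prefix, "MaxKeys": max_keys}

# The working CLI call corresponds to a request like:
params = list_params("cisco-managed-us-east-2", "my_directory_prefix/dnslogs/")
print(params["Prefix"])  # my_directory_prefix/dnslogs/
```

Whatever the collector sends must match these values exactly; S3 prefixes are literal string matches with no normalization.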
However, I am unable to pull them using the Umbrella S3 collector in my locally hosted Cribl Stream deployment. Any ideas on what I could do differently?
Here are my settings for the Cribl S3 collector with the personalized info redacted:
{
  "type": "collection",
  "ttl": "4h",
  "removeFields": [],
  "resumeOnBoot": false,
  "schedule": {
    "cronSchedule": "*/5 * * * *",
    "maxConcurrentRuns": 1,
    "skippable": true,
    "run": {
      "rescheduleDroppedTasks": true,
      "maxTaskReschedule": 1,
      "logLevel": "info",
      "jobTimeout": "0",
      "mode": "run",
      "timeRangeType": "relative",
      "timestampTimezone": "UTC",
      "timeWarning": {},
      "expression": "true",
      "minTaskSize": "1MB",
      "maxTaskSize": "10MB"
    },
    "enabled": false
  },
  "streamtags": [],
  "workerAffinity": false,
  "collector": {
    "conf": {
      "parquetChunkSizeMB": 5,
      "parquetChunkDownloadTimeout": 600,
      "partitioningScheme": "none",
      "awsAuthenticationMethod": "manual",
      "signatureVersion": "v4",
      "enableAssumeRole": false,
      "durationSeconds": 3600,
      "maxBatchSize": 10,
      "recurse": false,
      "reuseConnections": false,
      "rejectUnauthorized": true,
      "verifyPermissions": true,
      "disableTimeFilter": true,
      "bucket": "'cisco-managed-us-east-2'",
      "awsApiKey": "'my_aws_api_key'",
      "awsSecretKey": "'my_aws_secret_key'",
      "region": "us-east-2",
      "path": "'my_directory_prefix/dnslogs/'",
      "extractors": []
    },
    "destructive": false,
    "encoding": "utf8",
    "type": "s3"
  },
  "input": {
    "type": "collection",
    "staleChannelFlushMs": 10000,
    "sendToRoutes": true,
    "preprocess": {
      "disabled": true
    },
    "throttleRatePerSec": "0"
  },
  "savedState": {},
  "description": "Logs from Cisco Umbrella",
  "id": "Umbrella_logs"
}
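One detail worth double-checking in the export above: the bucket, key, and path values are stored with literal single quotes inside the JSON strings (e.g. "'cisco-managed-us-east-2'"). The variations listed below include trying the fields without quotes, but if this export reflects the failing run, every request would target a bucket literally named 'cisco-managed-us-east-2' and a prefix starting with a quote character, which would be consistent with both the Forbidden error and the empty discover. A quick check you can run against an exported conf (plain Python, illustrative only):

```python
import json

def find_quoted_values(conf):
    """Return config values wrapped in stray single or double quotes."""
    suspect = {}
    for key, value in conf.items():
        if isinstance(value, str) and len(value) >= 2 \
                and value[0] == value[-1] and value[0] in "'\"":
            suspect[key] = value
    return suspect

# Excerpt of the collector conf from the post:
conf = json.loads('''{
  "bucket": "'cisco-managed-us-east-2'",
  "region": "us-east-2",
  "path": "'my_directory_prefix/dnslogs/'"
}''')

print(find_quoted_values(conf))  # flags "bucket" and "path", not "region"
```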
Variations I have tried:
I have tried both 'cisco-managed' and 'cisco-managed-us-east-2' as the S3 bucket, both with and without single quotes.
I have tried the access and secret keys with and without single and double quotes. I have also rotated the keys a few times to rule out a specific key causing issues.
Since this is a Cisco-managed bucket, I have no role set within AWS, but for testing I have tried it with Assume Role both on and off.
I have tried the path with and without single and double quotes, with and without the additional /dnslogs/ segment, and with and without a trailing slash.
I have tried various versions of the bucket name in the Endpoint field, and with the Endpoint field blank.
I have tried both v2 and v4 for the signature version; v2 causes an error.
I have tried Reuse Connections, Reject Unauthorized Certificates, and Verify Bucket Permissions each both on and off.
With verify bucket permissions turned on I get the following error:
time: 2025-02-28T14:53:26.443Z
cid: w1
channel: task:discover
host: cribl01
source: /opt/cribl/state/jobs/default/1740754406.10.adhoc.Umbrella_logs/logs/task/tasklog.log
level: error
message: failed to load collect function
ioName: s3
ioType: collector
jobId: 1740754406.10.adhoc.Umbrella_logs
{}
reason:
message: S3 bucket 'cisco-managed-us-east-2' error: Forbidden message: null
bucket: cisco-managed-us-east-2
code: Forbidden
hint: Make sure access key is not inactive, empty, or incorrect; check resource and trust policies for your role; and make sure permission boundaries are set correctly. Also, ensure your permissions align with your bucket versioning settings.
name: S3Error
stack: Error
at new n (/opt/cribl/bin/cribl.js:15:112759)
at new a (/opt/cribl/bin/cribl.js:15:113217)
at new j (/opt/cribl/bin/cribl.js:15:13577295)
at Q.init (/opt/cribl/bin/cribl.js:15:...
taskId: discover
type: s3
With Verify bucket permissions turned off, I do not get an error; the job just reports that no logs were found.
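One possible reading of this: the hint text mentions bucket versioning settings, which suggests the verify step issues bucket-level calls (something like HeadBucket or GetBucketVersioning) rather than a prefix-scoped list. Cisco-managed buckets are generally described as granting only prefix-scoped list/get access, so a bucket-level probe could come back Forbidden even while listing under the prefix works, which would match sync succeeding from the CLI. A toy model of such a prefix-scoped grant (illustrative only, not real IAM evaluation):

```python
# Toy model of a prefix-scoped grant like the one a Cisco-managed
# bucket is described as providing: list/get under one prefix only.
ALLOWED_PREFIX = "my_directory_prefix/"

def allowed(action, key=None):
    if action in ("ListObjectsV2", "GetObject"):
        return key is not None and key.startswith(ALLOWED_PREFIX)
    # Bucket-level probes (HeadBucket, GetBucketVersioning, ...) fall
    # outside the grant, so they are denied -> "Forbidden".
    return False

print(allowed("ListObjectsV2", "my_directory_prefix/dnslogs/"))  # True: CLI sync works
print(allowed("GetBucketVersioning"))                            # False: verify probe fails
```

If that is what is happening, leaving Verify bucket permissions off is the right call, and the remaining "no logs were found" result points back at the bucket/path values themselves rather than at permissions (the config already has "disableTimeFilter": true, so the time range should not be filtering everything out).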
Relevant links:
https://docs.cribl.io/stream/collectors-s3/
https://docs.umbrella.com/umbrella-user-guide/docs/enable-logging-to-a-cisco-managed-s3-bucket