Guides, Documentation and Support Articles
This document provides configuration examples and reference links for setting up third-party load balancers with Cribl Stream syslog deployments. Load balancers are essential for distributing syslog traffic across multiple Cribl Stream Worker Nodes, because syslog senders have no built-in load balancing capabilities.

Prerequisites

Before implementing these configurations, ensure you have:

- Multiple Cribl Stream Worker Nodes deployed
- Network connectivity between your load balancer and Worker Nodes
- An understanding of your syslog traffic patterns (UDP vs. TCP, volume, etc.)
- Appropriate firewall rules and security policies in place

For architectural guidance and best practices, see the main Cribl Stream Syslog Use Case Guide.

Load Balancer Requirements

When configuring load balancers for syslog with Cribl Stream:

- Syslog traffic: Can use an Application Load Balancer or a Network Load Balancer
- Health checks: Ensure proper health check configuration for Worker Node availability
- Protocol support: Must support both UDP and TCP as needed for your syslog sources

Configuration Examples

F5 BIG-IP Example

This configuration creates a UDP virtual server and pool for load balancing syslog traffic across multiple Cribl Stream Workers:

```
ltm virtual udpsyslog_514_vs {
    destination 10.10.10.10:514
    ip-protocol udp
    mask 255.255.255.255
    pool udpsyslog_514_pool
    profiles {
        udp { }
    }
    vlans-disabled
}
ltm pool udpsyslog_514_pool {
    members {
        10.10.20.10:514 {
            address 10.10.20.10
            session monitor-enabled
            state up
        }
        10.10.20.20:514 {
            address 10.10.20.20
            session monitor-enabled
            state up
        }
    }
    monitor tcp
    service-down-action reset
}
```

Key Configuration Notes:

- Replace the IP addresses with your actual Worker Node IPs
- Adjust pool members based on your number of Worker Nodes
- Consider separate configurations for TCP and UDP if both are needed (see the sketch below)
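If you also need TCP syslog on this BIG-IP, a TCP counterpart can mirror the UDP example above. This is a minimal sketch, not a tested configuration: the virtual-server and pool names are placeholders, and the destination and member IPs are reused from the UDP example and should be replaced with your own.

```
ltm virtual tcpsyslog_514_vs {
    destination 10.10.10.10:514
    ip-protocol tcp
    mask 255.255.255.255
    pool tcpsyslog_514_pool
    profiles {
        tcp { }
    }
    vlans-disabled
}
ltm pool tcpsyslog_514_pool {
    members {
        10.10.20.10:514 {
            address 10.10.20.10
            session monitor-enabled
            state up
        }
        10.10.20.20:514 {
            address 10.10.20.20
            session monitor-enabled
            state up
        }
    }
    monitor tcp
    service-down-action reset
}
```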
Citrix NetScaler Example

Reference: Load Balancing syslog Servers

This configuration shows NetScaler acting as a load balancer, distributing incoming syslog traffic from various network devices across multiple Cribl Stream Worker Nodes. It creates separate load balancing virtual servers for UDP and TCP syslog traffic, ensuring proper load distribution and high availability for syslog ingestion:

```
# Define services for Cribl Stream Worker Nodes
# UDP Services
add service worker1-udp 192.0.2.10 SYSLOGUDP 514
add service worker2-udp 192.0.2.11 SYSLOGUDP 514
add service worker3-udp 192.0.2.12 SYSLOGUDP 514

# TCP Services
add service worker1-tcp 192.0.2.10 SYSLOGTCP 514
add service worker2-tcp 192.0.2.11 SYSLOGTCP 514
add service worker3-tcp 192.0.2.12 SYSLOGTCP 514

# Create virtual servers for load balancing
add lb vserver syslog-udp-lb SYSLOGUDP 0.0.0.0 514 -lbMethod ROUNDROBIN
add lb vserver syslog-tcp-lb SYSLOGTCP 0.0.0.0 514 -lbMethod ROUNDROBIN

# Bind UDP services to UDP virtual server
bind lb vserver syslog-udp-lb worker1-udp
bind lb vserver syslog-udp-lb worker2-udp
bind lb vserver syslog-udp-lb worker3-udp

# Bind TCP services to TCP virtual server
bind lb vserver syslog-tcp-lb worker1-tcp
bind lb vserver syslog-tcp-lb worker2-tcp
bind lb vserver syslog-tcp-lb worker3-tcp
```

Key Configuration Notes:

- Replace the IP addresses (192.0.2.10, 192.0.2.11, 192.0.2.12) with your actual Cribl Stream Worker Node IPs
- This example shows three Worker Nodes; adjust the number of services based on your deployment
- Uses the ROUNDROBIN load balancing method; consider LEASTCONNECTION for TCP traffic
- Creates separate virtual servers for UDP and TCP to ensure proper traffic handling
- The virtual servers listen on all interfaces (0.0.0.0); specify your load balancer IP if needed

Alternative Configuration for Hash-Based Load Balancing:

If you need consistent routing of syslog messages from the same source to the same Worker Node:

```
# Use source IP hash for consistent routing
add lb vserver syslog-udp-lb SYSLOGUDP 0.0.0.0 514 -lbMethod SOURCEIPHASH
add lb vserver syslog-tcp-lb SYSLOGTCP 0.0.0.0 514 -lbMethod SOURCEIPHASH
```

AWS Network Load Balancer (NLB) Guide

For AWS deployments, Network Load Balancer provides the best performance for syslog traffic.

Reference: UDP Load Balancing for Network Load Balancer

Key Considerations:

- Use NLB for both UDP and TCP syslog traffic
- Configure appropriate health checks for Worker Nodes
- Consider cross-zone load balancing for high availability
- Ensure security groups allow syslog traffic on the configured ports

NGINX Plus Guide

NGINX Plus provides robust TCP and UDP load balancing capabilities suitable for syslog deployments.

Reference: TCP and UDP Load Balancing

Key Features:

- Health checks for upstream servers
- Session persistence options
- SSL/TLS termination capabilities
- Real-time monitoring and statistics

HAProxy (Including TCP Break-Apart)

HAProxy offers excellent syslog load balancing capabilities, with ring buffer support for high-performance scenarios.

Reference: haproxy speaks syslog
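The reference above covers HAProxy's native syslog handling (ring buffers and related features). As a simpler starting point, here is a minimal plain TCP pass-through sketch; the listener port and Worker Node IPs mirror the earlier examples and are assumptions to adapt for your environment.

```
# haproxy.cfg (sketch): round-robin TCP syslog across two Cribl Stream Workers
frontend syslog_tcp_in
    mode tcp
    bind :514
    default_backend cribl_workers_tcp

backend cribl_workers_tcp
    mode tcp
    balance roundrobin
    server worker1 10.10.20.10:514 check
    server worker2 10.10.20.20:514 check
```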
Acknowledgements: Many thanks to Tomás García Hidalgo and Roberto Moreda for their creation and contribution of this collector!

Salesforce provides an API endpoint to execute a SOQL query and get records of the requested object. Salesforce defines standard objects, such as LoginHistory or SetupAuditTrail, whose records are useful in a SIEM pipeline. EventLogFile is a special case because its records contain data about event log files that are ready to be downloaded from a different endpoint.

This Cribl REST Collector enables a compact way to get:

- Salesforce records using the Query endpoint
- Salesforce event monitoring content from EventLogFile records using the sObject Blob Get endpoint

Find the collector here.

Usage Instructions

1. Import the Event Breaker Ruleset: Go to Processing -> Knowledge -> Event Breaker Rules -> Add Ruleset. Click Manage as JSON and paste the contents of breaker.json.
2. Import the REST Collector: Go to Data -> Sources -> Collectors -> REST -> Add Collector. Go to the Configure as JSON tab, click Import, and import the collector.json file.
3. Provide the required values: customer domain, API version, user name, password, client ID, and client secret.
4. (Optional) Edit the queries on object records in the format discover result code of the collector to suit your needs.
5. Commit and Deploy.

How it Works

The basic steps follow the usual workflow in Cribl collectors:

1. Discover the event log files to be downloaded using an HTTP request to get EventLogFile records.
2. Add a static list of discover results for records of other objects that don't require subsequent downloads, using format discover result.
3. Collect event log files and records using the appropriate URL and event breaker rules.

All included files are meant to be adapted for your specific case. Depending on your needs, you can create a collector just for records or just for event log files. They are based on an actual deployment and were developed by the teams at Repsol and Allenta.

Get Started

Find the latest instructions and information on the official Cribl GitHub page for the collector.
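For context, the two Salesforce REST endpoints the collector relies on can be exercised directly with curl. This is a hedged sketch, not part of the collector itself: the domain, API version (v58.0), token, and SOQL field list are placeholders to adjust for your org.

```bash
# Assumptions: $SFDC is your My Domain URL and $TOKEN is a valid OAuth access token.
SFDC="https://yourcompany.my.salesforce.com"
API="v58.0"

# Discover step: query EventLogFile records via the Query endpoint
curl -sG "$SFDC/services/data/$API/query" \
  -H "Authorization: Bearer $TOKEN" \
  --data-urlencode "q=SELECT Id, EventType, LogDate FROM EventLogFile WHERE LogDate = YESTERDAY"

# Collect step: download one event log file via the sObject Blob Get endpoint
curl -s "$SFDC/services/data/$API/sobjects/EventLogFile/<RECORD_ID>/LogFile" \
  -H "Authorization: Bearer $TOKEN"
```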
This guide walks through configuring Cribl Stream in FIPS mode on RHEL 9 by leveraging the OpenSSL 3 FIPS provider that is included when RHEL 9 is running in FIPS mode.

*** Prior to performing the steps below, please ensure that RHEL 9 is running in FIPS mode. ***

Configure the Leader Node

As of version 4.7, RBAC is required to run Stream in FIPS mode. Before starting the Leader for the first time, the license needs to be applied and mode-master needs to be configured. In the example below, all commands are run as the root user.

1. Ensure that git (version 1.8.3.1 or higher) is installed on the host:

```
[root@rhel9-stig-leader-1 ec2-user]# git -v
git version 2.43.5
```

2. If using fapolicy, follow the fapolicy configuration guide to trust the git core binary and create the rules policy.

3. Create the Cribl user:

```
adduser cribl
```

4. Change directory to /opt/ and then download and uncompress the Cribl binary. This example uses x64; modify as needed based on your specific architecture (see the download page for options):

```
cd /opt/
curl -Lso - $(curl https://cdn.cribl.io/dl/latest-x64) | tar zxv
```

5. Use ONE of the methods below to create the licenses.yml file and add the license to the $CRIBL_HOME/local/cribl/ directory.

OPTION 1: Set an environment variable $CRIBL_LICENSE that contains the value of the license key. Ensure that $CRIBL_HOME is also set, and run:

```
if [ ! -e $CRIBL_HOME/local/cribl/licenses.yml ]; \
  then mkdir -p $CRIBL_HOME/local/cribl; \
  echo -e "licenses:\n- $CRIBL_LICENSE" > $CRIBL_HOME/local/cribl/licenses.yml; fi
```

OPTION 2: Create a licenses.yml file based on this doc.

6. Generate the FIPS configuration file using the directory /etc/pki/tls, then modify the generated file by replacing fipsmodule.cnf with fips_local.cnf:

```
/opt/cribl/bin/cribl generateFipsConf -d /etc/pki/tls
sed -i 's/fipsmodule.cnf/fips_local.cnf/g' $CRIBL_HOME/state/nodejs.cnf
```

7. Configure Stream to start as a Leader in Distributed mode:

```
/opt/cribl/bin/cribl mode-master
```

8. Change ownership so the cribl user owns the /opt/cribl directory:

```
chown -R cribl:cribl /opt/cribl
```

9. Create an override.conf file that specifies the necessary FIPS environment variables. Run this command first and note the output, in particular the MODULESDIR value:

```
openssl version -a
```

Set the OPENSSL_MODULES value below to the MODULESDIR from the command above:

```
mkdir -p /etc/systemd/system/cribl.service.d/
cat <<EOL > /etc/systemd/system/cribl.service.d/override.conf
# Custom configurations for the service file
[Service]
Environment="OPENSSL_MODULES=<YOUR_VALUE_HERE>"
Environment="OPENSSL_CONF=/opt/cribl/state/nodejs.cnf"
Environment="CRIBL_FIPS=1"
EOL
```

10. Enable Cribl to be managed by systemd:

```
/opt/cribl/bin/cribl boot-start enable -u cribl
```

11. Start the Cribl service:

```
systemctl start cribl
```

Bootstrapping a Worker

1. Create a Node.js configuration file at $CRIBL_HOME/state/nodejs.cnf, or copy one from the Leader. For STIGed RHEL 9, it will look like this:

```
mkdir -p /opt/cribl/state/
cat <<EOL > /opt/cribl/state/nodejs.cnf
nodejs_conf = nodejs_init
.include /etc/pki/tls/fips_local.cnf
[nodejs_init]
providers = provider_sect
# this tells nodejs to enable fips at startup
alg_section = algorithm_sect
[provider_sect]
default = default_sect
# The fips section name should match the section name inside the
# included fips_local.cnf.
fips = fips_sect
[default_sect]
activate = 1
[algorithm_sect]
default_properties = fips=yes
EOL
```

2. Create an override.conf file:

```
mkdir -p /etc/systemd/system/cribl.service.d/
cat <<EOL > /etc/systemd/system/cribl.service.d/override.conf
# Custom configurations for the service file
[Service]
Environment="OPENSSL_MODULES=/usr/lib64/ossl-modules"
Environment="OPENSSL_CONF=/opt/cribl/state/nodejs.cnf"
Environment="CRIBL_FIPS=1"
EOL
```

3. Run the bootstrap command that was generated from the Leader:

```
curl 'http://rhel9-stig-leader-1:9000/init/install-worker.sh?group=default&token=XXXXXX&user=cribl&user_group=cribl&install_dir=%2Fopt%2Fcribl' | bash -
```
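Optionally, you can sanity-check the FIPS prerequisites on either node. This is a hedged sketch using standard RHEL and systemd commands; the cribl unit name matches the boot-start setup above.

```bash
# Confirm RHEL 9 itself is in FIPS mode (should report that FIPS mode is enabled)
fips-mode-setup --check

# Confirm the systemd drop-in and its environment variables are applied to the unit
systemctl cat cribl
systemctl show cribl --property=Environment
```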
This article explains how to enable Cribl Packs to work with GitHub CI/CD. By storing your Pack in a Git repository, you can take advantage of version control and CI/CD while developing and collaborating. Multiple collaborators can pull changes into their own branches, improve the Pack, and then push their work to be merged into the production branch. The production Cribl Leader then upgrades the Pack from the production branch.

Initial Setup

Before we can upload the Pack to GitHub, create a new private repository in the GitHub UI. In the upper-right corner of any GitHub page, click the "+" icon. From the dropdown menu, select "New repository", then select "Private" repository. Enter the repo name, for example https://github.com/rdagan-cribl/my-fortigate-pack.git.

Next, create a new GitHub PERSONAL_ACCESS_TOKEN. Under your GitHub user profile, go to Settings -> Developer Settings -> Personal access tokens -> Tokens (classic). Make sure you copy the value of the PERSONAL_ACCESS_TOKEN to your notepad.

Copy the Initial Pack to GitHub Using the GitHub UI

Download the existing Pack from the Cribl Pack Dispensary or from the Cribl Leader UI to your local system and untar the Pack. For example:

```
/Users/rdagan/Downloads/FortigatePack % tar xvf cribl-fortinet-fortigate-firewall.crbl
```

In the GitHub UI, go to your Repository -> Add Files -> Upload Files. Drag all the files EXCLUDING THE .CRBL FILE from your local system to the repo and commit the changes. You should now see all of the Pack content successfully uploaded to the 'main' branch.

Set Up CI/CD in GitHub and the Cribl Leader UI

Next, create a dev branch for the repository. In the Code tab, click New Branch and enter the branch name "dev". After this step, we will have two copies of our Pack configurations: one in the 'main' branch and another in the 'dev' branch. We will use the 'dev' branch to develop our Pack.

Next, in the Cribl Leader UI, import the Pack from the Git 'dev' branch into the Cribl Leader. Enter the URL (for example: https://rdagan-cribl:%3CPERSONAL_ACCESS_TOKEN%3E@github.com/rdagan-cribl/my-fortigate-pack.git) and use the 'dev' branch.

In the Cribl Leader UI, add a new Function, Pipeline, or Route to your Pack. Then export the Pack back to GitHub with "Publish to Git". This pushes your changes to the 'dev' branch.

In the GitHub UI, merge the dev updates into main: click the Compare & Pull Request button -> Merge pull request -> Confirm merge, with Create a merge commit selected.

Pull the Main Pack Changes into Production

On the Cribl Production Leader, we are going to pull the Pack from the 'main' branch. Note: the Pack will not be upgraded unless the Pack version has been modified. In this example, the version was upgraded from 0.6.1 to 0.6.2. In the Cribl Leader UI, select Packs -> Actions -> Upgrade -> Upgrade from Git, and input 'main' as the branch.

Summary

The above example shows how to enable Cribl Packs to work with GitHub CI/CD. Now you can collaborate with others and have your Pack configurations stored in your repository!
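For collaborators who prefer the command line over the GitHub UI, the same repository and dev-branch workflow can be sketched with plain git commands; the repository URL and branch name are the ones used in the example above, and the file layout is whatever your untarred Pack contains.

```bash
# Clone the private repo created above (use your PERSONAL_ACCESS_TOKEN when prompted)
git clone https://github.com/rdagan-cribl/my-fortigate-pack.git
cd my-fortigate-pack

# Copy in the untarred Pack contents (everything except the .crbl archive), then:
git checkout -b dev
git add .
git commit -m "Initial Pack import"
git push -u origin dev
```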
Introduction

One of the main advantages of the Cribl platform is its ability to route data to multiple destinations effortlessly; adding a new system can be as straightforward as specifying a Destination, processing Pipeline, and Route within Cribl Stream. The Elastic Stack is a particularly promising backend to integrate into this setup. Built on Elasticsearch, it acts as a comprehensive search engine capable of locating specific security events (finding a needle in a haystack) with sub-second latency, regardless of dataset size.

The traditional method of incorporating a new vendor such as Elastic into an existing environment often requires deploying additional agents and data collectors. This can introduce complexity and potential disruptions that are both resource-intensive and can impact existing operations and system stability. We are going to describe the easiest way to send your Cribl Stream or Edge data to Elastic: Elastic's own Cribl integration.

Elasticsearch Data Ingest 101

Before describing how Cribl Stream and Elasticsearch can work together, let's quickly recap how Elastic integrations work. If you are familiar with the Elastic Stack, feel free to skip to the next section.

How the Elastic Integrations Work

Elastic Integrations are built to simplify data collection and ingestion into Elasticsearch by providing pre-configured setups of different Elastic Stack components. Various operational data types from hosts, applications, containers, and more are gathered by lightweight log shippers such as Elastic Agents and APM agents. All data types are sent to Elasticsearch, where they are organized into data streams: append-only, high-velocity time-series indices optimized for rapid ingestion. To maintain consistency across sources, data is mapped to the Elastic Common Schema (ECS), offering a standardized format that makes analysis and correlation easier.

Once in Elasticsearch, data passes through ingest pipelines, which are predefined sequences of processors that transform or enrich incoming documents before indexing. These pipelines handle tasks like parsing JSON fields, appending metadata, converting date formats, and more. Each integration sends data to its own data stream, which has an assigned default ingest pipeline. This pipeline handles all data parsing, converting data into ECS before it is indexed and stored in Elasticsearch.

To use Elastic integrations, data will be sent from Cribl Edge or Stream to Elasticsearch, and the integrations activated through their predefined ingest pipelines. These pipelines convert the data from Cribl into the Elastic Common Schema and ensure it enters the correct data streams. As a result, all the prebuilt features in Elasticsearch and Kibana, such as dashboards, UIs, alerts, and ML jobs, will function seamlessly.
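To make the ingest pipeline concept concrete, here is a minimal, hypothetical pipeline definition for Kibana Dev Tools. It is not part of the Cribl or any other Elastic integration; the pipeline name, fields, and processors are illustrative only.

```
PUT _ingest/pipeline/example-enrich
{
  "description": "Hypothetical example: parse a JSON payload and add metadata before indexing",
  "processors": [
    { "json": { "field": "message", "target_field": "parsed" } },
    { "set":  { "field": "event.module", "value": "example" } },
    { "date": { "field": "parsed.timestamp", "formats": ["ISO8601"] } }
  ]
}
```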
Elastic Cribl Integration

The Elastic Cribl integration lets you ingest logs from Cribl Stream or Edge into Elastic, enabling prebuilt integrations, dashboards, and alerts.

What is the Elastic Cribl Integration?

The Elastic Cribl integration is a connector that allows users to ingest logs and metrics from Cribl into Elastic using Elastic's Fleet integration data streams. This enables organizations to leverage the Elastic Common Schema (ECS) for unified data analysis, dashboards, and alerting.

What Does It Provide?

The Elastic Cribl integration enables ingestion of logs and metrics from Cribl into Elastic, allowing users to leverage ready-to-use dashboards and alerting features within Kibana. By mapping data to the Elastic Common Schema (ECS), the integration ensures consistency and simplifies analysis across datasets. This integration does not require the deployment of an Elastic Agent; data can be ingested directly, provided a policy with the necessary integration is configured. The integration is available in the Elastic Integrations catalogue within Kibana.

The Cribl integration, officially supported by Elastic, is available to users with a Basic subscription tier. This ensures accessible support and functionality without requiring a premium plan.

How Does the Elastic Cribl Integration Work Under the Hood?

The Elastic Cribl integration works by having Cribl send logs and metrics to Elastic through either the Elastic Cloud or Elasticsearch output. The integration uses the _dataId field to route incoming data to the configured data stream within Elastic, while also tagging each event so that the Cribl integration dashboard can display both the type and volume of data ingested. Once received, the data flows into pre-configured Fleet integration data streams, where it is indexed using ECS mappings, ingest pipelines, and dashboards. Access and permissions for data ingestion are managed using API keys, ensuring secure and controlled data flow throughout the integration.

This setup allows organizations to centralize observability and security data from Cribl into Elastic for advanced analytics and monitoring.

Installation and Use

1. Install Integration Assets in Kibana: Go to Management > Integrations in Kibana. Search for the Cribl integration and install it to load index templates, ingest pipelines, and dashboards. There is no need to add it to a new or an existing agent; this integration works in Elasticsearch alone.
2. Configure the Cribl Source: In Cribl Stream or Edge, set the _dataId field to specify the data source. See Cribl's Data Onboarding documentation for more details.
3. Map Data IDs in Kibana: Map each _dataId from Cribl to the corresponding Fleet integration data stream in Kibana.
4. Set Up the Elastic Destination in Cribl: Choose between the Elastic Cloud output (for cloud) or the Elasticsearch output (for self-managed). Set the Cloud ID (for cloud) or Bulk API URLs (for self-managed) to point to your Elastic cluster. Set the Index or Data Stream to logs-cribl-default (logs) or metrics-cribl-default (metrics). Provide an Elasticsearch API key with at least "auto_configure" and "write" permissions for the relevant indices (a sketch of creating one follows).
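As referenced in step 4, the API key can be created in Kibana Dev Tools. This is a minimal sketch that assumes the relevant data streams follow the logs-*-* and metrics-*-* naming pattern; the key name and role name are placeholders.

```
POST /_security/api_key
{
  "name": "cribl-stream-ingest",
  "role_descriptors": {
    "cribl_writer": {
      "indices": [
        {
          "names": ["logs-*-*", "metrics-*-*"],
          "privileges": ["auto_configure", "write"]
        }
      ]
    }
  }
}
```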
Example: Onboarding Palo Alto NextGen Firewall Syslog Data to Elastic

I am going to use my Docker Compose setup with Cribl Stream and the Elastic Stack, but you can use whatever setup you want. My GitHub repo is available here. Let's start up the stack. Once it is up and running, Kibana will be available at https://localhost:5601 (elastic/cribldemo) and the Cribl Stream Admin UI at http://localhost:9000 (admin/cribldemo).

Configure the Elastic Side

Let's jump into Kibana and install the Palo Alto NextGen Firewall integration. We will not add it to an agent because our setup doesn't require agents; the data will be sent from Cribl Stream directly into Elasticsearch.

1. Search for and select the Palo Alto Next-Gen Firewall integration from the Integrations menu under Manage. Click "Add Palo Alto Next-Gen Firewall".
2. Click "Save and continue" at the bottom without changing anything.
3. Click "Add Elastic Agent later". Remember, we are going to be using Cribl Stream data directly.

Our PAN Firewall integration has been installed. Next, let's install the Elastic Cribl integration.

1. Search for and select Cribl in the integrations list. Click "Add Cribl".
2. While we are at it, let's configure the _dataId field to map the value 'pan' to the logs-panw.panos data stream. This will activate the right Elastic integration for every message Stream sends with _dataId = 'pan'. Click "Save and continue".
3. Click "Add Elastic Agent later". This integration doesn't need an Agent.

Our Cribl integration has been installed.

Configure the Cribl Side

Go to QuickConnect. As a source, let's configure a datagen that uses Palo Alto NextGen Firewall TRAFFIC data. This source will set the _dataId field to the value 'pan'.

1. Add a new source and select Datagen.
2. Configure it with palo-alto-traffic.log, leaving the rest as defaults, and give it a name.
3. Under Fields, add a _dataId field and give it a value of 'pan'. It can be anything, but it has to match the setting we configured in Kibana for that data stream. Hit Save when done.

Next, let's create an Elasticsearch Destination where the data will be sent. Note the index we are sending the data to: logs-cribl-default. This will not be the data stream where the data ultimately lands; it will be overridden by the Cribl integration and the mapping we configured between the data stream and the value in the _dataId field.

1. In the QuickConnect screen, click "Add Destination" and select the Elasticsearch tile.
2. Use the following values to configure the Elasticsearch destination:
   - Name: es01
   - Bulk API URLs: https://es01:9200/_bulk
   - Index or Data stream: logs-cribl-default
   - Authentication method: Manual
   - Username: elastic
   - Password: cribldemo
3. Under Advanced Settings, disable certificate validation. Don't do this in production; it is just to get the demo going. Click Save when done.
4. Saving the dialog will close the form. Open it again and test the connectivity to Elasticsearch using the Test tab. Click Run Test; it should give you a green Success banner at the bottom. Close the dialog when done.

There is one more thing. Let's temporarily route our traffic to a devnull destination. Click and drag the "+" sign next to the datagen source, drop it on the devnull destination, and accept the passthrough option. This will allow us to capture and inspect the traffic: hover over the datagen source and select the Capture option. This shows us the data generated by the source. This awesome feature of Cribl allows you to see your data in all its glory as it flows through your data pipelines!

We need to remove the source field from our events. Otherwise, Elasticsearch will reject the data, because a field named "source" already exists in the mapping and this would result in a mapping conflict. The payload is sent inside the _raw field, so it will work without this field. Let's create a simple Stream Pipeline with one Eval function that removes the field:

1. Select Processing -> Pipelines from the menu at the top.
2. Click Add pipeline -> Create Pipeline. Give it a name and click Save.
3. Click Add Function and select/type Eval.
4. At the bottom of the Eval function, specify that the field source should be removed. Click Save.
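For reference, the resulting Pipeline, exported as JSON, looks roughly like the sketch below; the pipeline id is a placeholder and the exact export format may vary slightly between Cribl versions.

```
{
  "id": "remove-source-field",
  "conf": {
    "output": "default",
    "functions": [
      {
        "id": "eval",
        "filter": "true",
        "disabled": false,
        "conf": {
          "remove": ["source"]
        }
      }
    ]
  }
}
```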
Go back to QuickConnect (in the menu, select Routing -> QuickConnect) and add this Pipeline as a processing Pipeline for the data that flows from our Palo Alto traffic datagen source to our Elasticsearch Destination. Click the link we added to send data to the devnull destination and disconnect it from devnull. Now drag the "+" sign to connect to the Elasticsearch destination. Select the Pipeline option and choose the pipeline that we just created. Hit Save. That is what the complete result is going to look like. If it's your first Pipeline in Cribl, congratulations!

Inspect the Data in Kibana

Let's jump into Kibana, rewind the time (since our datagen sends out predefined events timestamped in the past), and watch the out-of-the-box Kibana dashboards light up with the data sent by Cribl Stream. In Kibana, select the Dashboards menu and open the [Logs PANW] Network flows dashboard. You may need to rewind the time a few years in the time picker to see data landing in Elastic; I selected the last 10 years to be sure. And, voila! Cribl sends PAN Traffic data to Elastic using the Cribl and PAN integrations, and the data lands there as ECS, activating all the dashboards!

Conclusion

Integrating Cribl with Elastic helps avoid the complexity of installing additional data collection systems alongside the existing setup. This robust integration simplifies data management, improves operational efficiency, and unlocks deeper insights from your data. You can easily connect Elasticsearch to your current Cribl data pipelines. This means that to try out another tool, such as the Elastic Stack, there's no need to reconfigure your existing data ingestion architecture or make disruptive changes. Cribl functions as a versatile data routing and processing layer, enabling you to effortlessly direct your transformed and enriched data to Elasticsearch for indexing and analysis.
This article provides a brief overview of permissions in Cribl.Cloud. The full documentation covering permissions can be found here.

Key Guiding Rules

For Cribl.Cloud, there are two key rules to help with thinking about permissions:

1. Permissions are purely additive. If there are multiple reasons someone should have a certain level of permissions for something, they get the most permissive. Additionally, there isn't a way to use negative permissions or exclusions such as "Give read access to all, except members of team A".

2. Permissions are inherited. Permissions from Organizations flow down to Workspaces, which flow down to products, and so on.

The third bonus rule/tip is something that takes some thinking about:

3. Read Only is not the lowest level of permissions; User is the lowest. The User level of permissions means that a member is an "addressable entity" that can be granted complementary permissions on top. For example, if a member is granted Stream User permissions, they'd then have to have individual Worker Groups shared with them to be able to see them in the UI.

With those key rules in mind, the best advice for permissions is to start small with the fewest layers of permissions and add extra ones as required via team memberships.

Diagram: Cribl.Cloud permissions

Explanation and Notes

The highest-level grouping is the Organization. Organizations are attached to AWS regions, and each organization has a unique random ID (Org ID). Organizations each "contain" the Cribl product suite; behind the scenes, these map to a specific AWS account. Within an organization, people (members) can be granted User, Admin, or Owner permissions. Organization Admins and Organization Owners can make significant changes to an organization, for example enabling or modifying SSO and changing Workspaces, so these permission levels should be used rarely and access carefully controlled.

The next layer down is the Workspace. Each Workspace offers a multi-tenancy capability with higher isolation, which makes it a good fit for different business units, subsidiaries, or customers in a managed service. At the Workspace layer, people can be granted Member, Admin, or Owner permissions. Again, Workspace Owners have a high level of permissions; this permission level should be used carefully and access controlled.

Within Workspaces, members can be granted product-level permissions. These can be User, Read Only, Editor, and Admin. Note that User doesn't give access to any Cribl Stream Worker Groups; these would have to be added additionally.

Finally, within each product, there are resource-level sharing options. These include Stream Projects or Search Resources. You can be very fine-grained if you'd like!

Conclusion

This overview should give you a good introduction to Cribl.Cloud permissions. Please take a look at the docs if you'd like to learn more about this topic, or reach out with any questions.
Cribl has the fantastic ability to encrypt data at the field level as it passes through. To do this, we often use the Mask function with the replace value referencing C.Crypto:

- I had previously created a key in Group Settings -> Security -> Encryption Keys AND copied the key value for safe keeping. You only have one chance to do this!
- I reference the key ID in the above C.Crypto call: B1QQsz
- I'm applying the encryption to the entire secret_field value: (.*)
- g1 references that matched data

This is cool. But what do we do when we need to recover the value and we don't have the same Worker Group to run it back through? Let's go through the whole process.

Create the Key and Keep the Value

Visit Settings -> Security -> Encryption Keys and click Add. I used the aes-256-cbc algorithm, the local KMS, and specified a key class of 0. I did not specify an expiration time, nor did I use an initialization vector. With no vector specified, the default value is 16 hex 0s.

Important: When you click Save you will be presented with the encryption key. This is your only chance to save this key. Without it, you won't be able to extract encrypted values outside of this Worker Group. Save the key value in a safe place. Note the key ID after saving as well; we'll need to reference that. (But it will be available any time, unlike the value.)

Create Some Test Data

To make this easy, I created sample encrypted data by popping up a filter edit screen. You can do this in many places. In this case, I went to Routes, chose the first one, and popped out the filter editor. Then I added the C.Crypto call as mentioned above to test the output.

The output string contains the key ID. The actual encrypted value is contained after the colons and before the ending hash. It is base64 encoded, but not padded as will be required for the decode. We'll take care of that in the Python code.

Decode the Encrypted String

Using the pycryptodome Python library, and Chad Gippity's help, I created the script below. Key stumbling points:

- Getting the Crypto module installed; had to use the pycryptodome library.
- The encrypted value is not padded; had to add a little if block to do that.
- The default IV is all zeroes, and had to account for that.

```python
from base64 import b64decode
from binascii import unhexlify
from Crypto.Cipher import AES


def decrypt_aes_cbc(encrypted_base64, key_hex, iv):
    # Before we can un-base64 the encrypted data, make sure it's padded correctly
    missing_padding = len(encrypted_base64) % 4
    if missing_padding:
        encrypted_base64 += '=' * (4 - missing_padding)

    # Then decode the key from hex and the ciphertext from base64
    key = unhexlify(key_hex)
    encrypted = b64decode(encrypted_base64)

    # Create cipher object and decrypt
    cipher = AES.new(key, AES.MODE_CBC, iv)
    plaintext = cipher.decrypt(encrypted)

    # Remove PKCS7 padding (commonly used in AES-CBC)
    pad_len = plaintext[-1]
    if isinstance(pad_len, str):
        pad_len = ord(pad_len)  # Py2 compat, rarely needed
    plaintext = plaintext[:-pad_len]

    return plaintext.decode('utf-8')


# Example usage
if __name__ == "__main__":
    # Replace with your actual Cribl values:
    encrypted_base64 = "VPuxLjf4ByJgja2GLwsXXQ"
    key_hex = "0a31961cfbb20966bc2931b38788a07a86a845b6fb4f2c2398d54b9c2618a43f"
    iv = b'\x00' * 16
    result = decrypt_aes_cbc(encrypted_base64, key_hex, iv)
    print("Decrypted plaintext:\n", result)
```

And the output:

```
bash-3.2$ python3 test.py
Decrypted plaintext:
 test
```

Hurray! We got the "test" string.
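As a cross-check, the same decryption can be sketched with the openssl CLI, assuming the same ciphertext, key, and all-zero IV used in the script above (openssl strips the PKCS7 padding by default; GNU coreutils base64 is shown here).

```bash
# Pad the base64 ciphertext, decode it, then decrypt with the saved key (hex) and a 16-byte zero IV
echo -n 'VPuxLjf4ByJgja2GLwsXXQ==' | base64 -d | \
  openssl enc -d -aes-256-cbc \
    -K 0a31961cfbb20966bc2931b38788a07a86a845b6fb4f2c2398d54b9c2618a43f \
    -iv 00000000000000000000000000000000
```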
Obviously this isn't very useful as it is, but as a POC and a sample of the mechanism, it should provide a starting point for creating a more robust tool. Hope this helps!
Executive Summary

Microsoft is retiring the HTTP Data Collector API on September 14, 2026. This affects custom tables (_CL suffix) fed by the Cribl Azure Monitor destination. This guide provides a step-by-step migration path, using the DCR Automation solution as the primary route to the modern Sentinel destination built on the Azure Logs Ingestion API.

Quick Migration Path

Use the DCR Automation Tool:

```
# 1. Get the automation tool
git clone https://github.com/criblio/Cribl-Microsoft.git
cd Cribl-Microsoft/Azure/CustomDeploymentTemplates/DCR-Automation

# 2. Important: Read the QUICK_START.md file first and configure the files
#    Cribl-Microsoft/Azure/CustomDeploymentTemplates/DCR-Automation/QUICK_START.md
#    Update/review:
#      azure-parameters.json
#      NativeTableList.json
#      CustomTableList.json

# 3. Run
.\Run-DCRAutomation.ps1
# Select [4] for Custom Tables with Direct DCRs

# 4. Review and copy Cribl configs from cribl-dcr-configs/destinations/
```

Why This Migration is Required

Custom tables of type "Custom table (Classic)" need migration prior to the retirement of the HTTP Data Collector API. The new Logs Ingestion API provides:

- Enhanced security: OAuth-based authentication vs. shared keys
- Data transformations: KQL-based filtering and modification
- Granular RBAC: Fine-grained access control
- Schema control: Prevents uncontrolled column creation
- Better performance: Optimized ingestion pipeline

Prerequisites

Azure environment:
- Log Analytics workspace with contributor rights
- Permissions to create Data Collection Rules (DCRs)
- PowerShell 5.1+ with Az PowerShell modules
- Ability to execute PowerShell (execution policy override) to interact with Azure objects

Cribl environment:
- Cribl Stream with existing Azure Monitor destinations
- Custom tables using the Azure Monitor tile

DCR Automation Tool:

```
git clone https://github.com/criblio/Cribl-Microsoft.git
cd Cribl-Microsoft/Azure/CustomDeploymentTemplates/DCR-Automation
```

How DCR Automation Handles Table Migration

AUTOMATED: The DCR Automation tool now automatically handles table migration!

What the automation does:
- Detects table types automatically (Classic vs. Modern)
- Migrates Classic tables to DCR-based when needed
- Creates DCRs for all compatible tables
- Exports Cribl configurations ready for import

Table types (handled automatically):
- Custom Table (Classic) = MMA type: works with the old API but not the new API; the automation auto-migrates it to DCR-based and creates a DCR.
- Custom Table = DCR type: works with the old API (until 2026) and the new API; the automation creates DCRs directly.

Step-by-Step Migration

Step 1: Inventory Custom Tables

In Azure Portal -> Log Analytics -> Tables:
- Filter by _CL or Custom type
- Note table names for configuration
- Type column reference: "Custom Table" = ready for DCR creation; "Custom Table (Classic)" = will be auto-migrated by the automation

Step 2: Create an Azure App Registration

1. Azure Active Directory -> App registrations -> New registration
2. Configure: Name: cribl-sentinel-connector; Single tenant
3. Save the Application ID and Directory ID
4. Create a client secret and copy it immediately
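If you prefer to script Step 2, the app registration can also be created with the Azure CLI. This is a hedged sketch: the display name matches the example above, the placeholder IDs must be replaced, and you still need to record the appId, tenant ID, and generated secret for azure-parameters.json.

```bash
# Create the app registration (single tenant) and a service principal for it
az ad app create --display-name cribl-sentinel-connector --sign-in-audience AzureADMyOrg
az ad sp create --id <appId-from-previous-output>

# Generate a client secret (copy the password from the output immediately)
az ad app credential reset --id <appId-from-previous-output> --append

# Capture the tenant (directory) ID
az account show --query tenantId -o tsv
```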
Step 3: Configure DCR Automation

Edit azure-parameters.json:

```
{
  "resourceGroupName": "your-rg-name",
  "workspaceName": "your-workspace",
  "location": "eastus",
  "dcrPrefix": "dcr-cribl-",
  "tenantId": "your-tenant-id",
  "clientId": "your-app-client-id",
  "clientSecret": "your-app-secret"
}
```

Edit CustomTableList.json with your custom tables (the automation handles migration):

```
[
  "FirewallLogs_CL",
  "ApplicationLogs_CL",
  "CloudFlare_CL"
]
```

Step 4: Run DCR Automation (Handles Migration + DCR Creation)

```
# Connect to Azure
Connect-AzAccount

# Option 1: Interactive Menu (Recommended)
.\Run-DCRAutomation.ps1
# Select [4] "Deploy DCR (Custom Direct)"

# Option 2: Command Line
.\Run-DCRAutomation.ps1 -Mode DirectCustom
```

What this does automatically:
- Detects table types (Classic vs. Modern)
- Migrates Classic tables to DCR-based format
- Captures schemas from Azure
- Creates DCRs for each table
- Exports Cribl configurations to cribl-dcr-configs/
- Generates individual destination files

Step 5: Assign DCR Permissions

Grant the app registration access to each DCR from the portal or through your standard change process (a CLI sketch is included at the end of this article).

Step 6: Configure Cribl Stream

Import the destination configs:
- Navigate to Manage -> Data -> Destinations
- Add a Microsoft Sentinel destination for each table
- Use the configs from cribl-dcr-configs/destinations/
- Update the secret in the Authorization section with your app registration secret

Update pipelines:
- Critical step: The output from Cribl must match the DCR schema for data to be accepted
- Ensure pipeline output matches the DCR schema
- Review the Cribl Packs Dispensary for Sentinel content and examples to get started

Step 7: Test and Validate

```
// Check data flow
YourTable_CL
| where TimeGenerated > ago(1h)
| summarize Count = count(), LastRecord = max(TimeGenerated)
| extend Status = iff(Count > 0, "Data flowing", "No data")
```

Step 8: Gradual Cutover

- Week 1: Test with 10% of traffic
- Week 2: Increase to 50%
- Week 3: Full cutover to Sentinel destinations
- Week 4: Remove old Azure Monitor destinations

DCR Automation Menu Options

When running .\Run-DCRAutomation.ps1:

```
DEPLOYMENT OPTIONS:
  [1] Quick Deploy (both Native + Custom)
  [2] Deploy DCR (Native Direct)
  [3] Deploy DCR (Native w/DCE)
  [4] Deploy DCR (Custom Direct)   <- For custom table migration
  [5] Deploy DCR (Custom w/DCE)
```

For most migrations, choose [4] Custom Direct - the simplest architecture.

Troubleshooting

- "Cannot create DCR": Check that the table exists and the automation has proper permissions.
- "Table migration failed": The automation will retry; ensure workspace contributor access.
- "No data via new API": Check IAM roles on the DCR and the app registration.
- "Schema mismatch": Update the pipeline to match the DCR schema.
- "Classic table detected": Normal - the automation will migrate it automatically.

Timeline for Migration

- Week 1: Inventory tables and create the app registration. Run the DCR automation (handles table migration + DCR creation). Configure Cribl and test with limited traffic. Note: Pipelines will need to transform Cribl output to match the table schema.
- Weeks 2-3: Gradual cutover to the new destinations.
- Before September 2026: Complete the migration before the API retirement.

Key Benefits of DCR Automation

- Fully automated migration: Automatically migrates Classic tables to DCR-based
- Single solution: Handles table migration, DCR creation, and Cribl config export
- Automatic schema detection: No manual schema definition needed
- Cribl integration: Exports ready-to-use destination configurations
- Interactive menu: Guided deployment with confirmation prompts
- Template generation: Creates ARM templates for CI/CD scenarios
- Error handling: Comprehensive validation and user guidance
- Smart detection: Identifies table types and applies the appropriate migration strategy

Support

- Pipeline transformation support: Reach out to your account team
- Tool issues: James Pederson jpederson@cribl.io
- Community: Cribl Slack

Summary: The DCR Automation tool is your complete migration solution. It handles the complexity of Azure API interactions and provides ready-to-use Cribl configurations, making the migration from Azure Monitor to Sentinel destinations straightforward and reliable.
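Referring back to Step 5 (Assign DCR Permissions), here is a hedged Azure CLI sketch. The Logs Ingestion API typically requires the Monitoring Metrics Publisher role on each DCR; the IDs and names below are placeholders to replace with your own, and the command is repeated per DCR.

```bash
# Grant the app registration permission to post data to one DCR
az role assignment create \
  --assignee "<app-client-id>" \
  --role "Monitoring Metrics Publisher" \
  --scope "/subscriptions/<subscription-id>/resourceGroups/<rg-name>/providers/Microsoft.Insights/dataCollectionRules/<dcr-name>"
```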