33 changes: 19 additions & 14 deletions config/_default/menus/main.en.yaml
@@ -5918,76 +5918,81 @@ menu:
parent: observability_pipelines_destinations
identifier: observability_pipelines_datadog_logs
weight: 407
- name: Datadog Metrics
url: observability_pipelines/destinations/datadog_metrics/
parent: observability_pipelines_destinations
identifier: observability_pipelines_datadog_metrics
weight: 408
- name: Elasticsearch
url: observability_pipelines/destinations/elasticsearch/
parent: observability_pipelines_destinations
identifier: observability_pipelines_elasticsearch
weight: 408
weight: 409
- name: Google Chronicle
url: observability_pipelines/destinations/google_chronicle
parent: observability_pipelines_destinations
identifier: observability_pipelines_google_chronicle
weight: 409
weight: 410
- name: Google Cloud Storage
identifier: observability_pipelines_google_cloud_storage
url: /observability_pipelines/destinations/google_cloud_storage/
parent: observability_pipelines_destinations
weight: 410
weight: 411
- name: Google Pub/Sub
identifier: observability_pipelines_google_pubsub
url: /observability_pipelines/destinations/google_pubsub/
parent: observability_pipelines_destinations
weight: 411
weight: 412
- name: HTTP Client
url: observability_pipelines/destinations/http_client/
parent: observability_pipelines_destinations
identifier: observability_pipelines_http_client
weight: 412
weight: 413
- name: Kafka
url: observability_pipelines/destinations/kafka/
parent: observability_pipelines_destinations
identifier: observability_pipelines_kafka
weight: 413
weight: 414
- name: Microsoft Sentinel
identifier: observability_pipelines_microsoft_sentinel
url: /observability_pipelines/destinations/microsoft_sentinel/
parent: observability_pipelines_destinations
weight: 414
weight: 415
- name: New Relic
identifier: observability_pipelines_new_relic
url: /observability_pipelines/destinations/new_relic/
parent: observability_pipelines_destinations
weight: 415
weight: 416
- name: OpenSearch
url: observability_pipelines/destinations/opensearch
parent: observability_pipelines_destinations
identifier: observability_pipelines_opensearch
weight: 416
weight: 417
- name: SentinelOne
url: observability_pipelines/destinations/sentinelone
parent: observability_pipelines_destinations
identifier: observability_pipelines_sentinelone
weight: 417
weight: 418
- name: Socket
url: observability_pipelines/destinations/socket
parent: observability_pipelines_destinations
identifier: observability_pipelines_socket
weight: 418
weight: 419
- name: Splunk HEC
url: observability_pipelines/destinations/splunk_hec
parent: observability_pipelines_destinations
identifier: observability_pipelines_splunk_hec
weight: 419
weight: 420
- name: Sumo Logic Hosted Collector
url: observability_pipelines/destinations/sumo_logic_hosted_collector
parent: observability_pipelines_destinations
identifier: observability_pipelines_sumo_logic_hosted_collector
weight: 420
weight: 421
- name: Syslog
url: observability_pipelines/destinations/syslog
parent: observability_pipelines_destinations
identifier: observability_pipelines_syslog
weight: 421
weight: 422
- name: Packs
url: observability_pipelines/packs/
parent: observability_pipelines
101 changes: 94 additions & 7 deletions content/en/observability_pipelines/configuration/_index.md
@@ -28,11 +28,98 @@ Observability Pipelines lets you collect and process logs and metrics ({{< toolt

Build and deploy pipelines to collect, transform, and route your data using one of these methods:

- Pipeline UI
- [API][4]
- [Terraform][5]
- [Pipeline UI][4]
- [API][5]
- [Terraform][6]

See [Set Up Pipelines][6] for source, processor, and destination configuration details.
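
If you use the API, the request body declares the pipeline's sources, processors, and destinations. A minimal sketch of what a Create Pipeline payload could look like is shown below; the component types and field names here are assumptions for illustration only, so refer to the [API][5] reference for the actual schema.

```
{
  "data": {
    "type": "pipelines",
    "attributes": {
      "name": "example-pipeline",
      "config": {
        "sources": [
          { "id": "source-1", "type": "datadog_agent" }
        ],
        "processors": [
          { "id": "processor-1", "type": "filter", "include": "service:web", "inputs": ["source-1"] }
        ],
        "destinations": [
          { "id": "destination-1", "type": "datadog_logs", "inputs": ["processor-1"] }
        ]
      }
    }
  }
}
```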
## Pipeline types

There are two types of pipelines:

{{< tabs >}}
{{% tab "Logs" %}}

Use one of the [logs templates][1] to create a log pipeline.

- Archive Logs
- Dual Ship Logs
- Generate Log-based Metrics
- Log Enrichment
- Log Volume Control
- Sensitive Data Redaction
- Split Logs

See [Set Up Pipelines][2] for more information on setting up a source, processors, and destinations.

[1]: /observability_pipelines/configuration/explore_templates/?tab=logs#templates
[2]: /observability_pipelines/configuration/set_up_pipelines/

{{% /tab %}}

{{% tab "Metrics" %}}

<div class="alert alert-info">
Metrics Volume and Cardinality Control is in Preview. Fill out the <a href="https://www.datadoghq.com/product-preview/metrics-ingestion-and-cardinality-control-in-observability-pipelines/">form</a> to request access.</div>

Use the [Metrics Volume and Cardinality Control][1] template to create a metrics pipeline.

See [Set Up Pipelines][2] for more information on setting up a source, processors, and destination.

#### Metrics data

Metrics sent to Observability Pipelines include the following:

- `name`: The metric name.
- `kind`: There are two kinds of metrics:
  - `absolute`: Represents the current value of a measurement at the time it is reported.
  - `incremental`: Represents the change in a measurement since the last reported value, which the system aggregates over time.
- `value`: The metric value, keyed by the [metric type](#metric-types):
  - `counter`
  - `gauge`
  - `distribution`
  - `histogram`
- `timestamp`: The date and time the metric was created.
- `tags`: Tags attached to the metric, such as `host`.

The `counter` metric type is the only `incremental` kind; the `gauge`, `distribution`, and `histogram` metric types are `absolute`.

An example of a metric:

```
{
  "name": "datadog.agent.retry_queue_duration.bytes_per_sec",
  "tags": {
    "agent": "core",
    "domain": "https://7-72-3-app.agent.datadoghq.com",
    "host": "COMP-YGVQDJG75L",
    "source_type_name": "System",
    "env": "prod"
  },
  "timestamp": "2025-11-28T13:03:09Z",
  "kind": "absolute",
  "gauge": { "value": 454.1372767857143 }
}
```
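
For contrast, a sketch of what an `incremental` counter metric could look like, following the same shape as the gauge example above (the metric name and values are illustrative, not a captured event):

```
{
  "name": "logs.error.count",
  "tags": {
    "host": "COMP-YGVQDJG75L",
    "env": "prod"
  },
  "timestamp": "2025-11-28T13:03:10Z",
  "kind": "incremental",
  "counter": { "value": 12 }
}
```

Because the metric is `incremental`, each report carries only the change since the previous one; downstream aggregation sums these deltas over time.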

#### Metric types

The available metric types:

| Metric type | Description | Example |
| ----------- | ----------- | ------- |
| COUNTER | Represents the total number of event occurrences in one time interval. This value can be reset to zero, but cannot be decreased. | You want to count the number of logs with `status:error`. |
| GAUGE | Represents a snapshot of events in one time interval. | You want to measure the latest CPU utilization per host for all logs in the production environment. |
| DISTRIBUTION | Represents the global statistical distribution of a set of values calculated across your entire distributed infrastructure in one time interval. | You want to measure the average time it takes for an API call to complete. |
| HISTOGRAM | Represents the statistical distribution of a set of values calculated in one time interval. | You want to measure response time distributions for a service or endpoint. |

See [Metric Types][3] for more information.

[1]: /observability_pipelines/configuration/explore_templates/?tab=metrics#metrics-volume-and-cardinality-control
[2]: /observability_pipelines/configuration/set_up_pipelines/
[3]: /metrics/types/?tab=gauge#metric-types

{{% /tab %}}
{{< /tabs >}}

## Further reading

@@ -41,6 +128,6 @@ Build and deploy pipelines to collect, transform, and route your data using one
[1]: /observability_pipelines/sources/
[2]: /observability_pipelines/processors/
[3]: /observability_pipelines/destinations/
[4]: /api/latest/observability-pipelines/#create-a-new-pipeline
[5]: https://registry.terraform.io/providers/DataDog/datadog/latest/docs
[6]: /observability_pipelines/configuration/set_up_pipelines/
[4]: https://app.datadoghq.com/observability-pipelines
[5]: /api/latest/observability-pipelines/#create-a-new-pipeline
[6]: https://registry.terraform.io/providers/DataDog/datadog/latest/docs
@@ -59,7 +59,7 @@ Set up your pipelines and its sources, processors, and destinations in the Obser
If you want to add another group of processors for a destination:
1. Click the plus sign (**+**) at the bottom of the existing processor group.
1. Click the name of the processor group to update it.
1. Optionally, enter a group filter. See [Filter Syntax][17] for more information.
1. Optionally, enter a group filter. See [Search Syntax][17] for more information.
1. Click **Add** to add processors to the group.
1. If you want to copy all processors in a group and paste them into the same processor group or a different group:
1. Click the three dots on the processor group.
@@ -125,7 +125,7 @@ After you have set up your pipeline, see [Update Existing Pipelines][11] if you
[14]: /monitors/types/metric/
[15]: /observability_pipelines/guide/environment_variables/
[16]: /observability_pipelines/configuration/install_the_worker/advanced_worker_configurations/#bootstrap-options
[17]: /observability_pipelines/processors/#filter-query-syntax
[17]: /observability_pipelines/search_syntax/

{{% /tab %}}
{{% tab "API" %}}
62 changes: 62 additions & 0 deletions content/en/observability_pipelines/destinations/_index.md
@@ -13,6 +13,68 @@ Use the Observability Pipelines Worker to send your processed logs and metrics (

Select a destination in the left navigation menu to see more information about it.

## Destinations

These are the available destinations:

{{< tabs >}}
{{% tab "Logs" %}}

- [Amazon OpenSearch][1]
- [Amazon S3][2]
- [Amazon Security Lake][3]
- [Azure Storage][4]
- [Datadog CloudPrem][5]
- [CrowdStrike Next-Gen SIEM][6]
- [Datadog Logs][7]
- [Elasticsearch][8]
- [Google Chronicle][9]
- [Google Cloud Storage][10]
- [Google Pub/Sub][11]
- [HTTP Client][12]
- [Kafka][13]
- [Microsoft Sentinel][14]
- [New Relic][15]
- [OpenSearch][16]
- [SentinelOne][17]
- [Socket][18]
- [Splunk HTTP Event Collector (HEC)][19]
- [Sumo Logic Hosted Collector][20]
- [Syslog][21]

[1]: /observability_pipelines/destinations/amazon_opensearch/
[2]: /observability_pipelines/destinations/amazon_s3/
[3]: /observability_pipelines/destinations/amazon_security_lake/
[4]: /observability_pipelines/destinations/azure_storage/
[5]: /observability_pipelines/destinations/cloudprem/
[6]: /observability_pipelines/destinations/crowdstrike_ng_siem/
[7]: /observability_pipelines/destinations/datadog_logs/
[8]: /observability_pipelines/destinations/elasticsearch/
[9]: /observability_pipelines/destinations/google_chronicle/
[10]: /observability_pipelines/destinations/google_cloud_storage/
[11]: /observability_pipelines/destinations/google_pubsub/
[12]: /observability_pipelines/destinations/http_client/
[13]: /observability_pipelines/destinations/kafka/
[14]: /observability_pipelines/destinations/microsoft_sentinel/
[15]: /observability_pipelines/destinations/new_relic/
[16]: /observability_pipelines/destinations/opensearch/
[17]: /observability_pipelines/destinations/sentinelone/
[18]: /observability_pipelines/destinations/socket/
[19]: /observability_pipelines/destinations/splunk_hec/
[20]: /observability_pipelines/destinations/sumo_logic_hosted_collector/
[21]: /observability_pipelines/destinations/syslog/

{{% /tab %}}

{{% tab "Metrics" %}}

- [Datadog Metrics][1]

[1]: /observability_pipelines/destinations/datadog_metrics/

{{% /tab %}}
{{< /tabs >}}

## Template syntax

Logs are often stored in separate indexes based on log data, such as the service or environment the logs are coming from or another log attribute. In Observability Pipelines, you can use template syntax to route your logs to different indexes based on specific log fields.
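
For example, an index name can reference a log attribute so that each event is routed by that attribute's value. A minimal sketch, assuming incoming logs carry a `service` attribute (the field name here is illustrative):

```
logs-{{service}}
```

With this template, a log with `service:checkout` is written to the `logs-checkout` index, and a log with `service:payments` is written to `logs-payments`.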
@@ -0,0 +1,49 @@
---
title: Datadog Metrics
description: Learn how to set up the Datadog Metrics destination.
disable_toc: false
---

Use Observability Pipelines' Datadog Metrics destination to send metrics to Datadog. You can also use [AWS PrivateLink](#aws-privatelink) to send metrics from Observability Pipelines to Datadog.

## Setup

Set up the Datadog Metrics destination and its environment variables when you [set up a pipeline][1]. The information below is configured in the pipelines UI.

{{< img src="observability_pipelines/destinations/datadog_metrics_settings.png" alt="The Datadog Metrics destination settings" style="width:40%;" >}}

### Set up the destination

Optionally, toggle the switch to enable Buffering Options.

**Note**: Buffering Options is in {{< tooltip glossary="preview" case="title" >}}. Contact your account manager to request access.

- If left disabled, the maximum size for buffering is 500 events.
- If enabled:
- Select the buffer type you want to set (Memory or Disk).
- Enter the buffer size and select the unit.

### Set the environment variables

{{% observability_pipelines/configure_existing_pipelines/destination_env_vars/datadog %}}

## How the destination works

A batch of events is flushed when any one of these limits is reached. See [event batching][2] for more information.

| Max Events | Max Bytes | Timeout (seconds) |
|----------------|-----------------|---------------------|
| 100,000 | None | 2 |
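
For example, at a steady 1,000 events per second, the 2-second timeout flushes batches of roughly 2,000 events, well below the 100,000-event cap; the event limit only comes into play at much higher throughput.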

## AWS PrivateLink

To send metrics from Observability Pipelines to Datadog using AWS PrivateLink, see [Connect to Datadog over AWS PrivateLink][3] for setup instructions. The two endpoints you need to set up are:

- Metrics: {{< region-param key=metrics_endpoint_private_link code="true" >}}
- Remote Configuration: {{< region-param key=remote_config_endpoint_private_link code="true" >}}

**Note**: The `obpipeline-intake.datadoghq.com` endpoint is used for Live Capture and is not available as a PrivateLink endpoint.

[1]: https://app.datadoghq.com/observability-pipelines
[2]: https://docs.datadoghq.com/observability_pipelines/destinations/#event-batching
[3]: https://docs.datadoghq.com/agent/guide/private-link/?tab=crossregionprivatelinkendpoints
@@ -83,7 +83,10 @@ Some Observability Pipelines components require setting up environment variables
### CrowdStrike NG-SIEM
{{% observability_pipelines/configure_existing_pipelines/destination_env_vars/crowdstrike_ng_siem %}}

### Datadog
### Datadog Logs
{{% observability_pipelines/configure_existing_pipelines/destination_env_vars/datadog %}}

### Datadog Metrics
{{% observability_pipelines/configure_existing_pipelines/destination_env_vars/datadog %}}

### Datadog Archives