Prometheus Exporter

Expose metric events on a Prometheus-compatible endpoint, to be scraped by a Prometheus server

status: stable · delivery: best effort · egress: expose · state: stateful · previously known as: prometheus

Alias

This component was previously called the prometheus sink. Make sure to update your Vector configuration to accommodate the name change:

[sinks.prometheus_exporter]
-type = "prometheus"
+type = "prometheus_exporter"

Warnings

High-cardinality metric names and labels are discouraged by Prometheus because they can cause performance and reliability problems. You should consider strategies to reduce cardinality; Vector offers the tag_cardinality_limit transform as a way to protect against this.
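
For example, here is a minimal sketch of routing metrics through a tag_cardinality_limit transform before this sink. The my_metrics_source ID and the value_limit shown are illustrative placeholders, not values from this page:

transforms:
  limit_tag_cardinality:
    type: tag_cardinality_limit
    inputs:
      - my_metrics_source        # hypothetical upstream metrics source
    value_limit: 500             # cap on distinct values per tag key (illustrative)

sinks:
  my_sink_id:
    type: prometheus_exporter
    inputs:
      - limit_tag_cardinality    # expose the capped metric stream
    address: 0.0.0.0:9598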

Configuration

Example configurations

{
  "sinks": {
    "my_sink_id": {
      "type": "prometheus_exporter",
      "inputs": [
        "my-source-or-transform-id"
      ],
      "address": "0.0.0.0:9598",
      "default_namespace": "service"
    }
  }
}
[sinks.my_sink_id]
type = "prometheus_exporter"
inputs = [ "my-source-or-transform-id" ]
address = "0.0.0.0:9598"
default_namespace = "service"
---
sinks:
  my_sink_id:
    type: prometheus_exporter
    inputs:
      - my-source-or-transform-id
    address: 0.0.0.0:9598
    default_namespace: service
{
  "sinks": {
    "my_sink_id": {
      "type": "prometheus_exporter",
      "inputs": [
        "my-source-or-transform-id"
      ],
      "address": "0.0.0.0:9598",
      "buckets": [
        0.005
      ],
      "flush_period_secs": 60,
      "default_namespace": "service",
      "quantiles": [
        0.5
      ]
    }
  }
}
[sinks.my_sink_id]
type = "prometheus_exporter"
inputs = [ "my-source-or-transform-id" ]
address = "0.0.0.0:9598"
buckets = [ 0.005 ]
flush_period_secs = 60
default_namespace = "service"
quantiles = [ 0.5 ]
---
sinks:
  my_sink_id:
    type: prometheus_exporter
    inputs:
      - my-source-or-transform-id
    address: 0.0.0.0:9598
    buckets:
      - 0.005
    flush_period_secs: 60
    default_namespace: service
    quantiles:
      - 0.5

address

required string
The address to expose for scraping.
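
For reference, a Prometheus server scrapes this address with an ordinary scrape job. Here is a minimal sketch of the corresponding prometheus.yml entry; the job name and the vector-host target are placeholders for your environment:

scrape_configs:
  - job_name: vector           # placeholder job name
    static_configs:
      - targets:
          - vector-host:9598   # the host and port this sink listens on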

buckets

optional [float]
Default buckets to use for aggregating distribution metrics into histograms.
Array float
Examples
[
  0.005,
  0.01
]
default: [0.005, 0.01, 0.025, 0.05, 0.1, 0.25, 0.5, 1, 2.5, 5, 10]

default_namespace

common optional string
Used as a namespace for metrics that don’t have one. A namespace is typically set during ingestion (by sources), but when an event arrives without one, this value is used instead. It should follow Prometheus naming conventions.

flush_period_secs

optional uint
The time interval, in seconds, at which aggregated set values are reset.
default: 60 (seconds)

inputs

required [string]

A list of upstream source or transform IDs. Wildcards (*) are supported but must be the last character in the ID.

See configuration for more info.

Array string literal
Examples
[
  "my-source-or-transform-id",
  "prefix-*"
]

quantiles

optional [float]
Quantiles to use for aggregating distribution metrics into a summary.
Array float
Examples
[
  0.5,
  0.75,
  0.9,
  0.95,
  0.99
]
default: [0.5, 0.75, 0.9, 0.95, 0.99]

Telemetry

Metrics

events_in_total

counter
The number of events accepted by this component, either from tagged origins like file and uri, or cumulatively from other origins.
component_kind required
The Vector component kind.
component_name required
The Vector component name.
component_type required
The Vector component type.
container_name optional
The name of the container from which the event originates.
file optional
The file from which the event originates.
host required
The hostname of the system Vector is running on.
mode optional
The connection mode used by the component.
peer_addr optional
The IP from which the event originates.
peer_path optional
The pathname from which the event originates.
pid required
The process ID of the Vector instance.
pod_name optional
The name of the pod from which the event originates.
uri optional
The sanitized URI from which the event originates.

events_out_total

counter
The total number of events emitted by this component.
component_kind required
The Vector component kind.
component_name required
The Vector component name.
component_type required
The Vector component type.
host required
The hostname of the system Vector is running on.
pid required
The process ID of the Vector instance.

Examples

Counter

Given this event...
{
  "metric": {
    "counter": {
      "value": 1.5
    },
    "kind": "incremental",
    "name": "logins",
    "tags": {
      "host": "my-host.local"
    }
  }
}
...and this configuration...
[sinks.my_sink_id]
type = "prometheus_exporter"
inputs = [ "my-source-or-transform-id" ]
default_namespace = "service"
---
sinks:
  my_sink_id:
    type: prometheus_exporter
    inputs:
      - my-source-or-transform-id
    default_namespace: service
{
  "sinks": {
    "my_sink_id": {
      "type": "prometheus_exporter",
      "inputs": [
        "my-source-or-transform-id"
      ],
      "default_namespace": "service"
    }
  }
}
...this Prometheus exposition output is produced:
# HELP service_logins logins
# TYPE service_logins counter
service_logins{host="my-host.local"} 1.5
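
The service_ prefix comes from the default_namespace option: the event carries no namespace of its own, so the configured value is prepended to the metric name.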

Gauge

Given this event...
{
  "metric": {
    "gauge": {
      "value": 1.5
    },
    "kind": "absolute",
    "name": "memory_rss",
    "namespace": "app",
    "tags": {
      "host": "my-host.local"
    }
  }
}
...and this configuration...
[sinks.my_sink_id]
type = "prometheus_exporter"
inputs = [ "my-source-or-transform-id" ]
---
sinks:
  my_sink_id:
    type: prometheus_exporter
    inputs:
      - my-source-or-transform-id
{
  "sinks": {
    "my_sink_id": {
      "type": "prometheus_exporter",
      "inputs": [
        "my-source-or-transform-id"
      ]
    }
  }
}
...this Prometheus exposition output is produced:
# HELP app_memory_rss memory_rss
# TYPE app_memory_rss gauge
app_memory_rss{host="my-host.local"} 1.5
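
Here the app_ prefix comes from the namespace field on the event itself; default_namespace is only consulted for events that arrive without one.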

Histogram

Given this event...
{
  "metric": {
    "histogram": {
      "buckets": [
        {
          "count": 0,
          "upper_limit": 0.005
        },
        {
          "count": 1,
          "upper_limit": 0.01
        },
        {
          "count": 0,
          "upper_limit": 0.025
        },
        {
          "count": 1,
          "upper_limit": 0.05
        },
        {
          "count": 0,
          "upper_limit": 0.1
        },
        {
          "count": 0,
          "upper_limit": 0.25
        },
        {
          "count": 0,
          "upper_limit": 0.5
        },
        {
          "count": 0,
          "upper_limit": 1
        },
        {
          "count": 0,
          "upper_limit": 2.5
        },
        {
          "count": 0,
          "upper_limit": 5
        },
        {
          "count": 0,
          "upper_limit": 10
        }
      ],
      "count": 2,
      "sum": 0.789
    },
    "kind": "absolute",
    "name": "response_time_s",
    "tags": {}
  }
}
...and this configuration...
[sinks.my_sink_id]
type = "prometheus_exporter"
inputs = [ "my-source-or-transform-id" ]
---
sinks:
  my_sink_id:
    type: prometheus_exporter
    inputs:
      - my-source-or-transform-id
{
  "sinks": {
    "my_sink_id": {
      "type": "prometheus_exporter",
      "inputs": [
        "my-source-or-transform-id"
      ]
    }
  }
}
...this Prometheus exposition output is produced:
# HELP response_time_s response_time_s
# TYPE response_time_s histogram
response_time_s_bucket{le="0.005"} 0
response_time_s_bucket{le="0.01"} 1
response_time_s_bucket{le="0.025"} 0
response_time_s_bucket{le="0.05"} 1
response_time_s_bucket{le="0.1"} 0
response_time_s_bucket{le="0.25"} 0
response_time_s_bucket{le="0.5"} 0
response_time_s_bucket{le="1.0"} 0
response_time_s_bucket{le="2.5"} 0
response_time_s_bucket{le="5.0"} 0
response_time_s_bucket{le="10.0"} 0
response_time_s_bucket{le="+Inf"} 0
response_time_s_sum 0.789
response_time_s_count 2
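
The bucket values in the exposed output are cumulative, as the Prometheus exposition format requires: each le bucket counts all samples at or below its upper limit, so the +Inf bucket always equals the total count (2 here).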

Distribution to histogram

Given this event...
{
  "metric": {
    "distribution": {
      "samples": [
        {
          "rate": 4,
          "value": 0
        },
        {
          "rate": 2,
          "value": 1
        },
        {
          "rate": 1,
          "value": 4
        }
      ],
      "statistic": "histogram"
    },
    "kind": "incremental",
    "name": "request_retries",
    "tags": {
      "host": "my-host.local"
    }
  }
}
...and this configuration...
[sinks.my_sink_id]
type = "prometheus_exporter"
inputs = [ "my-source-or-transform-id" ]
buckets = [ 0, 1, 3 ]
---
sinks:
  my_sink_id:
    type: prometheus_exporter
    inputs:
      - my-source-or-transform-id
    buckets:
      - 0
      - 1
      - 3
{
  "sinks": {
    "my_sink_id": {
      "type": "prometheus_exporter",
      "inputs": [
        "my-source-or-transform-id"
      ],
      "buckets": [
        0,
        1,
        3
      ]
    }
  }
}
...this Prometheus exposition output is produced:
# HELP request_retries request_retries
# TYPE request_retries histogram
request_retries_bucket{host="my-host.local",le="0"} 4
request_retries_bucket{host="my-host.local",le="1"} 6
request_retries_bucket{host="my-host.local",le="3"} 6
request_retries_bucket{host="my-host.local",le="+Inf"} 7
request_retries_sum{host="my-host.local"} 6
request_retries_count{host="my-host.local"} 7
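
The buckets are again cumulative: the 4 samples of value 0 fill the le="0" bucket, the 2 samples of value 1 raise le="1" to 6, and the single sample of value 4 exceeds every configured bucket, appearing only in +Inf, for a total count of 7 and a sum of 4 × 0 + 2 × 1 + 1 × 4 = 6.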

Distribution to summary

Given this event...
{
  "metric": {
    "distribution": {
      "samples": [
        {
          "rate": 3,
          "value": 0
        },
        {
          "rate": 2,
          "value": 1
        },
        {
          "rate": 1,
          "value": 4
        }
      ],
      "statistic": "summary"
    },
    "kind": "incremental",
    "name": "request_retries",
    "tags": {}
  }
}
...and this configuration...
[sinks.my_sink_id]
type = "prometheus_exporter"
inputs = [ "my-source-or-transform-id" ]
quantiles = [ 0.5, 0.75, 0.95 ]
---
sinks:
  my_sink_id:
    type: prometheus_exporter
    inputs:
      - my-source-or-transform-id
    quantiles:
      - 0.5
      - 0.75
      - 0.95
{
  "sinks": {
    "my_sink_id": {
      "type": "prometheus_exporter",
      "inputs": [
        "my-source-or-transform-id"
      ],
      "quantiles": [
        0.5,
        0.75,
        0.95
      ]
    }
  }
}
...this Prometheus exposition output is produced:
# HELP request_retries request_retries
# TYPE request_retries summary
request_retries{quantile="0.5"} 0
request_retries{quantile="0.75"} 1
request_retries{quantile="0.95"} 4
request_retries_sum 6
request_retries_count 6
request_retries_min 0
request_retries_max 4
request_retries_avg 1
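
With the six samples sorted as 0, 0, 0, 1, 1, 4, the 0.5 quantile resolves to 0, the 0.75 quantile to 1, and the 0.95 quantile to 4; the sum (6) and count (6) follow directly from the samples.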

Summary

Given this event...
{
  "metric": {
    "kind": "absolute",
    "name": "requests",
    "summary": {
      "count": 6,
      "quantiles": [
        {
          "upper_limit": 0.01,
          "value": 1.5
        },
        {
          "upper_limit": 0.5,
          "value": 2
        },
        {
          "upper_limit": 0.99,
          "value": 3
        }
      ],
      "sum": 12
    },
    "tags": {
      "host": "my-host.local"
    }
  }
}
...and this configuration...
[sinks.my_sink_id]
type = "prometheus_exporter"
inputs = [ "my-source-or-transform-id" ]
---
sinks:
  my_sink_id:
    type: prometheus_exporter
    inputs:
      - my-source-or-transform-id
{
  "sinks": {
    "my_sink_id": {
      "type": "prometheus_exporter",
      "inputs": [
        "my-source-or-transform-id"
      ]
    }
  }
}
...this Prometheus exposition output is produced:
# HELP requests requests
# TYPE requests summary
requests{host="my-host.local",quantile="0.01"} 1.5
requests{host="my-host.local",quantile="0.5"} 2
requests{host="my-host.local",quantile="0.99"} 3
requests_sum{host="my-host.local"} 12
requests_count{host="my-host.local"} 6

How it works

Histogram Buckets

Choosing appropriate buckets for Prometheus histograms is a nuanced topic. The Prometheus Histograms and Summaries guide provides a good overview of histograms, buckets, summaries, and how to think about configuring them. The buckets you choose should align with the known range and distribution of your values, as well as with how you plan to report on them; the guide provides examples of how to do this.

Default Buckets

The buckets option defines the global default buckets for histograms. These defaults are tailored to broadly measure the response time (in seconds) of a typical network service. In most cases, however, you will need to define buckets customized to your use case.
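
For instance, here is a sketch of overriding the defaults for a service whose request latencies (in seconds) are expected to fall roughly between 50 ms and 5 s; the bucket boundaries are illustrative only:

sinks:
  my_sink_id:
    type: prometheus_exporter
    inputs:
      - my-source-or-transform-id
    address: 0.0.0.0:9598
    buckets:                   # illustrative boundaries, in seconds
      - 0.05
      - 0.1
      - 0.25
      - 0.5
      - 1
      - 2.5
      - 5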

Memory Usage

Like other Prometheus instances, the prometheus_exporter sink aggregates metrics in memory, which keeps the memory footprint to a minimum even if Prometheus fails to scrape the Vector instance over an extended period of time. The downside is that data will be lost if Vector is restarted. This is by design of Prometheus' pull model, but is worth noting if you restart Vector frequently.

State

This component is stateful, meaning its behavior changes based on previous inputs (events). State is not preserved across restarts, therefore state-dependent behavior will reset between restarts and depend on the inputs (events) received since the most recent restart.