InfluxDB metrics
Deliver metric event data to InfluxDB
Configuration
Example configurations
{
"sinks": {
"my_sink_id": {
"type": "influxdb_metrics",
"inputs": [
"my-source-or-transform-id"
],
"bucket": "vector-bucket",
"consistency": "any",
"database": "vector-database",
"endpoint": "http://localhost:8086/",
"org": "my-org",
"password": "${INFLUXDB_PASSWORD}",
"retention_policy_name": "autogen",
"token": "${INFLUXDB_TOKEN}",
"default_namespace": "service",
"username": "todd"
}
}
}
[sinks.my_sink_id]
type = "influxdb_metrics"
inputs = [ "my-source-or-transform-id" ]
bucket = "vector-bucket"
consistency = "any"
database = "vector-database"
endpoint = "http://localhost:8086/"
org = "my-org"
password = "${INFLUXDB_PASSWORD}"
retention_policy_name = "autogen"
token = "${INFLUXDB_TOKEN}"
default_namespace = "service"
username = "todd"
---
sinks:
my_sink_id:
type: influxdb_metrics
inputs:
- my-source-or-transform-id
bucket: vector-bucket
consistency: any
database: vector-database
endpoint: http://localhost:8086/
org: my-org
password: ${INFLUXDB_PASSWORD}
retention_policy_name: autogen
token: ${INFLUXDB_TOKEN}
default_namespace: service
username: todd
{
"sinks": {
"my_sink_id": {
"type": "influxdb_metrics",
"inputs": [
"my-source-or-transform-id"
],
"bucket": "vector-bucket",
"consistency": "any",
"database": "vector-database",
"endpoint": "http://localhost:8086/",
"org": "my-org",
"password": "${INFLUXDB_PASSWORD}",
"retention_policy_name": "autogen",
"token": "${INFLUXDB_TOKEN}",
"default_namespace": "service",
"username": "todd",
"tags": {
"region": "us-west-1"
}
}
}
}
[sinks.my_sink_id]
type = "influxdb_metrics"
inputs = [ "my-source-or-transform-id" ]
bucket = "vector-bucket"
consistency = "any"
database = "vector-database"
endpoint = "http://localhost:8086/"
org = "my-org"
password = "${INFLUXDB_PASSWORD}"
retention_policy_name = "autogen"
token = "${INFLUXDB_TOKEN}"
default_namespace = "service"
username = "todd"
[sinks.my_sink_id.tags]
region = "us-west-1"
---
sinks:
my_sink_id:
type: influxdb_metrics
inputs:
- my-source-or-transform-id
bucket: vector-bucket
consistency: any
database: vector-database
endpoint: http://localhost:8086/
org: my-org
password: ${INFLUXDB_PASSWORD}
retention_policy_name: autogen
token: ${INFLUXDB_TOKEN}
default_namespace: service
username: todd
tags:
region: us-west-1
acknowledgements
common optional object
Acknowledgement settings.
acknowledgements.enabled
common optional bool
false
batch
optional object
batch.max_bytes
common optional uint
batch.max_events
common optional uint
batch.timeout_secs
common optional float
1 (seconds)
bucket
required string literal
buffer
optional object
Configures the buffering behavior for this sink.
More information about the individual buffer types, and buffer behavior, can be found in the Buffering Model section.
buffer.max_events
optional uint
Relevant when: type = "memory"
500
buffer.max_size
required uint
The maximum size of the buffer on disk.
Must be at least ~256 megabytes (268435488 bytes).
Relevant when: type = "disk"
buffer.type
optional string literal enum
Option | Description |
---|---|
disk | Events are buffered on disk. (version 2) This is less performant, but more durable. Data that has been synchronized to disk will not be lost if Vector is restarted forcefully or crashes. Data is synchronized to disk every 500ms. |
memory | Events are buffered in memory. This is more performant, but less durable. Data will be lost if Vector is restarted forcefully or crashes. |
memory
buffer.when_full
optional string literal enumOption | Description |
---|---|
block | Wait for free space in the buffer. This applies backpressure up the topology, signalling that sources should slow down the acceptance/consumption of events. This means that while no data is lost, data will pile up at the edge. |
drop_newest | Drops the event instead of waiting for free space in buffer. The event will be intentionally dropped. This mode is typically used when performance is the highest priority, and it is preferable to temporarily lose events rather than cause a slowdown in the acceptance/consumption of events. |
block
consistency
common optional string literal
database
required string literal
default_namespace
common optional string literal
encoding
common optional object
encoding.except_fields
optional [string]
encoding.only_fields
optional [string]
encoding.timestamp_format
optional string literal enum
Option | Description |
---|---|
rfc3339 | Formats as an RFC 3339 string |
unix | Formats as a Unix timestamp |
rfc3339
endpoint
required string literal
healthcheck
optional object
healthcheck.enabled
optional bool
true
inputs
required [string]
A list of upstream source or transform IDs.
Wildcards (*) are supported.
See configuration for more info.
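For example, a minimal sketch feeding this sink from every component whose ID matches a wildcard (the app_* pattern is illustrative):

```toml
[sinks.my_sink_id]
type = "influxdb_metrics"
inputs = [ "app_*" ]
```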
org
required string literal
password
common optional string literal
request
optional object
request.adaptive_concurrency
optional object
request.adaptive_concurrency.decrease_ratio
optional float
0.9
request.adaptive_concurrency.ewma_alpha
optional float
0.7
request.adaptive_concurrency.rtt_deviation_scale
optional float
2
request.concurrency
common optional uint
request.rate_limit_duration_secs
common optional uint
The time window used for the rate_limit_num option.
1 (seconds)
request.rate_limit_num
common optional uint
The maximum number of requests allowed within the rate_limit_duration_secs time window.
9.223372036854776e+18
request.retry_attempts
optional uint
1.8446744073709552e+19
request.retry_initial_backoff_secs
optional uint
1 (seconds)
request.retry_max_duration_secs
optional uint
3600 (seconds)
request.timeout_secs
common optional uint
60 (seconds)
retention_policy_name
common optional string literal
tags
optional object
tls
optional object
tls.alpn_protocols
optional [string]
Sets the list of supported ALPN protocols.
Declare the supported ALPN protocols, which are used during negotiation with the peer. They are prioritized in the order they are defined.
tls.ca_file
optional string literal
Absolute path to an additional CA certificate file.
The certificate must be in the DER or PEM (X.509) format. Additionally, the certificate can be provided as an inline string in PEM format.
tls.crt_file
common optional string literal
Absolute path to a certificate file used to identify this server.
The certificate must be in DER, PEM (X.509), or PKCS#12 format. Additionally, the certificate can be provided as an inline string in PEM format.
If this is set, and is not a PKCS#12 archive, key_file must also be set.
tls.key_file
common optional string literal
Absolute path to a private key file used to identify this server.
The key must be in DER or PEM (PKCS#8) format. Additionally, the key can be provided as an inline string in PEM format.
tls.key_pass
optional string literal
Passphrase used to unlock the encrypted key file.
This has no effect unless key_file is set.
tls.verify_certificate
optional bool
Enables certificate verification.
If enabled, certificates must be valid in terms of not being expired, as well as being issued by a trusted issuer. This verification operates in a hierarchical manner, checking that not only the leaf certificate (the certificate presented by the client/server) is valid, but also that the issuer of that certificate is valid, and so on until reaching a root certificate.
Relevant for both incoming and outgoing connections.
Do NOT set this to false unless you understand the risks of not verifying the validity of certificates.
true
tls.verify_hostname
optional bool
Enables hostname verification.
If enabled, the hostname used to connect to the remote host must be present in the TLS certificate presented by the remote host, either as the Common Name or as an entry in the Subject Alternative Name extension.
Only relevant for outgoing connections.
Do NOT set this to false unless you understand the risks of not verifying the remote hostname.
true
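As a sketch, the TLS options above might be combined like this for a TLS-enabled InfluxDB endpoint (the hostname and file paths are illustrative, not defaults):

```toml
[sinks.my_sink_id]
type = "influxdb_metrics"
inputs = [ "my-source-or-transform-id" ]
endpoint = "https://influxdb.example.com:8086/"

  [sinks.my_sink_id.tls]
  ca_file = "/etc/ssl/certs/my-ca.pem"
  crt_file = "/etc/ssl/certs/my-client.pem"
  key_file = "/etc/ssl/private/my-client.key"
  verify_certificate = true
  verify_hostname = true
```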
token
required string literal
username
common optional string literal
Telemetry
Metrics
Each metric below also carries a component_name tag that is deprecated in favor of component_id; its value is the same as component_id.
buffer_byte_size
gauge
buffer_discarded_events_total
counter
buffer_events
gauge
buffer_received_event_bytes_total
counter
buffer_received_events_total
counter
buffer_sent_event_bytes_total
counter
buffer_sent_events_total
counter
component_received_event_bytes_total
counter
component_received_events_count
histogram
A histogram of the number of events passed in each internal batch in Vector’s internal topology.
Note that this is separate from sink-level batching. It is mostly useful for low-level debugging of performance issues in Vector caused by small internal batches.
component_received_events_total
counter
component_sent_event_bytes_total
counter
component_sent_events_total
counter
events_in_total
counter
Deprecated; use component_received_events_total instead.
utilization
gauge
Examples
Counter
Given this event...
{
"metric": {
"counter": {
"value": 1.5
},
"kind": "incremental",
"name": "logins",
"tags": {
"host": "my-host.local"
}
}
}
[sinks.my_sink_id]
type = "influxdb_metrics"
inputs = [ "my-source-or-transform-id" ]
default_namespace = "service"
---
sinks:
my_sink_id:
type: influxdb_metrics
inputs:
- my-source-or-transform-id
default_namespace: service
{
"sinks": {
"my_sink_id": {
"type": "influxdb_metrics",
"inputs": [
"my-source-or-transform-id"
],
"default_namespace": "service"
}
}
}
service.logins,metric_type=counter,host=my-host.local value=1.5 1542182950000000011
Distribution
Given this event...
{
"metric": {
"distribution": {
"samples": [
{
"rate": 1,
"value": 1
},
{
"rate": 2,
"value": 5
},
{
"rate": 3,
"value": 3
}
],
"statistic": "histogram"
},
"kind": "incremental",
"name": "sparse_stats",
"namespace": "app",
"tags": {
"host": "my-host.local"
}
}
}
[sinks.my_sink_id]
type = "influxdb_metrics"
inputs = [ "my-source-or-transform-id" ]
---
sinks:
my_sink_id:
type: influxdb_metrics
inputs:
- my-source-or-transform-id
{
"sinks": {
"my_sink_id": {
"type": "influxdb_metrics",
"inputs": [
"my-source-or-transform-id"
]
}
}
}
app.sparse_stats,metric_type=distribution,host=my-host.local avg=3.333333,count=6,max=5,median=3,min=1,quantile_0.95=5,sum=20 1542182950000000011
Gauge
Given this event...
{
"metric": {
"gauge": {
"value": 1.5
},
"kind": "absolute",
"name": "memory_rss",
"namespace": "app",
"tags": {
"host": "my-host.local"
}
}
}
[sinks.my_sink_id]
type = "influxdb_metrics"
inputs = [ "my-source-or-transform-id" ]
default_namespace = "service"
---
sinks:
my_sink_id:
type: influxdb_metrics
inputs:
- my-source-or-transform-id
default_namespace: service
{
"sinks": {
"my_sink_id": {
"type": "influxdb_metrics",
"inputs": [
"my-source-or-transform-id"
],
"default_namespace": "service"
}
}
}
app.memory_rss,metric_type=gauge,host=my-host.local value=1.5 1542182950000000011
Histogram
Given this event...
{
"metric": {
"histogram": {
"buckets": [
{
"count": 2,
"upper_limit": 1
},
{
"count": 5,
"upper_limit": 2.1
},
{
"count": 10,
"upper_limit": 3
}
],
"count": 17,
"sum": 46.2
},
"kind": "absolute",
"name": "requests",
"tags": {
"host": "my-host.local"
}
}
}
[sinks.my_sink_id]
type = "influxdb_metrics"
inputs = [ "my-source-or-transform-id" ]
---
sinks:
my_sink_id:
type: influxdb_metrics
inputs:
- my-source-or-transform-id
{
"sinks": {
"my_sink_id": {
"type": "influxdb_metrics",
"inputs": [
"my-source-or-transform-id"
]
}
}
}
requests,metric_type=histogram,host=my-host.local bucket_1=2i,bucket_2.1=5i,bucket_3=10i,count=17i,sum=46.2 1542182950000000011
Set
Given this event...
{
"metric": {
"kind": "incremental",
"name": "users",
"set": {
"values": [
"first",
"another",
"last"
]
},
"tags": {
"host": "my-host.local"
}
}
}
[sinks.my_sink_id]
type = "influxdb_metrics"
inputs = [ "my-source-or-transform-id" ]
---
sinks:
my_sink_id:
type: influxdb_metrics
inputs:
- my-source-or-transform-id
{
"sinks": {
"my_sink_id": {
"type": "influxdb_metrics",
"inputs": [
"my-source-or-transform-id"
]
}
}
}
users,metric_type=set,host=my-host.local value=3 1542182950000000011
Summary
Given this event...
{
"metric": {
"kind": "absolute",
"name": "requests",
"summary": {
"count": 6,
"quantiles": [
{
"upper_limit": 0.01,
"value": 1.5
},
{
"upper_limit": 0.5,
"value": 2
},
{
"upper_limit": 0.99,
"value": 3
}
],
"sum": 12.1
},
"tags": {
"host": "my-host.local"
}
}
}
[sinks.my_sink_id]
type = "influxdb_metrics"
inputs = [ "my-source-or-transform-id" ]
---
sinks:
my_sink_id:
type: influxdb_metrics
inputs:
- my-source-or-transform-id
{
"sinks": {
"my_sink_id": {
"type": "influxdb_metrics",
"inputs": [
"my-source-or-transform-id"
]
}
}
}
requests,metric_type=summary,host=my-host.local count=6i,quantile_0.01=1.5,quantile_0.5=2,quantile_0.99=3,sum=12.1 1542182950000000011
How it works
Buffers and batches
This component buffers & batches data. Note that Vector treats buffering and batching as sink-specific concepts rather than global ones. This isolates sinks, ensuring that service disruptions are contained and delivery guarantees are honored.
Batches are flushed when one of two conditions is met:
- The batch age meets or exceeds the configured timeout_secs.
- The batch size meets or exceeds the configured max_bytes or max_events.
Buffers are controlled via the buffer.* options.
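For example, a sketch of batch and buffer settings that flushes after 100 events or one second and backs the sink with a durable disk buffer (the values are illustrative, not recommendations):

```toml
[sinks.my_sink_id]
type = "influxdb_metrics"
inputs = [ "my-source-or-transform-id" ]

  [sinks.my_sink_id.batch]
  max_events = 100
  timeout_secs = 1.0

  [sinks.my_sink_id.buffer]
  type = "disk"
  max_size = 268435488
  when_full = "block"
```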
Health checks
Require health checks
If you’d like to exit immediately upon a health check failure, you can pass the --require-healthy flag:
vector --config /etc/vector/vector.toml --require-healthy
Disable health checks
If you’d like to disable health checks for this sink, set the healthcheck option to false.
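For example, a minimal sketch that disables the health check for this sink:

```toml
[sinks.my_sink_id]
type = "influxdb_metrics"
inputs = [ "my-source-or-transform-id" ]
healthcheck.enabled = false
```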
Rate limits & adaptive concurrency
Adaptive Request Concurrency (ARC)
Adaptive Request Concurrency is a feature of Vector that does away with static concurrency limits and automatically optimizes HTTP concurrency based on downstream service responses. The underlying mechanism is a feedback loop inspired by TCP congestion control algorithms. Check out the announcement blog post for more details.
We highly recommend enabling this feature as it improves performance and reliability of Vector and the systems it communicates with. As such, we have made it the default, and no further configuration is required.
Static concurrency
If Adaptive Request Concurrency is not for you, you can manually set static concurrency
limits by specifying an integer for request.concurrency
:
[sinks.my-sink]
request.concurrency = 10
Rate limits
In addition to limiting request concurrency, you can also limit the overall request
throughput via the request.rate_limit_duration_secs
and request.rate_limit_num
options.
[sinks.my-sink]
request.rate_limit_duration_secs = 1
request.rate_limit_num = 10
These limits apply to both adaptive and fixed request.concurrency values.
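For instance, a sketch that keeps adaptive concurrency while capping throughput at 10 requests per second (the values are illustrative; "adaptive" is assumed to be the accepted string form for adaptive concurrency):

```toml
[sinks.my-sink]
request.concurrency = "adaptive"
request.rate_limit_duration_secs = 1
request.rate_limit_num = 10
```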