AWS S3
Store observability events in the AWS S3 object storage system
Configuration
Example configurations
{
  "sinks": {
    "my_sink_id": {
      "type": "aws_s3",
      "inputs": [
        "my-source-or-transform-id"
      ],
      "bucket": "my-bucket"
    }
  }
}
[sinks.my_sink_id]
type = "aws_s3"
inputs = [ "my-source-or-transform-id" ]
bucket = "my-bucket"
---
sinks:
  my_sink_id:
    type: aws_s3
    inputs:
      - my-source-or-transform-id
    bucket: my-bucket
{
  "sinks": {
    "my_sink_id": {
      "type": "aws_s3",
      "inputs": [
        "my-source-or-transform-id"
      ],
      "acl": "authenticated-read",
      "bucket": "my-bucket",
      "compression": "gzip",
      "content_encoding": "gzip",
      "content_type": "application/gzip",
      "endpoint": "http://127.0.0.0:5000/path/to/service",
      "filename_append_uuid": true,
      "filename_extension": "json",
      "filename_time_format": "%s",
      "grant_full_control": "79a59df900b949e55d96a1e698fbacedfd6e09d98eacf8f8d5218e7cd47ef2be",
      "grant_read": "79a59df900b949e55d96a1e698fbacedfd6e09d98eacf8f8d5218e7cd47ef2be",
      "grant_read_acp": "79a59df900b949e55d96a1e698fbacedfd6e09d98eacf8f8d5218e7cd47ef2be",
      "grant_write_acp": "79a59df900b949e55d96a1e698fbacedfd6e09d98eacf8f8d5218e7cd47ef2be",
      "key_prefix": "date=%F",
      "region": "us-east-1",
      "server_side_encryption": "AES256",
      "ssekms_key_id": "abcd1234",
      "storage_class": "STANDARD",
      "tags": {
        "Classification": "confidential",
        "PHI": "True",
        "Project": "Blue"
      }
    }
  }
}
[sinks.my_sink_id]
type = "aws_s3"
inputs = [ "my-source-or-transform-id" ]
acl = "authenticated-read"
bucket = "my-bucket"
compression = "gzip"
content_encoding = "gzip"
content_type = "application/gzip"
endpoint = "http://127.0.0.0:5000/path/to/service"
filename_append_uuid = true
filename_extension = "json"
filename_time_format = "%s"
grant_full_control = "79a59df900b949e55d96a1e698fbacedfd6e09d98eacf8f8d5218e7cd47ef2be"
grant_read = "79a59df900b949e55d96a1e698fbacedfd6e09d98eacf8f8d5218e7cd47ef2be"
grant_read_acp = "79a59df900b949e55d96a1e698fbacedfd6e09d98eacf8f8d5218e7cd47ef2be"
grant_write_acp = "79a59df900b949e55d96a1e698fbacedfd6e09d98eacf8f8d5218e7cd47ef2be"
key_prefix = "date=%F"
region = "us-east-1"
server_side_encryption = "AES256"
ssekms_key_id = "abcd1234"
storage_class = "STANDARD"
[sinks.my_sink_id.tags]
Classification = "confidential"
PHI = "True"
Project = "Blue"
---
sinks:
  my_sink_id:
    type: aws_s3
    inputs:
      - my-source-or-transform-id
    acl: authenticated-read
    bucket: my-bucket
    compression: gzip
    content_encoding: gzip
    content_type: application/gzip
    endpoint: http://127.0.0.0:5000/path/to/service
    filename_append_uuid: true
    filename_extension: json
    filename_time_format: "%s"
    grant_full_control: 79a59df900b949e55d96a1e698fbacedfd6e09d98eacf8f8d5218e7cd47ef2be
    grant_read: 79a59df900b949e55d96a1e698fbacedfd6e09d98eacf8f8d5218e7cd47ef2be
    grant_read_acp: 79a59df900b949e55d96a1e698fbacedfd6e09d98eacf8f8d5218e7cd47ef2be
    grant_write_acp: 79a59df900b949e55d96a1e698fbacedfd6e09d98eacf8f8d5218e7cd47ef2be
    key_prefix: date=%F
    region: us-east-1
    server_side_encryption: AES256
    ssekms_key_id: abcd1234
    storage_class: STANDARD
    tags:
      Classification: confidential
      PHI: "True"
      Project: Blue
acknowledgements
optional object
Controls how acknowledgements are handled for this sink.
See End-to-end Acknowledgements for more information on how event acknowledgement is handled.
acknowledgements.enabled
optional bool
Whether or not end-to-end acknowledgements are enabled.
When enabled for a sink, any source connected to that sink, where the source supports end-to-end acknowledgements as well, waits for events to be acknowledged by the sink before acknowledging them at the source.
Enabling or disabling acknowledgements at the sink level takes precedence over any global acknowledgements configuration.
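As a minimal sketch (in TOML, reusing the my_sink_id sink from the examples above), opting this sink into end-to-end acknowledgements looks like:
[sinks.my_sink_id.acknowledgements]
enabled = true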
acl
optional string literal enum
Canned ACL to apply to the created objects.
For more information, see Canned ACL.
Option | Description |
---|---|
authenticated-read | Bucket/object can be read by authenticated users. The bucket/object owner is granted the FULL_CONTROL permission, and anyone in the AuthenticatedUsers grantee group is granted the READ permission. |
aws-exec-read | Bucket/object are private, and readable by EC2. The bucket/object owner is granted the FULL_CONTROL permission, and the AWS EC2 service is granted the READ permission for the purpose of reading Amazon Machine Image (AMI) bundles from the given bucket. |
bucket-owner-full-control | Object is semi-private. Both the object owner and bucket owner are granted the FULL_CONTROL permission. Only relevant when specified for an object: this canned ACL is otherwise ignored when specified for a bucket. |
bucket-owner-read | Object is private, except to the bucket owner. The object owner is granted the FULL_CONTROL permission, and the bucket owner is granted the READ permission. Only relevant when specified for an object: this canned ACL is otherwise ignored when specified for a bucket. |
log-delivery-write | Bucket can have logs written. The LogDelivery grantee group is granted WRITE and READ_ACP permissions on the bucket. Only relevant when specified for a bucket: this canned ACL is otherwise ignored when specified for an object. For more information about logs, see Amazon S3 Server Access Logging. |
private | Bucket/object are private. The bucket/object owner is granted the FULL_CONTROL permission, and no one else has access. This is the default. |
public-read | Bucket/object can be read publicly. The bucket/object owner is granted the FULL_CONTROL permission, and anyone in the AllUsers grantee group is granted the READ permission. |
public-read-write | Bucket/object can be read and written publicly. The bucket/object owner is granted the FULL_CONTROL permission, and anyone in the AllUsers grantee group is granted the READ and WRITE permissions. This is generally not recommended. |
auth
optional object
Configuration of the authentication strategy for interacting with AWS services.
auth.assume_role
required string literal
The ARN of an IAM role to assume.
auth.credentials_file
required string literal
Path to the credentials file.
auth.imds
optional object
Configuration for authenticating with AWS through IMDS.
auth.imds.max_attempts
optional uint
Number of IMDS retries for fetching tokens and metadata.
Default: 4
auth.load_timeout_secs
optional uint
Timeout for successfully loading any credentials, in seconds.
Relevant when the default credentials chain or assume_role is used.
auth.profile
optional string literal
The credentials profile to use.
Used to select AWS credentials from a provided credentials file.
Default: default
auth.region
optional string literal
The AWS region to send STS requests to.
If not set, this defaults to the configured region for the service itself.
auth.secret_access_key
required string literal
The AWS secret access key.
batch
optional object
Event batching behavior.
batch.max_bytes
optional uint
The maximum size of a batch that is processed by a sink.
This is based on the uncompressed size of the batched events, before they are serialized/compressed.
Default: 1e+07 (bytes)
batch.max_events
optional uint
The maximum number of events in a batch before it is flushed.
batch.timeout_secs
optional float
The maximum age of a batch before it is flushed.
Default: 300 (seconds)
bucket
required string literal
The S3 bucket name.
This must not include a leading s3:// or a trailing /.
buffer
optional object
Configures the buffering behavior for this sink.
More information about the individual buffer types, and buffer behavior, can be found in the Buffering Model section.
buffer.max_events
optional uint
The maximum number of events allowed in the buffer.
Relevant when: type = "memory"
Default: 500
buffer.max_size
required uint
The maximum size of the buffer on disk.
Must be at least ~256 megabytes (268435488 bytes).
Relevant when: type = "disk"
buffer.type
optional string literal enum
The type of buffer to use.
Option | Description |
---|---|
disk | Events are buffered on disk. This is less performant, but more durable. Data that has been synchronized to disk will not be lost if Vector is restarted forcefully or crashes. Data is synchronized to disk every 500ms. |
memory | Events are buffered in memory. This is more performant, but less durable. Data will be lost if Vector is restarted forcefully or crashes. |
Default: memory
buffer.when_full
optional string literal enum
Event handling behavior when a buffer is full.
Option | Description |
---|---|
block | Wait for free space in the buffer. This applies backpressure up the topology, signalling that sources should slow down the acceptance/consumption of events. This means that while no data is lost, data will pile up at the edge. |
drop_newest | Drops the event instead of waiting for free space in the buffer. The event will be intentionally dropped. This mode is typically used when performance is the highest priority, and it is preferable to temporarily lose events rather than cause a slowdown in the acceptance/consumption of events. |
Default: block
compression
optional string literal enum
Compression configuration.
All compression algorithms use the default compression level unless otherwise specified.
Some cloud storage API clients and browsers handle decompression transparently, so depending on how they are accessed, files may not always appear to be compressed.
Default: gzip
content_encoding
optional string literal
Overrides what content encoding has been applied to the object.
Directly comparable to the Content-Encoding HTTP header.
If not specified, the compression scheme used dictates this value.
content_type
optional string literal
Overrides the MIME type of the object.
Directly comparable to the Content-Type HTTP header.
If not specified, the compression scheme used dictates this value.
When compression is set to none, the value text/x-log is used.
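For example, a sketch of shipping gzipped JSON while advertising the encoding and MIME type to downstream readers (the content_type value here is illustrative, not a requirement):
[sinks.my_sink_id]
compression = "gzip"
content_encoding = "gzip"
content_type = "application/x-ndjson"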
encoding
required object
Configures how events are encoded into raw bytes.
encoding.avro
required object
Apache Avro-specific encoder options.
Relevant when: codec = "avro"
encoding.avro.schema
required string literal
The Avro schema.
encoding.codec
required string literal enum
The codec to use for encoding events.
Option | Description |
---|---|
avro | Encodes an event as an Apache Avro message. |
csv | Encodes an event as a CSV message. This codec must be configured with fields to encode. |
gelf | Encodes an event as a GELF message. |
json | Encodes an event as JSON. |
logfmt | Encodes an event as a logfmt message. |
native | Encodes an event in the native Protocol Buffers format. This codec is experimental. |
native_json | Encodes an event in the native JSON format. This codec is experimental. |
raw_message | No encoding. This encoding uses the message field of a log event. Be careful if you are modifying your log events (for example, by using a remap transform) and removing the message field while doing additional parsing on it, as this could lead to the encoding emitting empty strings for the given event. |
text | Plain text encoding. This encoding uses the message field of a log event. Be careful if you are modifying your log events (for example, by using a remap transform) and removing the message field while doing additional parsing on it, as this could lead to the encoding emitting empty strings for the given event. |
encoding.csv
required object
The CSV serializer options.
Relevant when: codec = "csv"
encoding.csv.fields
required [string]
Configures the fields that will be encoded, as well as the order in which they appear in the output.
If a field is not present in the event, the output will be an empty string.
Values of type Array, Object, and Regex are not supported and the output will be an empty string.
encoding.except_fields
optional [string]
List of fields that are excluded from the encoded event.
encoding.metric_tag_values
optional string literal enum
Controls how metric tag values are encoded.
When set to single, only the last non-bare value of tags are displayed with the metric. When set to full, all metric tags are exposed as separate assignments.
Relevant when: codec = "json" or codec = "text"
Option | Description |
---|---|
full | All tags are exposed as arrays of either string or null values. |
single | Tag values are exposed as single strings, the same as they were before this config option. Tags with multiple values show the last assigned value, and null values are ignored. |
Default: single
encoding.only_fields
optional [string]
List of fields that are included in the encoded event.
encoding.timestamp_format
optional string literal enum
Format used for timestamp fields.
Option | Description |
---|---|
rfc3339 | Represent the timestamp as a RFC 3339 timestamp. |
unix | Represent the timestamp as a Unix timestamp. |
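As a sketch, encoding events as JSON while pruning a field and using Unix timestamps (the field name in except_fields is illustrative and assumes such a field exists on your events):
[sinks.my_sink_id.encoding]
codec = "json"
except_fields = [ "_internal_metadata" ]
timestamp_format = "unix"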
endpoint
optional string literal
Custom endpoint for use with AWS-compatible services.
filename_append_uuid
optional bool
Whether or not to append a UUID v4 token to the end of the object key.
The UUID is appended to the timestamp portion of the object key, such that if the object key generated is date=2022-07-18/1658176486, setting this field to true results in an object key that looks like date=2022-07-18/1658176486-30f6652c-71da-4f9f-800d-a1189c47c547.
This ensures there are no name collisions, and can be useful in high-volume workloads where object keys must be unique.
Default: true
filename_extension
optional string literal
The filename extension to use in the object key.
This overrides setting the extension based on the configured compression.
filename_time_format
optional string literal
The timestamp format for the time component of the object key.
By default, object keys are appended with a timestamp that reflects when the objects are sent to S3, such that the resulting object key is functionally equivalent to joining the key prefix with the formatted timestamp, such as date=2022-07-18/1658176486.
This would represent a key_prefix set to date=%F/ and the timestamp of Mon Jul 18 2022 20:34:44 GMT+0000, with the filename_time_format being set to %s, which renders timestamps in seconds since the Unix epoch.
Supports the common strftime specifiers found in most languages.
When set to an empty string, no timestamp is appended to the key prefix.
Default: %s
framing
optional object
Framing configuration.
framing.character_delimited
required object
Options for the character delimited encoder.
Relevant when: method = "character_delimited"
framing.character_delimited.delimiter
required uint
The ASCII (7-bit) character that is used to delimit byte sequences.
framing.method
required string literal enum
The framing method.
Option | Description |
---|---|
bytes | Event data is not delimited at all. |
character_delimited | Event data is delimited by a single ASCII (7-bit) character. |
length_delimited | Event data is prefixed with its length in bytes. The prefix is a 32-bit unsigned integer, little endian. |
newline_delimited | Event data is delimited by a newline (LF) character. |
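For instance, a sketch that makes newline-delimited framing explicit:
[sinks.my_sink_id.framing]
method = "newline_delimited"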
grant_full_control
optional string literal
Grants READ, READ_ACP, and WRITE_ACP permissions on the created objects to the named grantee.
This allows the grantee to read the created objects and their metadata, as well as read and modify the ACL on the created objects.
grant_read
optional string literal
Grants READ permissions on the created objects to the named grantee.
This allows the grantee to read the created objects and their metadata.
grant_read_acp
optional string literal
Grants READ_ACP permissions on the created objects to the named grantee.
This allows the grantee to read the ACL on the created objects.
grant_write_acp
optional string literal
Grants WRITE_ACP permissions on the created objects to the named grantee.
This allows the grantee to modify the ACL on the created objects.
healthcheck
optional object
Healthcheck configuration.
healthcheck.enabled
optional bool
Whether or not to check the health of the sink when Vector starts up.
Default: true
inputs
required [string]
A list of upstream source or transform IDs.
Wildcards (*) are supported.
See configuration for more info.
key_prefix
optional string template
A prefix to apply to all object keys.
Prefixes are useful for partitioning objects, such as by creating an object key that stores objects under a particular directory. If using a prefix for this purpose, it must end in / to act as a directory path. A trailing / is not automatically added.
Default: date=%F
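Because key_prefix is a template, it can interpolate event fields. For example, a sketch that partitions objects by date and application (the application_id field is illustrative and assumes such a field exists on your events):
[sinks.my_sink_id]
key_prefix = "date=%F/application_id={{ application_id }}/"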
proxy
optional object
Proxy configuration.
Configure to proxy traffic through an HTTP(S) proxy when making external requests.
Similar to common proxy configuration convention, you can set different proxies to use based on the type of traffic being proxied, as well as set specific hosts that should not be proxied.
proxy.http
optional string literal
Proxy endpoint to use when proxying HTTP traffic.
Must be a valid URI string.
proxy.https
optional string literal
Proxy endpoint to use when proxying HTTPS traffic.
Must be a valid URI string.
proxy.no_proxy
optional [string]
A list of hosts to avoid proxying.
Multiple patterns are allowed:
Pattern | Example match |
---|---|
Domain names | example.com matches requests to example.com |
Wildcard domains | .example.com matches requests to example.com and its subdomains |
IP addresses | 127.0.0.1 matches requests to 127.0.0.1 |
CIDR blocks | 192.168.0.0/16 matches requests to any IP addresses in this range |
Splat | * matches all hosts |
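A sketch of routing this sink's requests through a proxy (the endpoint and exempt host are illustrative):
[sinks.my_sink_id.proxy]
https = "http://proxy.example.com:3128"
no_proxy = [ "169.254.169.254" ]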
request
optional object
Middleware settings for outbound requests.
Various settings can be configured, such as concurrency and rate limits, timeouts, etc.
request.adaptive_concurrency
optional object
Configuration of adaptive concurrency parameters.
These parameters typically do not require changes from the default, and incorrect values can lead to meta-stable or unstable performance and sink behavior. Proceed with caution.
request.adaptive_concurrency.decrease_ratio
optional float
The fraction of the current value to set the new concurrency limit when decreasing the limit.
Valid values are greater than 0 and less than 1. Smaller values cause the algorithm to scale back rapidly when latency increases.
Note that the new limit is rounded down after applying this ratio.
Default: 0.9
request.adaptive_concurrency.ewma_alpha
optional float
The weighting of new measurements compared to older measurements.
Valid values are greater than 0 and less than 1.
ARC uses an exponentially weighted moving average (EWMA) of past RTT measurements as a reference to compare with the current RTT. Smaller values cause this reference to adjust more slowly, which may be useful if a service has unusually high response variability.
Default: 0.4
request.adaptive_concurrency.rtt_deviation_scale
optional float
Scale of RTT deviations which are not considered anomalous.
Valid values are greater than or equal to 0, and we expect reasonable values to range from 1.0 to 3.0.
When calculating the past RTT average, we also compute a secondary “deviation” value that indicates how variable those values are. We use that deviation when comparing the past RTT average to the current measurements, so we can ignore increases in RTT that are within an expected range. This factor is used to scale up the deviation to an appropriate range. Larger values cause the algorithm to ignore larger increases in the RTT.
Default: 2.5
request.concurrency
optional string literal enum uint
Configuration for outbound request concurrency.
Option | Description |
---|---|
adaptive | Concurrency will be managed by Vector’s Adaptive Request Concurrency feature. |
none | A fixed concurrency of 1. Only one request can be outstanding at any given time. |
Default: none
request.rate_limit_duration_secs
optional uint
The time window used for the rate_limit_num option.
Default: 1 (seconds)
request.rate_limit_num
optional uint
The maximum number of requests allowed within the rate_limit_duration_secs time window.
Default: 9.223372036854776e+18 (requests)
request.retry_attempts
optional uint
The maximum number of retries to make for failed requests.
The default, for all intents and purposes, represents an infinite number of retries.
Default: 9.223372036854776e+18 (retries)
request.retry_initial_backoff_secs
optional uint
The amount of time to wait before attempting the first retry for a failed request.
After the first retry has failed, the Fibonacci sequence is used to select future backoffs.
Default: 1 (seconds)
request.retry_max_duration_secs
optional uint
The maximum amount of time to wait between retries.
Default: 3600 (seconds)
request.timeout_secs
optional uint
The time a request can take before being aborted.
Datadog highly recommends that you do not lower this value below the service’s internal timeout, as this could create orphaned requests, pile on retries, and result in duplicate data downstream.
Default: 60 (seconds)
server_side_encryption
optional string literal enum
The Server-side Encryption (SSE) algorithm used when storing these objects.
Option | Description |
---|---|
AES256 | Each object is encrypted with AES-256 using a unique key. This corresponds to the SSE-S3 option. |
aws:kms | Each object is encrypted with AES-256 using keys managed by AWS KMS. Depending on whether or not a KMS key ID is specified, this corresponds either to the SSE-KMS option (keys generated/managed by KMS) or the SSE-C option (keys generated by the customer, managed by KMS). |
ssekms_key_id
optional string template
Specifies the ID of the AWS Key Management Service (AWS KMS) symmetrical customer managed customer master key (CMK) that is used for the created objects.
Only applies when server_side_encryption is configured to use KMS.
If not specified, Amazon S3 uses the AWS managed CMK in AWS to protect the data.
storage_class
optional string literal enum
The storage class for the created objects.
See the S3 Storage Classes for more details.
Option | Description |
---|---|
DEEP_ARCHIVE | Glacier Deep Archive. |
GLACIER | Glacier Flexible Retrieval. |
INTELLIGENT_TIERING | Intelligent Tiering. |
ONEZONE_IA | Infrequently Accessed (single Availability Zone). |
REDUCED_REDUNDANCY | Reduced Redundancy. |
STANDARD | Standard Redundancy. |
STANDARD_IA | Infrequently Accessed. |
Default: STANDARD
tls
optional object
TLS configuration.
tls.alpn_protocols
optional [string]
Sets the list of supported ALPN protocols.
Declare the supported ALPN protocols, which are used during negotiation with peer. They are prioritized in the order that they are defined.
tls.ca_file
optional string literal
Absolute path to an additional CA certificate file.
The certificate must be in the DER or PEM (X.509) format. Additionally, the certificate can be provided as an inline string in PEM format.
tls.crt_file
optional string literal
Absolute path to a certificate file used to identify this server.
The certificate must be in DER, PEM (X.509), or PKCS#12 format. Additionally, the certificate can be provided as an inline string in PEM format.
If this is set, and is not a PKCS#12 archive, key_file must also be set.
tls.key_file
optional string literal
Absolute path to a private key file used to identify this server.
The key must be in DER or PEM (PKCS#8) format. Additionally, the key can be provided as an inline string in PEM format.
tls.key_pass
optional string literal
Passphrase used to unlock the encrypted key file.
This has no effect unless key_file is set.
tls.verify_certificate
optional bool
Enables certificate verification.
If enabled, certificates must not be expired and must be issued by a trusted issuer. This verification operates in a hierarchical manner, checking that the leaf certificate (the certificate presented by the client/server) is not only valid, but that the issuer of that certificate is also valid, and so on until the verification process reaches a root certificate.
Relevant for both incoming and outgoing connections.
Do NOT set this to false unless you understand the risks of not verifying the validity of certificates.
tls.verify_hostname
optional bool
Enables hostname verification.
If enabled, the hostname used to connect to the remote host must be present in the TLS certificate presented by the remote host, either as the Common Name or as an entry in the Subject Alternative Name extension.
Only relevant for outgoing connections.
Do NOT set this to false unless you understand the risks of not verifying the remote hostname.
Environment variables
AWS_ACCESS_KEY_ID
common optional string literal
The AWS access key ID used for AWS authentication when communicating with AWS services.
AWS_CONFIG_FILE
common optional string literal
Specifies the location of the file that the AWS CLI uses to store configuration profiles.
Default: ~/.aws/config
AWS_DEFAULT_REGION
common optional string literal
The default AWS region.
AWS_PROFILE
common optional string literal
Specifies the name of the CLI profile with the credentials and options to use.
Default: default
AWS_ROLE_SESSION_NAME
common optional string literal
Specifies a name to associate with the role session.
AWS_SECRET_ACCESS_KEY
common optional string literal
The AWS secret access key used for AWS authentication when communicating with AWS services.
AWS_SESSION_TOKEN
common optional string literal
The AWS session token used for AWS authentication when communicating with AWS services.
AWS_SHARED_CREDENTIALS_FILE
common optional string literal
Specifies the location of the file that the AWS CLI uses to store access keys.
Default: ~/.aws/credentials
Telemetry
Metrics
Each metric below is tagged with component_id (the deprecated component_name tag carries the same value as component_id).
buffer_byte_size
gauge
The number of bytes currently in the buffer.
buffer_discarded_events_total
counter
The number of events dropped by this non-blocking buffer.
buffer_events
gauge
The number of events currently in the buffer.
buffer_received_event_bytes_total
counter
The number of bytes received by this buffer.
buffer_received_events_total
counter
The number of events received by this buffer.
buffer_sent_event_bytes_total
counter
The number of bytes sent by this buffer.
buffer_sent_events_total
counter
The number of events sent by this buffer.
component_discarded_events_total
counter
The number of events dropped by this component.
component_errors_total
counter
The total number of errors encountered by this component.
component_received_event_bytes_total
counter
The number of event bytes accepted by this component.
component_received_events_count
histogram
A histogram of the number of events passed in each internal batch in Vector’s internal topology.
Note that this is separate from sink-level batching. It is mostly useful for low-level debugging of performance issues in Vector due to small internal batches.
component_received_events_total
counter
The number of events accepted by this component.
component_sent_bytes_total
counter
The number of raw bytes sent by this component to destination sinks.
component_sent_event_bytes_total
counter
The total number of event bytes emitted by this component.
component_sent_events_total
counter
The total number of events emitted by this component.
utilization
gauge
A ratio from 0 to 1 of the load on a component. A value of 0 indicates a completely idle component that is simply waiting for input; a value of 1 indicates a component that is never idle.
Permissions
Policy | Required for | Required when |
---|---|---|
s3:ListBucket | healthcheck | |
s3:PutObject | operation | |
How it works
AWS authentication
Vector checks for AWS credentials in the following order:
- The auth.access_key_id and auth.secret_access_key options.
- The AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY environment variables.
- The AWS credentials file (usually located at ~/.aws/credentials).
- The IAM instance profile (only works if running on an EC2 instance with an instance profile/role). Requires IMDSv2 to be enabled. For EKS, you may need to increase the metadata token response hop limit to 2.
If no credentials are found, Vector’s health check fails and an error is logged. If your AWS credentials expire, Vector will automatically search for up-to-date credentials in the places (and order) described above.
Obtaining an access key
Static credentials can be supplied directly via the auth.access_key_id and auth.secret_access_key options.
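A minimal sketch, using AWS's documented example key pair as placeholders (prefer the credential chain described above where possible):
[sinks.my_sink_id.auth]
access_key_id = "AKIAIOSFODNN7EXAMPLE"
secret_access_key = "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"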
Assuming roles
Vector can assume an AWS IAM role via the auth.assume_role option. This is an optional setting that is helpful for a variety of use cases, such as cross-account access.
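For example, a sketch that assumes a role (the ARN is illustrative):
[sinks.my_sink_id.auth]
assume_role = "arn:aws:iam::123456789012:role/vector-s3-writer"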
Buffers and batches
This component buffers and batches data. Rather than treating buffering and batching as global concerns, Vector treats them as sink-specific concepts. This isolates sinks, ensuring service disruptions are contained and delivery guarantees are honored.
Batches are flushed when one of two conditions is met:
- The batch age meets or exceeds the configured timeout_secs.
- The batch size meets or exceeds the configured max_bytes or max_events.
Buffers are controlled via the buffer.* options.
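For example, a sketch that flushes smaller batches more frequently and uses a disk buffer (the sizes are illustrative; max_size honors the minimum noted above):
[sinks.my_sink_id.batch]
max_bytes = 5000000
timeout_secs = 60

[sinks.my_sink_id.buffer]
type = "disk"
max_size = 268435488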
Cross account object writing
If you are writing objects to an S3 bucket owned by another AWS account, consider setting the grant_full_control option to the bucket owner's canonical user ID. AWS provides a full tutorial for this use case. If you don't know the bucket owner's canonical ID, you can find it by following this tutorial.
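A sketch (the canonical user ID below is the illustrative value used in the examples above):
[sinks.my_sink_id]
grant_full_control = "79a59df900b949e55d96a1e698fbacedfd6e09d98eacf8f8d5218e7cd47ef2be"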
Health checks
Require health checks
If you’d like to exit immediately upon a health check failure, you can pass the --require-healthy flag:
vector --config /etc/vector/vector.toml --require-healthy
Disable health checks
If you’d like to disable health checks for this sink, set the healthcheck option to false.
Object Access Control List (ACL)
AWS S3 supports access control lists (ACL) for buckets and objects. You can set the object-level ACL by using one of the acl, grant_full_control, grant_read, grant_read_acp, or grant_write_acp options.
acl.* vs grant_* options
The grant_* options name a specific entity to grant access to. The acl option is one of a set of specific canned ACLs that can only name the owner or world.
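For example, a sketch applying a canned ACL so the bucket owner retains full control of the created objects:
[sinks.my_sink_id]
acl = "bucket-owner-full-control"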
Object naming
Vector uses two different naming schemes for S3 objects. When compression is enabled (gzip, the default), Vector uses this scheme:
<key_prefix><timestamp>-<uuidv4>.log.gz
If compression isn’t enabled, Vector uses this scheme (only the file extension is different):
<key_prefix><timestamp>-<uuidv4>.log
Some sample S3 object names (with and without compression, respectively):
date=2019-06-18/1560886634-fddd7a0e-fad9-4f7e-9bce-00ae5debc563.log.gz
date=2019-06-18/1560886634-fddd7a0e-fad9-4f7e-9bce-00ae5debc563.log
Vector appends a UUIDv4 token to ensure there are no naming conflicts in the unlikely event that two Vector instances are writing data at the same time.
You can control the resulting name via the key_prefix, filename_time_format, and filename_append_uuid options.
For example, to store objects at the root S3 folder, without a timestamp or UUID, use these configuration options:
key_prefix = "{{ my_file_name }}"
filename_time_format = ""
filename_append_uuid = false
Object Tags & metadata
Vector currently only supports AWS S3 object tags and does not support object metadata. If you require metadata support, see issue #1694.
We believe tags are more flexible since they are separate from the actual S3 object. You can freely modify tags without modifying the object. Conversely, object metadata requires a full rewrite of the object to make changes.
Rate limits & adaptive concurrency
Adaptive Request Concurrency (ARC)
Adaptive Request Concurrency is a feature of Vector that does away with static concurrency limits and automatically optimizes HTTP concurrency based on downstream service responses. The underlying mechanism is a feedback loop inspired by TCP congestion control algorithms. Check out the announcement blog post for more details.
We highly recommend enabling this feature as it improves performance and reliability of Vector and the systems it communicates with. As such, we have made it the default, and no further configuration is required.
Static concurrency
If Adaptive Request Concurrency is not for you, you can manually set static concurrency
limits by specifying an integer for request.concurrency
:
[sinks.my-sink]
request.concurrency = 10
Rate limits
In addition to limiting request concurrency, you can also limit the overall request throughput via the request.rate_limit_duration_secs and request.rate_limit_num options.
[sinks.my-sink]
request.rate_limit_duration_secs = 1
request.rate_limit_num = 10
These will apply to both adaptive and fixed request.concurrency values.
Retry policy
Vector will retry failed requests. You can control the retry behavior via the request.retry_attempts and request.retry_backoff_secs options.
Server-Side Encryption (SSE)
You can apply server-side encryption to the created objects via the server_side_encryption option.
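For example, a sketch using KMS-managed keys (the key ID reuses the illustrative value from the examples above):
[sinks.my_sink_id]
server_side_encryption = "aws:kms"
ssekms_key_id = "abcd1234"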
Storage class
You can set the storage class for the created objects via the storage_class option.
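For example, a sketch targeting infrequently accessed storage:
[sinks.my_sink_id]
storage_class = "STANDARD_IA"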