AWS S3

Store observability events in the AWS S3 object storage system

status: stable delivery: at-least-once egress: batch state: stateless

Configuration

Example configurations

{
  "sinks": {
    "my_sink_id": {
      "type": "aws_s3",
      "inputs": [
        "my-source-or-transform-id"
      ],
      "bucket": "my-bucket",
      "key_prefix": "date=%F/",
      "compression": "gzip",
      "region": "us-east-1"
    }
  }
}
[sinks.my_sink_id]
type = "aws_s3"
inputs = [ "my-source-or-transform-id" ]
bucket = "my-bucket"
key_prefix = "date=%F/"
compression = "gzip"
region = "us-east-1"
---
sinks:
  my_sink_id:
    type: aws_s3
    inputs:
      - my-source-or-transform-id
    bucket: my-bucket
    key_prefix: date=%F/
    compression: gzip
    region: us-east-1
{
  "sinks": {
    "my_sink_id": {
      "type": "aws_s3",
      "inputs": [
        "my-source-or-transform-id"
      ],
      "endpoint": "127.0.0.0:5000/path/to/service",
      "acl": "private",
      "bucket": "my-bucket",
      "content_encoding": "gzip",
      "content_type": "text/x-log",
      "filename_append_uuid": true,
      "filename_extension": "log",
      "filename_time_format": "%s",
      "grant_full_control": "79a59df900b949e55d96a1e698fbacedfd6e09d98eacf8f8d5218e7cd47ef2be",
      "grant_read": "79a59df900b949e55d96a1e698fbacedfd6e09d98eacf8f8d5218e7cd47ef2be",
      "grant_read_acp": "79a59df900b949e55d96a1e698fbacedfd6e09d98eacf8f8d5218e7cd47ef2be",
      "grant_write_acp": "79a59df900b949e55d96a1e698fbacedfd6e09d98eacf8f8d5218e7cd47ef2be",
      "key_prefix": "date=%F/",
      "server_side_encryption": "AES256",
      "ssekms_key_id": "abcd1234",
      "storage_class": "STANDARD",
      "compression": "gzip",
      "region": "us-east-1"
    }
  }
}
[sinks.my_sink_id]
type = "aws_s3"
inputs = [ "my-source-or-transform-id" ]
endpoint = "127.0.0.0:5000/path/to/service"
acl = "private"
bucket = "my-bucket"
content_encoding = "gzip"
content_type = "text/x-log"
filename_append_uuid = true
filename_extension = "log"
filename_time_format = "%s"
grant_full_control = "79a59df900b949e55d96a1e698fbacedfd6e09d98eacf8f8d5218e7cd47ef2be"
grant_read = "79a59df900b949e55d96a1e698fbacedfd6e09d98eacf8f8d5218e7cd47ef2be"
grant_read_acp = "79a59df900b949e55d96a1e698fbacedfd6e09d98eacf8f8d5218e7cd47ef2be"
grant_write_acp = "79a59df900b949e55d96a1e698fbacedfd6e09d98eacf8f8d5218e7cd47ef2be"
key_prefix = "date=%F/"
server_side_encryption = "AES256"
ssekms_key_id = "abcd1234"
storage_class = "STANDARD"
compression = "gzip"
region = "us-east-1"
---
sinks:
  my_sink_id:
    type: aws_s3
    inputs:
      - my-source-or-transform-id
    endpoint: 127.0.0.0:5000/path/to/service
    acl: private
    bucket: my-bucket
    content_encoding: gzip
    content_type: text/x-log
    filename_append_uuid: true
    filename_extension: log
    filename_time_format: "%s"
    grant_full_control: 79a59df900b949e55d96a1e698fbacedfd6e09d98eacf8f8d5218e7cd47ef2be
    grant_read: 79a59df900b949e55d96a1e698fbacedfd6e09d98eacf8f8d5218e7cd47ef2be
    grant_read_acp: 79a59df900b949e55d96a1e698fbacedfd6e09d98eacf8f8d5218e7cd47ef2be
    grant_write_acp: 79a59df900b949e55d96a1e698fbacedfd6e09d98eacf8f8d5218e7cd47ef2be
    key_prefix: date=%F/
    server_side_encryption: AES256
    ssekms_key_id: abcd1234
    storage_class: STANDARD
    compression: gzip
    region: us-east-1

acl

optional string literal enum
Canned ACL to apply to the created objects. For more information, see Canned ACL.
Enum options (string literal)
  authenticated-read: Owner gets FULL_CONTROL. The AuthenticatedUsers group gets READ access.
  aws-exec-read: Owner gets FULL_CONTROL. Amazon EC2 gets READ access to GET an Amazon Machine Image (AMI) bundle from Amazon S3.
  bucket-owner-full-control: Both the object owner and the bucket owner get FULL_CONTROL over the object.
  bucket-owner-read: Object owner gets FULL_CONTROL. Bucket owner gets READ access.
  log-delivery-write: The LogDelivery group gets WRITE and READ_ACP permissions on the bucket. For more information about logs, see Amazon S3 Server Access Logging.
  private: Owner gets FULL_CONTROL. No one else has access rights (default).
  public-read: Owner gets FULL_CONTROL. The AllUsers group gets READ access.
  public-read-write: Owner gets FULL_CONTROL. The AllUsers group gets READ and WRITE access. Granting this on a bucket is generally not recommended.

auth

optional object
Options for the authentication strategy.

auth.access_key_id

optional string literal
The AWS access key id. Used for AWS authentication when communicating with AWS services.
Examples
"AKIAIOSFODNN7EXAMPLE"

auth.assume_role

optional string literal
The ARN of an IAM role to assume at startup.
Examples
"arn:aws:iam::123456789098:role/my_role"

auth.credentials_file

optional string literal
The path to the AWS credentials file. Used for AWS authentication when communicating with AWS services.
Examples
"/path/to/aws/credentials"

auth.profile

optional string literal
The AWS profile name. Used to select AWS credentials from a provided credentials file.
Examples
"develop"
default: default

auth.secret_access_key

optional string literal
The AWS secret access key. Used for AWS authentication when communicating with AWS services.
Examples
"wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"

batch

common optional object
Configures the sink batching behavior.

batch.max_bytes

optional uint
The maximum size of a batch, in bytes, before it is flushed.
default: 1e+07 (bytes)

batch.timeout_secs

optional uint
The maximum age of a batch before it is flushed.
default: 300 (seconds)

bucket

required string literal
The S3 bucket name. Do not include a leading s3:// or a trailing /.
Examples
"my-bucket"

buffer

optional object
Configures the sink specific buffer behavior.

buffer.max_events

optional uint
The maximum number of events allowed in the buffer.
Relevant when: type = "memory"
default: 500 (events)

buffer.max_size

optional uint
The maximum size of the buffer on the disk.
Relevant when: type = "disk"
Examples
104900000

buffer.type

optional string literal enum
The buffer’s type and storage mechanism.
Enum options
  disk: Stores the sink’s buffer on disk. This is less performant, but durable. Data will not be lost between restarts. Will also hold data in memory to enhance performance. WARNING: This may stall the sink if disk performance isn’t on par with the throughput. For comparison, AWS gp2 volumes are usually too slow for common cases.
  memory: Stores the sink’s buffer in memory. This is more performant, but less durable. Data will be lost if Vector is restarted forcefully.
default: memory

buffer.when_full

optional string literal enum
The behavior when the buffer becomes full.
Enum options
  block: Applies back pressure when the buffer is full. This prevents data loss, but will cause data to pile up on the edge.
  drop_newest: Drops new data as it’s received. This data is lost. This should be used when performance is the highest priority.
default: block

compression

common optional string literal enum

The compression strategy used to compress the encoded event data before transmission.

Some cloud storage API clients and browsers will handle decompression transparently, so files may not always appear to be compressed depending on how they are accessed.

Enum options (string literal)
  gzip: Gzip standard DEFLATE compression.
  none: No compression.
default: gzip

content_encoding

optional string literal
Specifies what content encodings have been applied to the object and thus what decoding mechanisms must be applied to obtain the media-type referenced by the Content-Type header field. By default, this is calculated from the compression value.
Examples
"gzip"

content_type

optional string literal
A standard MIME type describing the format of the contents.
default: text/x-log

encoding

common optional object
Configures the encoding specific sink behavior.

encoding.codec

optional string literal enum
The encoding codec used to serialize the events before outputting.
Enum options
  ndjson: Newline delimited list of JSON encoded events.
  text: Newline delimited list of messages generated from the message key of each event.
Examples
"ndjson"
"text"

encoding.except_fields

optional [string]
Prevent the sink from encoding the specified fields.

encoding.only_fields

optional [string]
Makes the sink encode only the specified fields.

encoding.timestamp_format

optional string literal enum
How to format event timestamps.
Enum options
  rfc3339: Formats as an RFC 3339 string
  unix: Formats as a Unix timestamp
default: rfc3339

endpoint

optional string literal
Custom endpoint for use with AWS-compatible services. Providing a value for this option makes the region option irrelevant.
Examples
"127.0.0.0:5000/path/to/service"
Relevant when: region = null

filename_append_uuid

optional bool
Whether or not to append a UUIDv4 token to the end of the object file name. This ensures there are no name collisions in high-volume use cases.
default: true

filename_extension

optional string literal
The filename extension to use in the object name.
default: log

filename_time_format

optional string strftime
The format of the resulting object file name. strftime specifiers are supported.
default: %s

grant_full_control

optional string literal
Gives the named grantee READ, READ_ACP, and WRITE_ACP permissions on the created objects.
Examples
"79a59df900b949e55d96a1e698fbacedfd6e09d98eacf8f8d5218e7cd47ef2be"
"person@email.com"
"http://acs.amazonaws.com/groups/global/AllUsers"

grant_read

optional string literal
Allows the named grantee to read the created objects and their metadata.
Examples
"79a59df900b949e55d96a1e698fbacedfd6e09d98eacf8f8d5218e7cd47ef2be"
"person@email.com"
"http://acs.amazonaws.com/groups/global/AllUsers"

grant_read_acp

optional string literal
Allows the named grantee to read the created objects' ACL.
Examples
"79a59df900b949e55d96a1e698fbacedfd6e09d98eacf8f8d5218e7cd47ef2be"
"person@email.com"
"http://acs.amazonaws.com/groups/global/AllUsers"

grant_write_acp

optional string literal
Allows the named grantee to write the created objects' ACL.
Examples
"79a59df900b949e55d96a1e698fbacedfd6e09d98eacf8f8d5218e7cd47ef2be"
"person@email.com"
"http://acs.amazonaws.com/groups/global/AllUsers"

healthcheck

common optional object
Health check options for the sink.

healthcheck.enabled

optional bool
Enables/disables the healthcheck upon Vector boot.
default: true

inputs

required [string]

A list of upstream source or transform IDs. Wildcards (*) are supported.

See configuration for more info.

Array string literal
Examples
[
  "my-source-or-transform-id",
  "prefix-*"
]

key_prefix

common optional string template
A prefix to apply to all object key names. This should be used to partition your objects, and it’s important to end this value with a / if you want this to be the root S3 “folder”.
Note: This parameter supports Vector's template syntax, which enables you to use dynamic per-event values.
Examples
"date=%F/"
"date=%F/hour=%H/"
"year=%Y/month=%m/day=%d/"
"application_id={{ application_id }}/date=%F/"
default: date=%F/

proxy

optional object
Configures an HTTP(S) proxy for Vector to use. By default, the globally configured proxy is used.

proxy.enabled

optional bool
If false the proxy will be disabled.
default: true

proxy.http

optional string literal
The URL to proxy HTTP requests through.
Examples
"http://foo.bar:3128"

proxy.https

optional string literal
The URL to proxy HTTPS requests through.
Examples
"http://foo.bar:3128"

proxy.no_proxy

optional [string]

A list of hosts to avoid proxying. Allowed patterns here include:

  Domain names: example.com matches requests to example.com
  Wildcard domains: .example.com matches requests to example.com and its subdomains
  IP addresses: 127.0.0.1 matches requests to 127.0.0.1
  CIDR blocks: 192.168.0.0/16 matches requests to any IP addresses in this range
  Splat: * matches all hosts

region

required string literal
The AWS region of the target service. If endpoint is provided it will override this value since the endpoint includes the region.
Examples
"us-east-1"
Relevant when: endpoint = null

request

optional object
Configures the sink request behavior.

request.adaptive_concurrency

optional object
Configure the adaptive concurrency algorithms. These values have been tuned by optimizing simulated results. In general you should not need to adjust these.

request.adaptive_concurrency.decrease_ratio

optional float
The fraction of the current value to set the new concurrency limit when decreasing the limit. Valid values are greater than 0 and less than 1. Smaller values cause the algorithm to scale back rapidly when latency increases. Note that the new limit is rounded down after applying this ratio.
default: 0.9

request.adaptive_concurrency.ewma_alpha

optional float
The adaptive concurrency algorithm uses an exponentially weighted moving average (EWMA) of past RTT measurements as a reference to compare with the current RTT. This value controls how heavily new measurements are weighted compared to older ones. Valid values are greater than 0 and less than 1. Smaller values cause this reference to adjust more slowly, which may be useful if a service has unusually high response variability.
default: 0.7

request.adaptive_concurrency.rtt_deviation_scale

optional float
When calculating the past RTT average, we also compute a secondary “deviation” value that indicates how variable those values are. We use that deviation when comparing the past RTT average to the current measurements, so we can ignore increases in RTT that are within an expected range. This factor is used to scale up the deviation to an appropriate range. Valid values are greater than or equal to 0, and we expect reasonable values to range from 1.0 to 3.0. Larger values cause the algorithm to ignore larger increases in the RTT.
default: 2

request.concurrency

optional uint
The maximum number of in-flight requests allowed at any given time, or “adaptive” to allow Vector to automatically set the limit based on current network and service conditions.

request.rate_limit_duration_secs

optional uint
The time window, in seconds, used for the rate_limit_num option.
default: 1 (seconds)

request.rate_limit_num

optional uint
The maximum number of requests allowed within the rate_limit_duration_secs time window.
default: 250

request.retry_attempts

optional uint
The maximum number of retries to make for failed requests. The default, for all intents and purposes, represents an infinite number of retries.
default: 1.8446744073709552e+19

request.retry_initial_backoff_secs

optional uint
The amount of time to wait before attempting the first retry for a failed request. Once the first retry has failed, the Fibonacci sequence is used to select future backoffs.
default: 1 (seconds)

request.retry_max_duration_secs

optional uint
The maximum amount of time, in seconds, to wait between retries.
default: 3600 (seconds)

request.timeout_secs

optional uint
The maximum time a request can take before being aborted. It is highly recommended that you do not lower this value below the service’s internal timeout, as this could create orphaned requests, pile on retries, and result in duplicate data downstream.
default: 60 (seconds)

server_side_encryption

optional string literal enum
The Server-side Encryption algorithm used when storing these objects.
Enum options (string literal)
  AES256: 256-bit Advanced Encryption Standard
  aws:kms: AWS managed key encryption

ssekms_key_id

optional string literal
If server_side_encryption has the value "aws:kms", this specifies the ID of the AWS Key Management Service (AWS KMS) symmetric customer managed customer master key (CMK) that will be used for the created objects. If not specified, Amazon S3 uses the AWS managed CMK in AWS KMS to protect the data.
Examples
"abcd1234"

storage_class

optional string literal enum
The storage class for the created objects. See the S3 Storage Classes for more details.
Enum options (string literal)
  DEEP_ARCHIVE: Use for archiving data that rarely needs to be accessed.
  GLACIER: Use for archives where portions of the data might need to be retrieved in minutes.
  INTELLIGENT_TIERING: Stores objects in two access tiers: one tier that is optimized for frequent access and another lower-cost tier that is optimized for infrequently accessed data.
  ONEZONE_IA: Amazon S3 stores the object data in only one Availability Zone.
  REDUCED_REDUNDANCY: Designed for noncritical, reproducible data that can be stored with less redundancy than the STANDARD storage class. AWS recommends that you not use this storage class. The STANDARD storage class is more cost effective.
  STANDARD: The default storage class. If you don’t specify the storage class when you upload an object, Amazon S3 assigns the STANDARD storage class.
  STANDARD_IA: Amazon S3 stores the object data redundantly across multiple geographically separated Availability Zones (similar to the STANDARD storage class).

tags

optional object
The tag-set for the object.

Environment variables

AWS_ACCESS_KEY_ID

common optional string literal
The AWS access key id. Used for AWS authentication when communicating with AWS services.
Examples
AKIAIOSFODNN7EXAMPLE

AWS_CONFIG_FILE

common optional string literal
Specifies the location of the file that the AWS CLI uses to store configuration profiles.
Default: ~/.aws/config

AWS_CREDENTIAL_EXPIRATION

common optional string literal
Expiration time in RFC 3339 format. If unset, credentials won’t expire.
Examples
1996-12-19T16:39:57-08:00

AWS_DEFAULT_REGION

common optional string literal
The default AWS region.
Examples
us-east-1

AWS_PROFILE

common optional string literal
Specifies the name of the CLI profile with the credentials and options to use. This can be the name of a profile stored in a credentials or config file.
Default: default
Examples
my-custom-profile

AWS_ROLE_SESSION_NAME

common optional string literal
Specifies a name to associate with the role session. This value appears in CloudTrail logs for commands performed by the user of this profile.
Examples
vector-session

AWS_SECRET_ACCESS_KEY

common optional string literal
The AWS secret access key. Used for AWS authentication when communicating with AWS services.
Examples
wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY

AWS_SESSION_TOKEN

common optional string literal
The AWS session token. Used for AWS authentication when communicating with AWS services.
Examples
AQoEXAMPLEH4aoAH0gNCAPy...truncated...zrkuWJOgQs8IZZaIv2BXIa2R4Olgk

AWS_SHARED_CREDENTIALS_FILE

common optional string literal
Specifies the location of the file that the AWS CLI uses to store access keys.
Default: ~/.aws/credentials

Telemetry

Metrics

buffer_byte_size

gauge
The number of bytes current in the buffer.
component_id required
The Vector component ID.
component_kind required
The Vector component kind.
component_name required
Deprecated, use component_id instead. The value is the same as component_id.
component_type required
The Vector component type.
host required
The hostname of the system Vector is running on.
pid required
The process ID of the Vector instance.

buffer_discarded_events_total

counter
The number of events dropped by this non-blocking buffer.
component_id required
The Vector component ID.
component_kind required
The Vector component kind.
component_name required
Deprecated, use component_id instead. The value is the same as component_id.
component_type required
The Vector component type.
host required
The hostname of the system Vector is running on.
pid required
The process ID of the Vector instance.

buffer_events

gauge
The number of events currently in the buffer.
component_id required
The Vector component ID.
component_kind required
The Vector component kind.
component_name required
Deprecated, use component_id instead. The value is the same as component_id.
component_type required
The Vector component type.
host required
The hostname of the system Vector is running on.
pid required
The process ID of the Vector instance.

buffer_received_event_bytes_total

counter
The number of bytes received by this buffer.
component_id required
The Vector component ID.
component_kind required
The Vector component kind.
component_name required
Deprecated, use component_id instead. The value is the same as component_id.
component_type required
The Vector component type.
host required
The hostname of the system Vector is running on.
pid required
The process ID of the Vector instance.

buffer_received_events_total

counter
The number of events received by this buffer.
component_id required
The Vector component ID.
component_kind required
The Vector component kind.
component_name required
Deprecated, use component_id instead. The value is the same as component_id.
component_type required
The Vector component type.
host required
The hostname of the system Vector is running on.
pid required
The process ID of the Vector instance.

buffer_sent_event_bytes_total

counter
The number of bytes sent by this buffer.
component_id required
The Vector component ID.
component_kind required
The Vector component kind.
component_name required
Deprecated, use component_id instead. The value is the same as component_id.
component_type required
The Vector component type.
host required
The hostname of the system Vector is running on.
pid required
The process ID of the Vector instance.

buffer_sent_events_total

counter
The number of events sent by this buffer.
component_id required
The Vector component ID.
component_kind required
The Vector component kind.
component_name required
Deprecated, use component_id instead. The value is the same as component_id.
component_type required
The Vector component type.
host required
The hostname of the system Vector is running on.
pid required
The process ID of the Vector instance.

component_received_event_bytes_total

counter
The number of event bytes accepted by this component either from tagged origins like file and uri, or cumulatively from other origins.
component_id required
The Vector component ID.
component_kind required
The Vector component kind.
component_name required
Deprecated, use component_id instead. The value is the same as component_id.
component_type required
The Vector component type.
container_name optional
The name of the container from which the data originated.
file optional
The file from which the data originated.
host required
The hostname of the system Vector is running on.
mode optional
The connection mode used by the component.
peer_addr optional
The IP from which the data originated.
peer_path optional
The pathname from which the data originated.
pid required
The process ID of the Vector instance.
pod_name optional
The name of the pod from which the data originated.
uri optional
The sanitized URI from which the data originated.

component_received_events_total

counter
The number of events accepted by this component either from tagged origins like file and uri, or cumulatively from other origins.
component_id required
The Vector component ID.
component_kind required
The Vector component kind.
component_name required
Deprecated, use component_id instead. The value is the same as component_id.
component_type required
The Vector component type.
container_name optional
The name of the container from which the data originated.
file optional
The file from which the data originated.
host required
The hostname of the system Vector is running on.
mode optional
The connection mode used by the component.
peer_addr optional
The IP from which the data originated.
peer_path optional
The pathname from which the data originated.
pid required
The process ID of the Vector instance.
pod_name optional
The name of the pod from which the data originated.
uri optional
The sanitized URI from which the data originated.

component_sent_bytes_total

counter
The number of raw bytes sent by this component to destination sinks.
component_id required
The Vector component ID.
component_kind required
The Vector component kind.
component_name required
Deprecated, use component_id instead. The value is the same as component_id.
component_type required
The Vector component type.
endpoint optional
The endpoint to which the bytes were sent. For HTTP, this will be the host and path only, excluding the query string.
file optional
The absolute path of the destination file.
host required
The hostname of the system Vector is running on.
pid required
The process ID of the Vector instance.
protocol required
The protocol used to send the bytes.
region optional
The AWS region name to which the bytes were sent. In some configurations, this may be a literal hostname.

component_sent_event_bytes_total

counter
The total number of event bytes emitted by this component.
component_id required
The Vector component ID.
component_kind required
The Vector component kind.
component_name required
Deprecated, use component_id instead. The value is the same as component_id.
component_type required
The Vector component type.
host required
The hostname of the system Vector is running on.
pid required
The process ID of the Vector instance.

component_sent_events_total

counter
The total number of events emitted by this component.
component_id required
The Vector component ID.
component_kind required
The Vector component kind.
component_name required
Deprecated, use component_id instead. The value is the same as component_id.
component_type required
The Vector component type.
host required
The hostname of the system Vector is running on.
pid required
The process ID of the Vector instance.

events_discarded_total

counter
The total number of events discarded by this component.
host required
The hostname of the system Vector is running on.
pid required
The process ID of the Vector instance.
reason required
The type of the error

events_in_total

counter
The number of events accepted by this component either from tagged origins like file and uri, or cumulatively from other origins. This metric is deprecated and will be removed in a future version. Use component_received_events_total instead.
component_id required
The Vector component ID.
component_kind required
The Vector component kind.
component_name required
Deprecated, use component_id instead. The value is the same as component_id.
component_type required
The Vector component type.
container_name optional
The name of the container from which the data originated.
file optional
The file from which the data originated.
host required
The hostname of the system Vector is running on.
mode optional
The connection mode used by the component.
peer_addr optional
The IP from which the data originated.
peer_path optional
The pathname from which the data originated.
pid required
The process ID of the Vector instance.
pod_name optional
The name of the pod from which the data originated.
uri optional
The sanitized URI from which the data originated.

processing_errors_total

counter
The total number of processing errors encountered by this component.
component_id required
The Vector component ID.
component_kind required
The Vector component kind.
component_name required
Deprecated, use component_id instead. The value is the same as component_id.
component_type required
The Vector component type.
error_type required
The type of the error
host required
The hostname of the system Vector is running on.
pid required
The process ID of the Vector instance.

utilization

gauge
A ratio from 0 to 1 of the load on a component. A value of 0 would indicate a completely idle component that is simply waiting for input. A value of 1 would indicate a component that is never idle. This value is updated every 5 seconds.
component_id required
The Vector component ID.
component_kind required
The Vector component kind.
component_name required
Deprecated, use component_id instead. The value is the same as component_id.
component_type required
The Vector component type.
host required
The hostname of the system Vector is running on.
pid required
The process ID of the Vector instance.

Permissions

Platform: Amazon Web Services
Relevant policies
Policy: s3:HeadBucket
  Required for: healthcheck
Policy: s3:PutObject
  Required for: operation

How it works

AWS authentication

Vector checks for AWS credentials in the following order:

  1. The access_key_id and secret_access_key options.
  2. The AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY environment variables.
  3. The credential_process command in the AWS config file (usually located at ~/.aws/config).
  4. The AWS credentials file (usually located at ~/.aws/credentials).
  5. The IAM instance profile (only works if running on an EC2 instance with an instance profile/role).

If no credentials are found, Vector’s health check fails and an error is logged. If your AWS credentials expire, Vector will automatically search for up-to-date credentials in the places (and order) described above.
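
For example, static credentials can be supplied directly in the sink configuration via the auth options documented above. This is only a sketch; the key values are the illustrative examples from this page, and instance profiles/roles are preferred when available:

[sinks.my_sink_id]
type = "aws_s3"
inputs = [ "my-source-or-transform-id" ]
bucket = "my-bucket"
region = "us-east-1"
# Illustrative example credentials from this page, not real ones
auth.access_key_id = "AKIAIOSFODNN7EXAMPLE"
auth.secret_access_key = "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"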

Obtaining an access key

In general, we recommend using instance profiles/roles whenever possible. In cases where this is not possible, you can generate an AWS access key for any user within your AWS account. AWS provides a detailed guide on how to do this. Access keys created this way can be supplied via the auth.access_key_id and auth.secret_access_key options.

Assuming roles

Vector can assume an AWS IAM role via the assume_role option. This is an optional setting that is helpful for a variety of use cases, such as cross account access.
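
A minimal sketch, using the example ARN shown for the auth.assume_role option above:

[sinks.my_sink_id]
type = "aws_s3"
inputs = [ "my-source-or-transform-id" ]
bucket = "my-bucket"
region = "us-east-1"
# Assume this IAM role at startup (example ARN from the auth.assume_role option)
auth.assume_role = "arn:aws:iam::123456789098:role/my_role"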

Buffers and batches

This component buffers and batches data before writing it to S3. Vector treats buffering and batching as sink-specific concepts rather than global ones. This isolates sinks, ensuring that service disruptions are contained and delivery guarantees are honored.

Batches are flushed when 1 of 2 conditions are met:

  1. The batch age meets or exceeds the configured timeout_secs.
  2. The batch size meets or exceeds the configured max_bytes or max_events.

Buffers are controlled via the buffer.* options.
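
As a sketch, the batch and buffer options documented above can be combined like this (the values shown are the documented defaults and examples, not recommendations):

[sinks.my_sink_id]
type = "aws_s3"
inputs = [ "my-source-or-transform-id" ]
bucket = "my-bucket"
region = "us-east-1"
batch.max_bytes = 10000000    # flush once a batch reaches ~10 MB
batch.timeout_secs = 300      # or once a batch is 300 seconds old
buffer.type = "disk"          # durable buffer that survives restarts
buffer.max_size = 104900000   # maximum on-disk buffer size, in bytes
buffer.when_full = "block"    # apply back pressure instead of dropping events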

Cross account object writing

If you’re using Vector to write objects across AWS accounts, then you should consider setting the grant_full_control option to the bucket owner’s canonical user ID. AWS provides a full tutorial for this use case. If you don’t know the bucket owner’s canonical ID, you can find it by following this tutorial.
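
A sketch of that setup, using the example canonical user ID shown for the grant_full_control option above:

[sinks.my_sink_id]
type = "aws_s3"
inputs = [ "my-source-or-transform-id" ]
bucket = "my-bucket"
region = "us-east-1"
# Give the bucket owner's canonical user ID full control over the created objects
grant_full_control = "79a59df900b949e55d96a1e698fbacedfd6e09d98eacf8f8d5218e7cd47ef2be"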

Health checks

Health checks ensure that the downstream service is accessible and ready to accept data. This check is performed upon sink initialization. If the health check fails, an error will be logged and Vector will proceed to start.

Require health checks

If you’d like to exit immediately upon a health check failure, you can pass the --require-healthy flag:

vector --config /etc/vector/vector.toml --require-healthy

Disable health checks

If you’d like to disable health checks for this sink you can set the healthcheck option to false.
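
For example, using the healthcheck.enabled option documented above:

[sinks.my-sink]
  # Skip the S3 health check for this sink
  healthcheck.enabled = false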

Object Access Control List (ACL)

AWS S3 supports access control lists (ACL) for buckets and objects. In the context of Vector, only object ACLs are relevant (Vector does not create or modify buckets). You can set the object level ACL by using one of the acl, grant_full_control, grant_read, grant_read_acp, or grant_write_acp options.

acl.* vs grant_* options

The grant_* options name a specific entity to grant access to. The acl option, by contrast, selects one of a set of specific canned ACLs that can only name the owner or world.
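
As a sketch, using example values from the options above (you would normally pick one approach or the other):

[sinks.my_sink_id]
  # Canned ACL applied to each created object
  acl = "bucket-owner-full-control"
  # Or grant read access to a specific grantee instead:
  # grant_read = "person@email.com"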

Object naming

Vector uses two different naming schemes for S3 objects. If compression is enabled via the compression option (gzip, the default), Vector uses this scheme:

<key_prefix><timestamp>-<uuidv4>.log.gz

If compression isn’t enabled, Vector uses this scheme (only the file extension is different):

<key_prefix><timestamp>-<uuidv4>.log

Some sample S3 object names (with and without compression, respectively):

date=2019-06-18/1560886634-fddd7a0e-fad9-4f7e-9bce-00ae5debc563.log.gz
date=2019-06-18/1560886634-fddd7a0e-fad9-4f7e-9bce-00ae5debc563.log

Vector appends a UUIDv4 token to ensure there are no naming conflicts in the unlikely event that two Vector instances are writing data at the same time.

You can control the resulting name via the key_prefix, filename_time_format, and filename_append_uuid options.
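
A sketch of these naming options, shown with their documented defaults; with compression = "gzip" this yields object names like the first sample above:

[sinks.my_sink_id]
  key_prefix = "date=%F/"        # partitions objects under a date "folder"
  filename_time_format = "%s"    # Unix timestamp in the object name
  filename_append_uuid = true    # avoid collisions between concurrent writers
  filename_extension = "log"     # the .gz suffix comes from compression = "gzip"
  compression = "gzip"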

Object Tags & metadata

Vector currently only supports AWS S3 object tags and does not support object metadata. If you require metadata support, see issue #1694.

We believe tags are more flexible since they are separate from the actual S3 object. You can freely modify tags without modifying the object. Conversely, object metadata requires a full rewrite of the object to make changes.
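
As a sketch, assuming the tag-set is given as simple string key/value pairs (the tag names here are hypothetical):

[sinks.my_sink_id.tags]
  team = "ops"
  environment = "production"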

Partitioning

Vector supports dynamic configuration values through a simple template syntax. If an option supports templating, it will be noted with a badge and you can use event fields to create dynamic values. For example:

[sinks.my-sink]
dynamic_option = "application={{ application_id }}"

In the above example, the application_id for each event will be used to partition outgoing data.
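
For this sink, the same template syntax applies to key_prefix, for example using one of the templates listed above:

[sinks.my-sink]
  key_prefix = "application_id={{ application_id }}/date=%F/"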

Rate limits & adaptive concurrency

Adaptive Request Concurrency (ARC)

Adaptive Request Concurrency is a feature of Vector that does away with static concurrency limits and automatically optimizes HTTP concurrency based on downstream service responses. The underlying mechanism is a feedback loop inspired by TCP congestion control algorithms. Check out the announcement blog post for more details.

We highly recommend enabling this feature as it improves performance and reliability of Vector and the systems it communicates with. As such, we have made it the default, and no further configuration is required.

Static concurrency

If Adaptive Request Concurrency is not for you, you can manually set static concurrency limits by specifying an integer for request.concurrency:

[sinks.my-sink]
  request.concurrency = 10

Rate limits

In addition to limiting request concurrency, you can also limit the overall request throughput via the request.rate_limit_duration_secs and request.rate_limit_num options.

[sinks.my-sink]
  request.rate_limit_duration_secs = 1
  request.rate_limit_num = 10

These will apply to both adaptive and fixed request.concurrency values.

Retry policy

Vector will retry failed requests (status == 429, >= 500, and != 501). Other responses will not be retried. You can control the number of retry attempts and backoff rate with the request.retry_attempts and request.retry_backoff_secs options.
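
For example, a sketch that caps retries and raises the initial backoff, using the request.* options from the reference above (values are illustrative):

[sinks.my-sink]
  request.retry_attempts = 10              # cap retries instead of the effectively infinite default
  request.retry_initial_backoff_secs = 2   # wait 2 seconds before the first retry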

Server-Side Encryption (SSE)

AWS S3 offers server-side encryption. You can apply defaults at the bucket level or set the encryption at the object level. In the context of Vector, only the object level is relevant (Vector does not create or modify buckets), although we recommend setting defaults at the bucket level when possible. You can explicitly set the object-level encryption via the server_side_encryption option.
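
For example, a sketch enabling KMS-based encryption with the example key ID from the ssekms_key_id option above:

[sinks.my_sink_id]
  server_side_encryption = "aws:kms"
  ssekms_key_id = "abcd1234"   # example key ID from this page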

State

This component is stateless, meaning its behavior is consistent across each input.

Storage class

AWS S3 offers storage classes. You can apply defaults, and rules, at the bucket level or set the storage class at the object level. In the context of Vector, only the object level is relevant (Vector does not create or modify buckets). You can set the storage class via the storage_class option.
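
For example, a sketch selecting a lower-cost class for infrequently accessed data, using one of the documented enum values:

[sinks.my_sink_id]
  storage_class = "STANDARD_IA"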