
# Export metrics to AWS Kinesis Data Streams

## Prerequisites

To use AWS Kinesis for metric collection and processing, you should first install the AWS SDK for C++. Netdata works with SDK version 1.7.121. Other versions might work as well, but they have not been tested with Netdata. `libcrypto`, `libssl`, and `libcurl` are also required to compile Netdata with Kinesis support enabled. Next, Netdata should be reinstalled from source. The installer will detect that the required libraries are now available.
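
On a Debian or Ubuntu system, the required libraries and build tools can typically be installed from distribution packages; the package names below are an assumption and vary by distribution:

```bash
# Assumed package names for Debian/Ubuntu; adjust for your distribution.
# libssl-dev provides both libssl and libcrypto.
sudo apt-get install libssl-dev libcurl4-openssl-dev cmake g++
```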

If the AWS SDK for C++ is being installed from source, it is useful to set `-DBUILD_ONLY="kinesis"`. Otherwise, the build could take a very long time. Note that the default installation path for the libraries is `/usr/local/lib64`. Many Linux distributions don't include this path in their default library search path, so it is advisable to use the following options to `cmake` when building the AWS SDK:

```bash
cmake -DCMAKE_INSTALL_LIBDIR=/usr/lib -DCMAKE_INSTALL_INCLUDEDIR=/usr/include -DBUILD_SHARED_LIBS=OFF -DBUILD_ONLY=kinesis <aws-sdk-cpp sources>
```
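
An end-to-end build might look like the sketch below. The repository URL is the official `aws/aws-sdk-cpp` GitHub repository; the version tag is assumed to match the SDK version mentioned above:

```bash
# Sketch: fetch and build the AWS SDK for C++ with only the Kinesis client.
# The 1.7.121 tag matches the SDK version tested with Netdata; adjust if needed.
git clone --depth 1 --branch 1.7.121 https://github.com/aws/aws-sdk-cpp.git
mkdir aws-sdk-build && cd aws-sdk-build
cmake -DCMAKE_INSTALL_LIBDIR=/usr/lib -DCMAKE_INSTALL_INCLUDEDIR=/usr/include \
      -DBUILD_SHARED_LIBS=OFF -DBUILD_ONLY=kinesis ../aws-sdk-cpp
make
sudo make install
```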

## Configuration

To enable sending data to the Kinesis service, run `./edit-config exporting.conf` in the Netdata configuration directory and set the following options:

```conf
[kinesis:my_instance]
    enabled = yes
    destination = us-east-1
```

Set the `destination` option to an AWS region.

Set AWS credentials and stream name:

```conf
    # AWS credentials
    aws_access_key_id = your_access_key_id
    aws_secret_access_key = your_secret_access_key
    # destination stream
    stream name = your_stream_name
```

Alternatively, you can set AWS credentials for the `netdata` user using the AWS SDK for C++ standard methods.
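
For example, the SDK reads the standard shared credentials file. A minimal sketch is shown below, assuming the `netdata` user's home directory is `/var/lib/netdata`; the actual location depends on your installation:

```conf
# /var/lib/netdata/.aws/credentials (path is an assumption; use the netdata user's home directory)
[default]
aws_access_key_id = your_access_key_id
aws_secret_access_key = your_secret_access_key
```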

Netdata automatically computes a partition key for every record in order to distribute records evenly across the available shards.
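
Putting the options together, a complete section in `exporting.conf` might look like the following sketch; the region, credentials, and stream name are placeholders:

```conf
[kinesis:my_instance]
    enabled = yes
    destination = us-east-1
    # AWS credentials
    aws_access_key_id = your_access_key_id
    aws_secret_access_key = your_secret_access_key
    # destination stream
    stream name = your_stream_name
```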
