
Change "netdata" to "Netdata" in all docs (#6621)

* First pass of changing netdata to Netdata

* Second pass of netdata -> Netdata

* Starting work on netdata with no whitespace after

* Pass for netdata with no whitespace at the end

* Pass for netdata with no whitespace at the front
Joel Hans · 5 years ago · parent commit a726c905bd

+ 2 - 2
.travis/README.md

@@ -54,7 +54,7 @@ At this stage, basically, we build :-)
 We do a baseline check of our build artifacts to guarantee they are not broken
 Briefly our activities include:
 - Verify docker builds successfully
-- Run the standard netdata installer, to make sure we build & run properly
+- Run the standard Netdata installer, to make sure we build & run properly
 - Do the same through 'make dist', as this is our stable channel for our kickstart files
 
 ## Artifacts validation
@@ -66,7 +66,7 @@ Briefly we currently evaluate the following activities:
 - Basic software unit testing
 - Non containerized build and install on ubuntu 14.04
 - Non containerized build and install on ubuntu 18.04
-- Running the full netdata lifecycle (install, update, uninstall) on ubuntu 18.04
+- Running the full Netdata lifecycle (install, update, uninstall) on ubuntu 18.04
 - Build and install on CentOS 6
 - Build and install on CentOS 7
 (More to come)

+ 9 - 9
CONTRIBUTING.md

@@ -15,15 +15,15 @@ This is the minimum open-source users should contribute back to the projects the
 
 ### Spread the word
 
-Community growth allows the project to attract new talent willing to contribute. This talent is then developing new features and improves the project. These new features and improvements attract more users and so on. It is a loop. So, post about netdata, present it to local meetups you attend, let your online social network or twitter, facebook, reddit, etc. know you are using it. **The more people involved, the faster the project evolves**.
+Community growth allows the project to attract new talent willing to contribute. This talent is then developing new features and improves the project. These new features and improvements attract more users and so on. It is a loop. So, post about Netdata, present it to local meetups you attend, let your online social network or twitter, facebook, reddit, etc. know you are using it. **The more people involved, the faster the project evolves**.
 
 ### Provide feedback
 
-Is there anything that bothers you about netdata? Did you experience an issue while installing it or using it? Would you like to see it evolve to you need? Let us know. [Open a github issue](https://github.com/netdata/netdata/issues) to discuss it. Feedback is very important for open-source projects. We can't commit we will do everything, but your feedback influences our road-map significantly. **We rely on your feedback to make Netdata better**.
+Is there anything that bothers you about Netdata? Did you experience an issue while installing it or using it? Would you like to see it evolve to you need? Let us know. [Open a github issue](https://github.com/netdata/netdata/issues) to discuss it. Feedback is very important for open-source projects. We can't commit we will do everything, but your feedback influences our road-map significantly. **We rely on your feedback to make Netdata better**.
 
 ### Translate some documentation
 
-The [netdata localization project](https://github.com/netdata/localization) contains instructions on how to provide translations for parts of our documentation. Translating the entire documentation is a daunting task, but you can contribute as much as you like, even a single file. The Chinese translation effort has already begun and we are looking forward to more contributions.
+The [Netdata localization project](https://github.com/netdata/localization) contains instructions on how to provide translations for parts of our documentation. Translating the entire documentation is a daunting task, but you can contribute as much as you like, even a single file. The Chinese translation effort has already begun and we are looking forward to more contributions.
 
 ### Sponsor a part of Netdata
 
@@ -57,7 +57,7 @@ Netdata delivers alarms via various [notification methods](health/notifications)
 
 ### Help other users
 
-As the project grows, an increasing share of our time is spent on supporting this community of users in terms of answering questions, of helping users understand how netdata works and find their way with it. Helping other users is crucial. It allows the developers and maintainers of the project to focus on improving it.
+As the project grows, an increasing share of our time is spent on supporting this community of users in terms of answering questions, of helping users understand how Netdata works and find their way with it. Helping other users is crucial. It allows the developers and maintainers of the project to focus on improving it.
 
 ### Improve documentation
 
@@ -80,11 +80,11 @@ Of course we appreciate contributions for any other part of the NetData agent, i
 
 #### Code of Conduct and CLA
 
-We expect all contributors to abide by the [Contributor Covenant Code of Conduct](CODE_OF_CONDUCT.md). For a pull request to be accepted, you will also need to accept the [netdata contributors license agreement](CONTRIBUTORS.md), as part of the PR process.
+We expect all contributors to abide by the [Contributor Covenant Code of Conduct](CODE_OF_CONDUCT.md). For a pull request to be accepted, you will also need to accept the [Netdata contributors license agreement](CONTRIBUTORS.md), as part of the PR process.
 
 #### Performance and efficiency
 
-Everything on Netdata is about efficiency. We need netdata to always be the most lightweight monitoring solution available. We will reject to merge PRs that are not optimal in resource utilization and efficiency.
+Everything on Netdata is about efficiency. We need Netdata to always be the most lightweight monitoring solution available. We will reject to merge PRs that are not optimal in resource utilization and efficiency.
 
 Of course there are cases that such technical excellence is either not reasonable or not feasible. In these cases, we may require the feature or code submitted to be by disabled by default.
 
@@ -92,9 +92,9 @@ Of course there are cases that such technical excellence is either not reasonabl
 
 Unlike other monitoring solutions, Netdata requires all metrics collected to have some structure attached to them. So, Netdata metrics have a name, units, belong to a chart that has a title, a family, a context, belong to an application, etc.
 
-This structure is what makes netdata different. Most other monitoring solution collect bulk metrics in terms of name-value pairs and then expect their users to give meaning to these metrics during visualization. This does not work. It is neither practical nor reasonable to give to someone 2000 metrics and let him/her visualize them in a meaningful way.
+This structure is what makes Netdata different. Most other monitoring solution collect bulk metrics in terms of name-value pairs and then expect their users to give meaning to these metrics during visualization. This does not work. It is neither practical nor reasonable to give to someone 2000 metrics and let him/her visualize them in a meaningful way.
 
-So, netdata requires all metrics to have a meaning at the time they are collected.  We will reject to merge PRs that loosely collect just a "bunch of metrics", but we are very keen to help you fix this.
+So, Netdata requires all metrics to have a meaning at the time they are collected.  We will reject to merge PRs that loosely collect just a "bunch of metrics", but we are very keen to help you fix this.
 
 #### Automated Testing
 
@@ -106,7 +106,7 @@ Of course, manual testing is always required.
 
 #### Netdata is a distributed application
 
-Netdata is a distributed monitoring application. A few basic features can become quite complicated for such applications. We may reject features that alter or influence the nature of netdata, though we usually discuss the requirements with contributors and help them adapt their code to be better suited for Netdata.
+Netdata is a distributed monitoring application. A few basic features can become quite complicated for such applications. We may reject features that alter or influence the nature of Netdata, though we usually discuss the requirements with contributors and help them adapt their code to be better suited for Netdata.
 
 #### Operating systems supported
 

+ 11 - 11
CONTRIBUTORS.md

@@ -2,9 +2,9 @@
 SPDX-License-Identifier: GPL-3.0-or-later
 -->
 
-# netdata contributors license agreement
+# Netdata contributors license agreement
 
-**Thank you for contributing to netdata!**
+**Thank you for contributing to Netdata!**
 
 This agreement is part of the legal framework of the open-source ecosystem
 that adds some red tape, but protects both the contributor and the project.
@@ -17,22 +17,22 @@ contributions for any other purpose.
 
 ## copyright license
 
-The Contributor (*you*) grants netdata Inc. a perpetual, worldwide, non-exclusive,
+The Contributor (*you*) grants Netdata Inc. a perpetual, worldwide, non-exclusive,
 no-charge, royalty-free, irrevocable copyright license to reproduce,
 prepare derivative works of, publicly display, publicly perform, sublicense,
 and distribute his contributions and such derivative works.
 
 ## copyright transfer
 
-The Contributor (*you*) hereby assigns netdata Inc. copyright in his
+The Contributor (*you*) hereby assigns Netdata Inc. copyright in his
 contributions, to be licensed under the same terms as the rest of the code.
 
-> *Note: this means we may re-license netdata (your contributions included)
+> *Note: this means we may re-license Netdata (your contributions included)
 > any way we see fit, without asking your permission. 
-> We intend to keep the netdata agent forever FOSS.
+> We intend to keep the Netdata agent forever FOSS.
 > But open-source licenses have significant differences and in our attempt to
-> help netdata grow we may have to distribute it under a different license.
-> For example, CNCF, the Cloud Native Computing Foundation, requires netdata
+> help Netdata grow we may have to distribute it under a different license.
+> For example, CNCF, the Cloud Native Computing Foundation, requires Netdata
 > to be licensed under Apache-2.0 for it to be accepted as a member of the
 > Foundation. We want to be free to do it.*
 
@@ -43,9 +43,9 @@ original creation and that he is legally entitled to grant the above license.
 
 > *Note: if you are committing third party code, please make sure the third party
 > license or any other restrictions are also included with your commits.
-> netdata includes many third party libraries and tools and this is not a
+> Netdata includes many third party libraries and tools and this is not a
 > problem, provided that the license of the third party code is compatible with
-> the one we use for netdata.*
+> the one we use for Netdata.*
 
 ## signature
 
@@ -66,7 +66,7 @@ are subject to this agreement.
 > 1. add your github username and name in this file
 > 2. commit it to the repo with a PR, using the same github username, or include this change in your first PR.
 
-# netdata contributors
+# Netdata contributors
 
 This is the list of contributors that have signed this agreement:
 

+ 4 - 4
README.md

@@ -154,17 +154,17 @@ not just visualize metrics.
 
 Release v1.16.0 contains 40 bug fixes, 31 improvements and 20 documentation updates
 
-**Binary distributions.** To improve the security, speed and reliability of new netdata installations, we are delivering our own, industry standard installation method, with binary package distributions. The RPM binaries for the most common OSs are already available on packagecloud and we’ll have the DEB ones available very soon. All distributions are considered in Beta and, as always, we depend on our amazing community for feedback on improvements.
+**Binary distributions.** To improve the security, speed and reliability of new Netdata installations, we are delivering our own, industry standard installation method, with binary package distributions. The RPM binaries for the most common OSs are already available on packagecloud and we’ll have the DEB ones available very soon. All distributions are considered in Beta and, as always, we depend on our amazing community for feedback on improvements.
 
  - Our stable distributions are at [netdata/netdata @ packagecloud.io](https://packagecloud.io/netdata/netdata)
  - The nightly builds are at [netdata/netdata-edge @ packagecloud.io](https://packagecloud.io/netdata/netdata-edge)
 
 **Netdata now supports TLS encryption!** You can secure the communication to the [web server](https://docs.netdata.cloud/web/server/#enabling-tls-support), the [streaming connections from slaves to the master](https://docs.netdata.cloud/streaming/#securing-the-communication) and the connection to an [openTSDB backend](https://docs.netdata.cloud/backends/opentsdb/#https). 
 
-**This version also brings two long-awaited features to netdata’s health monitoring:**
+**This version also brings two long-awaited features to Netdata’s health monitoring:**
 
- - The [health management API](https://docs.netdata.cloud/web/api/health/#health-management-api) introduced in v1.12 allowed you to easily disable alarms and/or notifications while netdata was running. However, those changes were not persisted across netdata restarts. Since part of routine maintenance activities may involve completely restarting a monitoring node, netdata now saves these configurations to disk, every time you issue a command to change the silencer settings. The new [LIST command](https://docs.netdata.cloud/web/api/health/#list-silencers) of the API allows you to view at any time which alarms are currently disabled or silenced.
- - A way for netdata to [repeatedly send alarm notifications](https://docs.netdata.cloud/health/#alarm-line-repeat) for some, or all active alarms, at a frequency of your choosing. As a result, you will no longer have to worry about missing a notification, forgetting about a raised alarm. The default is still to only send a single notification, so that existing users are not surprised by a different behavior.  
+ - The [health management API](https://docs.netdata.cloud/web/api/health/#health-management-api) introduced in v1.12 allowed you to easily disable alarms and/or notifications while Netdata was running. However, those changes were not persisted across Netdata restarts. Since part of routine maintenance activities may involve completely restarting a monitoring node, Netdata now saves these configurations to disk, every time you issue a command to change the silencer settings. The new [LIST command](https://docs.netdata.cloud/web/api/health/#list-silencers) of the API allows you to view at any time which alarms are currently disabled or silenced.
+ - A way for Netdata to [repeatedly send alarm notifications](https://docs.netdata.cloud/health/#alarm-line-repeat) for some, or all active alarms, at a frequency of your choosing. As a result, you will no longer have to worry about missing a notification, forgetting about a raised alarm. The default is still to only send a single notification, so that existing users are not surprised by a different behavior.  
 
 As always, we’ve introduced new collectors, 5 of them this time:
 

+ 4 - 4
REDISTRIBUTED.md

@@ -1,16 +1,16 @@
 # Redistributed software
 
-netdata copyright info:
+Netdata copyright info:
  Copyright 2016-2018, Costa Tsaousis.
  Copyright 2018, Netdata Inc.
  Released under [GPL v3 or later](LICENSE).
 
-netdata uses SPDX license tags to identify the license for its files.
+Netdata uses SPDX license tags to identify the license for its files.
 Individual licenses referenced in the tags are available on the [SPDX project site](http://spdx.org/licenses/).
 
-netdata redistributes the following third-party software.
+Netdata redistributes the following third-party software.
 We have decided to redistribute all these, instead of using them
-through a CDN, to allow netdata to work in cases where Internet
+through a CDN, to allow Netdata to work in cases where Internet
 connectivity is not available.
 
 - [Dygraphs](http://dygraphs.com/)

+ 37 - 37
backends/README.md

@@ -1,15 +1,15 @@
 # Metrics long term archiving
 
-netdata supports backends for archiving the metrics, or providing long term dashboards,
+Netdata supports backends for archiving the metrics, or providing long term dashboards,
 using Grafana or other tools, like this:
 
 ![image](https://cloud.githubusercontent.com/assets/2662304/20649711/29f182ba-b4ce-11e6-97c8-ab2c0ab59833.png)
 
-Since netdata collects thousands of metrics per server per second, which would easily congest any backend
-server when several netdata servers are sending data to it, netdata allows sending metrics at a lower
+Since Netdata collects thousands of metrics per server per second, which would easily congest any backend
+server when several Netdata servers are sending data to it, Netdata allows sending metrics at a lower
 frequency, by resampling them.
 
-So, although netdata collects metrics every second, it can send to the backend servers averages or sums every
+So, although Netdata collects metrics every second, it can send to the backend servers averages or sums every
 X seconds (though, it can send them per second if you need it to).
 
 ## features
@@ -30,7 +30,7 @@ X seconds (though, it can send them per second if you need it to).
 
      metrics are sent to a document db, `JSON` formatted.
 
-   - **prometheus** is described at [prometheus page](prometheus/) since it pulls data from netdata.
+   - **prometheus** is described at [prometheus page](prometheus/) since it pulls data from Netdata.
 
    - **prometheus remote write** (a binary snappy-compressed protocol buffer encoding over HTTP used by
      **Elasticsearch**, **Gnocchi**, **Graphite**, **InfluxDB**, **Kafka**, **OpenTSDB**,
@@ -54,26 +54,26 @@ X seconds (though, it can send them per second if you need it to).
    So, counters are sent as counters and gauges are sent as gauges, much like all data collectors do.
    For example, to calculate CPU utilization in this format, you need to know how to convert kernel ticks to percentage.
 
-   - `average` sends to backends normalized metrics from the netdata database.
-   In this mode, all metrics are sent as gauges, in the units netdata uses. This abstracts data collection
+   - `average` sends to backends normalized metrics from the Netdata database.
+   In this mode, all metrics are sent as gauges, in the units Netdata uses. This abstracts data collection
    and simplifies visualization, but you will not be able to copy and paste queries from other sources to convert units.
-   For example, CPU utilization percentage is calculated by netdata, so netdata will convert ticks to percentage and
+   For example, CPU utilization percentage is calculated by Netdata, so Netdata will convert ticks to percentage and
    send the average percentage to the backend.
 
-   - `sum` or `volume`: the sum of the interpolated values shown on the netdata graphs is sent to the backend.
-   So, if netdata is configured to send data to the backend every 10 seconds, the sum of the 10 values shown on the
-   netdata charts will be used.
+   - `sum` or `volume`: the sum of the interpolated values shown on the Netdata graphs is sent to the backend.
+   So, if Netdata is configured to send data to the backend every 10 seconds, the sum of the 10 values shown on the
+   Netdata charts will be used.
 
 Time-series databases suggest to collect the raw values (`as-collected`). If you plan to invest on building your monitoring around a time-series database and you already know (or you will invest in learning) how to convert units and normalize the metrics in Grafana or other visualization tools, we suggest to use `as-collected`.
 
-If, on the other hand, you just need long term archiving of netdata metrics and you plan to mainly work with netdata, we suggest to use `average`. It decouples visualization from data collection, so it will generally be a lot simpler. Furthermore, if you use `average`, the charts shown in the back-end will match exactly what you see in Netdata, which is not necessarily true for the other modes of operation.
+If, on the other hand, you just need long term archiving of Netdata metrics and you plan to mainly work with Netdata, we suggest to use `average`. It decouples visualization from data collection, so it will generally be a lot simpler. Furthermore, if you use `average`, the charts shown in the back-end will match exactly what you see in Netdata, which is not necessarily true for the other modes of operation.
 
-5. This code is smart enough, not to slow down netdata, independently of the speed of the backend server.
+5. This code is smart enough, not to slow down Netdata, independently of the speed of the backend server.
 
 ## configuration
 
 In `/etc/netdata/netdata.conf` you should have something like this (if not download the latest version
-of `netdata.conf` from your netdata):
+of `netdata.conf` from your Netdata):
 
 ```
 [backend]
@@ -82,7 +82,7 @@ of `netdata.conf` from your netdata):
     host tags = list of TAG=VALUE
     destination = space separated list of [PROTOCOL:]HOST[:PORT] - the first working will be used, or a region for kinesis
     data source = average | sum | as collected
-    prefix = netdata
+    prefix = Netdata
     hostname = my-name
     update every = 10
     buffer on failures = 10
@@ -122,13 +122,13 @@ of `netdata.conf` from your netdata):
    destination = [ffff:...:0001]:2003 10.11.12.1:2003
 ```
 
-   When multiple servers are defined, netdata will try the next one when the first one fails. This allows
-   you to load-balance different servers: give your backend servers in different order on each netdata.
+   When multiple servers are defined, Netdata will try the next one when the first one fails. This allows
+   you to load-balance different servers: give your backend servers in different order on each Netdata.
 
-   netdata also ships [`nc-backend.sh`](nc-backend.sh),
+   Netdata also ships [`nc-backend.sh`](nc-backend.sh),
    a script that can be used as a fallback backend to save the metrics to disk and push them to the
    time-series database when it becomes available again. It can also be used to monitor / trace / debug
-   the metrics netdata generates.
+   the metrics Netdata generates.
 
    For kinesis backend `destination` should be set to an AWS region (for example, `us-east-1`).
 
@@ -138,16 +138,16 @@ of `netdata.conf` from your netdata):
 - `hostname = my-name`, is the hostname to be used for sending data to the backend server. By default
    this is `[global].hostname`.
 
-- `prefix = netdata`, is the prefix to add to all metrics.
+- `prefix = Netdata`, is the prefix to add to all metrics.
 
-- `update every = 10`, is the number of seconds between sending data to the backend. netdata will add
-   some randomness to this number, to prevent stressing the backend server when many netdata servers send
+- `update every = 10`, is the number of seconds between sending data to the backend. Netdata will add
+   some randomness to this number, to prevent stressing the backend server when many Netdata servers send
    data to the same backend. This randomness does not affect the quality of the data, only the time they
    are sent.
 
 - `buffer on failures = 10`, is the number of iterations (each iteration is `[backend].update every` seconds)
    to buffer data, when the backend is not available. If the backend fails to receive the data after that
-   many failures, data loss on the backend is expected (netdata will also log it).
+   many failures, data loss on the backend is expected (Netdata will also log it).
 
 - `timeout ms = 20000`, is the timeout in milliseconds to wait for the backend server to process the data.
    By default this is `2 * update_every * 1000`.
@@ -155,7 +155,7 @@ of `netdata.conf` from your netdata):
 - `send hosts matching = localhost *` includes one or more space separated patterns, using ` * ` as wildcard
    (any number of times within each pattern). The patterns are checked against the hostname (the localhost
    is always checked as `localhost`), allowing us to filter which hosts will be sent to the backend when
-   this netdata is a central netdata aggregating multiple hosts. A pattern starting with ` ! ` gives a
+   this Netdata is a central Netdata aggregating multiple hosts. A pattern starting with ` ! ` gives a
    negative match. So to match all hosts named `*db*` except hosts containing `*slave*`, use
    `!*slave* *db*` (so, the order is important: the first pattern matching the hostname will be used - positive
    or negative).
@@ -166,8 +166,8 @@ of `netdata.conf` from your netdata):
    except charts ending in `*reads`, use `!*reads apps.*` (so, the order is important: the first pattern
    matching the chart id or the chart name will be used - positive or negative).
 
-- `send names instead of ids = yes | no` controls the metric names netdata should send to backend.
-   netdata supports names and IDs for charts and dimensions. Usually IDs are unique identifiers as read
+- `send names instead of ids = yes | no` controls the metric names Netdata should send to backend.
+   Netdata supports names and IDs for charts and dimensions. Usually IDs are unique identifiers as read
    by the system and names are human friendly labels (also unique). Most charts and metrics have the same
    ID and name, but in several cases they are different: disks with device-mapper, interrupts, QoS classes,
    statsd synthetic charts, etc.
@@ -176,26 +176,26 @@ of `netdata.conf` from your netdata):
    These are currently only sent to opentsdb and prometheus. Please use the appropriate format for each
    time-series db. For example opentsdb likes them like `TAG1=VALUE1 TAG2=VALUE2`, but prometheus like
    `tag1="value1",tag2="value2"`. Host tags are mirrored with database replication (streaming of metrics
-   between netdata servers).
+   between Netdata servers).
 
 ## monitoring operation
 
-netdata provides 5 charts:
+Netdata provides 5 charts:
 
-1. **Buffered metrics**, the number of metrics netdata added to the buffer for dispatching them to the
+1. **Buffered metrics**, the number of metrics Netdata added to the buffer for dispatching them to the
    backend server.
 
-2. **Buffered data size**, the amount of data (in KB) netdata added the buffer.
+2. **Buffered data size**, the amount of data (in KB) Netdata added the buffer.
 
-3. ~~**Backend latency**, the time the backend server needed to process the data netdata sent.
+3. ~~**Backend latency**, the time the backend server needed to process the data Netdata sent.
    If there was a re-connection involved, this includes the connection time.~~
-   (this chart has been removed, because it only measures the time netdata needs to give the data
-   to the O/S - since the backend servers do not ack the reception, netdata does not have any means
+   (this chart has been removed, because it only measures the time Netdata needs to give the data
+   to the O/S - since the backend servers do not ack the reception, Netdata does not have any means
    to measure this properly).
 
-4. **Backend operations**, the number of operations performed by netdata.
+4. **Backend operations**, the number of operations performed by Netdata.
 
-5. **Backend thread CPU usage**, the CPU resources consumed by the netdata thread, that is responsible
+5. **Backend thread CPU usage**, the CPU resources consumed by the Netdata thread, that is responsible
    for sending the metrics to the backend server.
 
 ![image](https://cloud.githubusercontent.com/assets/2662304/20463536/eb196084-af3d-11e6-8ee5-ddbd3b4d8449.png)
@@ -204,12 +204,12 @@ netdata provides 5 charts:
 
 The latest version of the alarms configuration for monitoring the backend is [here](../health/health.d/backend.conf)
 
-netdata adds 4 alarms:
+Netdata adds 4 alarms:
 
 1. `backend_last_buffering`, number of seconds since the last successful buffering of backend data
 2. `backend_metrics_sent`, percentage of metrics sent to the backend server
 3. `backend_metrics_lost`, number of metrics lost due to repeating failures to contact the backend server
-4. ~~`backend_slow`, the percentage of time between iterations needed by the backend time to process the data sent by netdata~~ (this was misleading and has been removed).
+4. ~~`backend_slow`, the percentage of time between iterations needed by the backend time to process the data sent by Netdata~~ (this was misleading and has been removed).
 
 ![image](https://cloud.githubusercontent.com/assets/2662304/20463779/a46ed1c2-af43-11e6-91a5-07ca4533cac3.png)
 
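Taken together, the `[backend]` options discussed in the hunks above form one small configuration block. A minimal sketch for a Graphite destination follows; the IP address, port, and chosen values are illustrative assumptions, not values taken from the commit:

```
[backend]
    # send normalized gauges, as recommended when Netdata is the primary UI
    enabled = yes
    type = graphite
    destination = 10.11.12.1:2003
    data source = average
    # resample to one data point every 10 seconds
    update every = 10
    # keep up to 10 iterations in memory if the backend is unreachable
    buffer on failures = 10
```

With `data source = average`, the charts on the Graphite side should match what the Netdata dashboard shows, at the cost of losing the raw as-collected counter values.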

+ 3 - 3
backends/WALKTHROUGH.md

@@ -41,7 +41,7 @@ visibility into your application and systems performance.
 
 ## Getting Started - Netdata
 To begin let’s create our container which we will install Netdata on. We need
-to run a container, forward the necessary port that netdata listens on, and
+to run a container, forward the necessary port that Netdata listens on, and
 attach a tty so we can interact with the bash shell on the container. But
 before we do this we want name resolution between the two containers to work.
 In order to accomplish this we will create a user-defined network and attach
@@ -68,7 +68,7 @@ be sitting inside the shell of the container.
 
 After we have entered the shell we can install Netdata. This process could not
 be easier. If you take a look at [this link](../packaging/installer/#installation), the Netdata devs give us
-several one-liners to install netdata. I have not had any issues with these one
+several one-liners to install Netdata. I have not had any issues with these one
 liners and their bootstrapping scripts so far (If you guys run into anything do
 share). Run the following command in your container.
 
@@ -97,7 +97,7 @@ Netdata dashboard.
 ![](https://github.com/ldelossa/NetdataTutorial/raw/master/Screen%20Shot%202017-07-28%20at%204.00.45%20PM.png)
 
 This CHART is called ‘system.cpu’, The FAMILY is cpu, and the DIMENSION we are
-observing is “system”. You can begin to draw links between the charts in netdata
+observing is “system”. You can begin to draw links between the charts in Netdata
 to the prometheus metrics format in this manner.
 
 ## Prometheus

+ 4 - 4
backends/aws_kinesis/README.md

@@ -1,8 +1,8 @@
-# Using netdata with AWS Kinesis Data Streams
+# Using Netdata with AWS Kinesis Data Streams
 
 ## Prerequisites
 
-To use AWS Kinesis as a backend AWS SDK for C++ should be [installed](https://docs.aws.amazon.com/en_us/sdk-for-cpp/v1/developer-guide/setup.html) first. `libcrypto`, `libssl`, and `libcurl` are also required to compile netdata with Kinesis support enabled. Next, netdata should be re-installed from the source. The installer will detect that the required libraries are now available.
+To use AWS Kinesis as a backend AWS SDK for C++ should be [installed](https://docs.aws.amazon.com/en_us/sdk-for-cpp/v1/developer-guide/setup.html) first. `libcrypto`, `libssl`, and `libcurl` are also required to compile Netdata with Kinesis support enabled. Next, Netdata should be re-installed from the source. The installer will detect that the required libraries are now available.
 
 If the AWS SDK for C++ is being installed from source, it is useful to set `-DBUILD_ONLY="kinesis"`. Otherwise, the building process could take a very long time. Take a note, that the default installation path for the libraries is `/usr/local/lib64`. Many Linux distributions don't include this path as the default one for a library search, so it is advisable to use the following options to `cmake` while building the AWS SDK:
 
@@ -21,7 +21,7 @@ To enable data sending to the kinesis backend set the following options in `netd
 ```
 set the `destination` option to an AWS region.
 
-In the netdata configuration directory run `./edit-config aws_kinesis.conf` and set AWS credentials and stream name:
+In the Netdata configuration directory run `./edit-config aws_kinesis.conf` and set AWS credentials and stream name:
 ```
 # AWS credentials
 aws_access_key_id = your_access_key_id
@@ -32,7 +32,7 @@ stream name = your_stream_name
 ```
 Alternatively, AWS credentials can be set for the *netdata* user using AWS SDK for C++ [standard methods](https://docs.aws.amazon.com/sdk-for-cpp/v1/developer-guide/credentials.html).
 
-A partition key for every record is computed automatically by the netdata with the purpose to distribute records across available shards evenly.
+A partition key for every record is computed automatically by Netdata with the purpose to distribute records across available shards evenly.
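The even distribution mentioned above can be sketched as follows. This is a hypothetical illustration of how rotating partition keys spread records across shards, not Netdata's actual implementation:

```python
import hashlib

def partition_key(record_index):
    """Derive a rotating partition key so records spread evenly.

    Kinesis maps each partition key through an MD5 hash to choose a
    shard, so cycling through many distinct keys approximates a
    round-robin distribution across the available shards.
    """
    # 1024 rotating buckets is an arbitrary choice for this sketch.
    return hashlib.md5(str(record_index % 1024).encode()).hexdigest()
```

Consecutive records get distinct keys, so they hash to different points of the shard key space.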
 
 
 [![analytics](https://www.google-analytics.com/collect?v=1&aip=1&t=pageview&_s=1&ds=github&dr=https%3A%2F%2Fgithub.com%2Fnetdata%2Fnetdata&dl=https%3A%2F%2Fmy-netdata.io%2Fgithub%2Fbackends%2Faws_kinesis%2FREADME&_u=MAC~&cid=5792dfd7-8dc4-476b-af31-da2fdb9f93d2&tid=UA-64295674-3)]()

+ 40 - 40
backends/prometheus/README.md

@@ -1,32 +1,32 @@
-# Using netdata with Prometheus
+# Using Netdata with Prometheus
 
-> IMPORTANT: the format netdata sends metrics to prometheus has changed since netdata v1.7. The new prometheus backend for netdata supports a lot more features and is aligned to the development of the rest of the netdata backends.
+> IMPORTANT: the format Netdata sends metrics to prometheus has changed since Netdata v1.7. The new prometheus backend for Netdata supports a lot more features and is aligned to the development of the rest of the Netdata backends.
 
-Prometheus is a distributed monitoring system which offers a very simple setup along with a robust data model. Recently netdata added support for Prometheus. I'm going to quickly show you how to install both netdata and prometheus on the same server. We can then use grafana pointed at Prometheus to obtain long term metrics netdata offers. I'm assuming we are starting at a fresh ubuntu shell (whether you'd like to follow along in a VM or a cloud instance is up to you).
+Prometheus is a distributed monitoring system which offers a very simple setup along with a robust data model. Recently Netdata added support for Prometheus. I'm going to quickly show you how to install both Netdata and prometheus on the same server. We can then use grafana pointed at Prometheus to obtain long term metrics Netdata offers. I'm assuming we are starting at a fresh ubuntu shell (whether you'd like to follow along in a VM or a cloud instance is up to you).
 
 
-## Installing netdata and prometheus
+## Installing Netdata and prometheus
 
-### Installing netdata
+### Installing Netdata
 
-There are number of ways to install netdata according to [Installation](../../packaging/installer/#installation)  
-The suggested way of installing the latest netdata and keep it upgrade automatically. Using one line installation:
+There are a number of ways to install Netdata according to [Installation](../../packaging/installer/#installation)  
+The suggested way is to install the latest Netdata and keep it upgraded automatically, using the one-line installation:
 
 ```
 bash <(curl -Ss https://my-netdata.io/kickstart.sh)
 ```
 
-At this point we should have netdata listening on port 19999. Attempt to take your browser here:
+At this point we should have Netdata listening on port 19999. Attempt to take your browser here:
 
 ```
 http://your.netdata.ip:19999
 ```
 
-*(replace `your.netdata.ip` with the IP or hostname of the server running netdata)*
+*(replace `your.netdata.ip` with the IP or hostname of the server running Netdata)*
 
 ### Installing Prometheus
 
-In order to install prometheus we are going to introduce our own systemd startup script along with an example of prometheus.yaml configuration. Prometheus needs to be pointed to your server at a specific target url for it to scrape netdata's api. Prometheus is always a pull model meaning netdata is the passive client within this architecture. Prometheus always initiates the connection with netdata.
+In order to install prometheus we are going to introduce our own systemd startup script along with an example of prometheus.yaml configuration. Prometheus needs to be pointed to your server at a specific target url for it to scrape Netdata's api. Prometheus is always a pull model meaning Netdata is the passive client within this architecture. Prometheus always initiates the connection with Netdata.
 
 #### Download Prometheus
 
@@ -57,7 +57,7 @@ sudo tar -xvf /tmp/prometheus-2.3.2.linux-amd64.tar.gz -C /opt/prometheus --stri
 
 We will use the following `prometheus.yml` file. Save it at `/opt/prometheus/prometheus.yml`.
 
-Make sure to replace `your.netdata.ip` with the IP or hostname of the host running netdata. 
+Make sure to replace `your.netdata.ip` with the IP or hostname of the host running Netdata. 
 
 ```yaml
 # my global config
@@ -101,7 +101,7 @@ scrape_configs:
       #source: [as-collected]
       #
       # server name for this prometheus - the default is the client IP
-      # for netdata to uniquely identify it
+      # for Netdata to uniquely identify it
       #server: ['prometheus1']
     honor_labels: true
 
@@ -180,21 +180,21 @@ sudo systemctl enable prometheus
 
 Prometheus should now start and listen on port 9090. Attempt to head there with your browser. 
 
-If everything is working correctly when you fetch `http://your.prometheus.ip:9090` you will see a 'Status' tab. Click this and click on 'targets' We should see the netdata host as a scraped target. 
+If everything is working correctly when you fetch `http://your.prometheus.ip:9090` you will see a 'Status' tab. Click this and click on 'targets' We should see the Netdata host as a scraped target. 
 
 ---
 
 ## Netdata support for prometheus
 
-> IMPORTANT: the format netdata sends metrics to prometheus has changed since netdata v1.6. The new format allows easier queries for metrics and supports both `as collected` and normalized metrics.
+> IMPORTANT: the format Netdata sends metrics to prometheus has changed since Netdata v1.6. The new format allows easier queries for metrics and supports both `as collected` and normalized metrics.
 
-Before explaining the changes, we have to understand the key differences between netdata and prometheus.
+Before explaining the changes, we have to understand the key differences between Netdata and prometheus.
 
-### understanding netdata metrics
+### understanding Netdata metrics
 
 ##### charts
 
-Each chart in netdata has several properties (common to all its metrics):
+Each chart in Netdata has several properties (common to all its metrics):
 
 - `chart_id` - uniquely identifies a chart.
 
@@ -208,32 +208,32 @@ Each chart in netdata has several properties (common to all its metrics):
 
 ##### dimensions
 
-Then each netdata chart contains metrics called `dimensions`. All the dimensions of a chart have the same units of measurement, and are contextually in the same category (ie. the metrics for disk bandwidth are `read` and `write` and they are both in the same chart).
+Then each Netdata chart contains metrics called `dimensions`. All the dimensions of a chart have the same units of measurement, and are contextually in the same category (ie. the metrics for disk bandwidth are `read` and `write` and they are both in the same chart).
 
-### netdata data source
+### Netdata data source
 
 Netdata can send metrics to prometheus from 3 data sources:
 
-- `as collected` or `raw` - this data source sends the metrics to prometheus as they are collected. No conversion is done by netdata. The latest value for each metric is just given to prometheus. This is the most preferred method by prometheus, but it is also the harder to work with. To work with this data source, you will need to understand how to get meaningful values out of them.
+- `as collected` or `raw` - this data source sends the metrics to prometheus as they are collected. No conversion is done by Netdata. The latest value for each metric is just given to prometheus. This is the most preferred method by prometheus, but it is also the harder to work with. To work with this data source, you will need to understand how to get meaningful values out of them.
 
    The format of the metrics is: `CONTEXT{chart="CHART",family="FAMILY",dimension="DIMENSION"}`.
 
-   If the metric is a counter (`incremental` in netdata lingo), `_total` is appended the context.
+   If the metric is a counter (`incremental` in Netdata lingo), `_total` is appended to the context.
 
-   Unlike prometheus, netdata allows each dimension of a chart to have a different algorithm and conversion constants (`multiplier` and `divisor`). In this case, that the dimensions of a charts are heterogeneous, netdata will use this format: `CONTEXT_DIMENSION{chart="CHART",family="FAMILY"}`
+   Unlike prometheus, Netdata allows each dimension of a chart to have a different algorithm and conversion constants (`multiplier` and `divisor`). In this case, where the dimensions of a chart are heterogeneous, Netdata will use this format: `CONTEXT_DIMENSION{chart="CHART",family="FAMILY"}`
 
-- `average` - this data source uses the netdata database to send the metrics to prometheus as they are presented on the netdata dashboard. So, all the metrics are sent as gauges, at the units they are presented in the netdata dashboard charts. This is the easiest to work with.
+- `average` - this data source uses the Netdata database to send the metrics to prometheus as they are presented on the Netdata dashboard. So, all the metrics are sent as gauges, at the units they are presented in the Netdata dashboard charts. This is the easiest to work with.
 
    The format of the metrics is: `CONTEXT_UNITS_average{chart="CHART",family="FAMILY",dimension="DIMENSION"}`.
 
-   When this source is used, netdata keeps track of the last access time for each prometheus server fetching the metrics. This last access time is used at the subsequent queries of the same prometheus server to identify the time-frame the `average` will be calculated. So, no matter how frequently prometheus scrapes netdata, it will get all the database data. To identify each prometheus server, netdata uses by default the IP of the client fetching the metrics. If there are multiple prometheus servers fetching data from the same netdata, using the same IP, each prometheus server can append `server=NAME` to the URL. Netdata will use this `NAME` to uniquely identify the prometheus server.
+   When this source is used, Netdata keeps track of the last access time for each prometheus server fetching the metrics. This last access time is used at the subsequent queries of the same prometheus server to identify the time-frame the `average` will be calculated. So, no matter how frequently prometheus scrapes Netdata, it will get all the database data. To identify each prometheus server, Netdata uses by default the IP of the client fetching the metrics. If there are multiple prometheus servers fetching data from the same Netdata, using the same IP, each prometheus server can append `server=NAME` to the URL. Netdata will use this `NAME` to uniquely identify the prometheus server.
 
 - `sum` or `volume`, is like `average` but instead of averaging the values, it sums them.
 
    The format of the metrics is: `CONTEXT_UNITS_sum{chart="CHART",family="FAMILY",dimension="DIMENSION"}`.
    All the other operations are the same with `average`. 
 
-Keep in mind that early versions of netdata were sending the metrics as: `CHART_DIMENSION{}`.
+Keep in mind that early versions of Netdata were sending the metrics as: `CHART_DIMENSION{}`.
 
 ### Querying Metrics
 
@@ -241,11 +241,11 @@ Fetch with your web browser this URL:
 
 `http://your.netdata.ip:19999/api/v1/allmetrics?format=prometheus&help=yes`
 
-*(replace `your.netdata.ip` with the ip or hostname of your netdata server)*
+*(replace `your.netdata.ip` with the ip or hostname of your Netdata server)*
 
-netdata will respond with all the metrics it sends to prometheus.
+Netdata will respond with all the metrics it sends to prometheus.
 
-If you search that page for `"system.cpu"` you will find all the metrics netdata is exporting to prometheus for this chart.  `system.cpu` is the chart name on the netdata dashboard (on the netdata dashboard all charts have a text heading such as : `Total CPU utilization (system.cpu)`. What we are interested here in the chart name: `system.cpu`).
+If you search that page for `"system.cpu"` you will find all the metrics Netdata is exporting to prometheus for this chart. `system.cpu` is the chart name on the Netdata dashboard (on the Netdata dashboard all charts have a text heading such as: `Total CPU utilization (system.cpu)`. What we are interested in here is the chart name: `system.cpu`).
 
 Searching for `"system.cpu"` reveals:
 
@@ -272,7 +272,7 @@ netdata_system_cpu_percentage_average{chart="system.cpu",family="cpu",dimension=
 # COMMENT netdata_system_cpu_percentage_average: dimension "idle", value is percentage, gauge, dt 1500066653 to 1500066662 inclusive
 netdata_system_cpu_percentage_average{chart="system.cpu",family="cpu",dimension="idle"} 92.3630770 1500066662000
 ```
-*(netdata response for `system.cpu` with source=`average`)*
+*(Netdata response for `system.cpu` with source=`average`)*
 
 In `average` or `sum` data sources, all values are normalized and are reported to prometheus as gauges. Now, use the 'expression' text form in prometheus. Begin to type the metrics we are looking for: `netdata_system_cpu`. You should see that the text form begins to auto-fill as prometheus knows about this metric.
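Each exposition line above follows the pattern `NAME{labels} VALUE TIMESTAMP`. A minimal parser for one such line can be sketched as follows (an illustration only; production code should use a proper prometheus client library):

```python
import re

def parse_sample(line):
    """Split one prometheus exposition line into (name, labels, value, ts)."""
    m = re.match(r'(\w+)\{([^}]*)\}\s+([0-9.eE+-]+)(?:\s+([0-9]+))?', line)
    name, raw_labels, value, ts = m.groups()
    labels = {}
    for pair in raw_labels.split(','):
        key, val = pair.split('=', 1)
        labels[key] = val.strip('"')  # label values are double-quoted
    return name, labels, float(value), int(ts) if ts else None

# One of the lines from the response shown above.
sample = ('netdata_system_cpu_percentage_average'
          '{chart="system.cpu",family="cpu",dimension="idle"} '
          '92.3630770 1500066662000')
```

Running `parse_sample(sample)` recovers the chart, family, and dimension labels along with the gauge value and its millisecond timestamp.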
 
@@ -302,13 +302,13 @@ netdata_system_cpu_total{chart="system.cpu",family="cpu",dimension="iowait"} 233
 netdata_system_cpu_total{chart="system.cpu",family="cpu",dimension="idle"} 918470 1500066716438
 ```
 
-*(netdata response for `system.cpu` with source=`as-collected`)*
+*(Netdata response for `system.cpu` with source=`as-collected`)*
 
 For more information check prometheus documentation.
 
 ### Streaming data from upstream hosts
 
-The `format=prometheus` parameter only exports the host's netdata metrics.  If you are using the master/slave functionality of netdata this ignores any upstream hosts - so you should consider using the below in your **prometheus.yml**:
+The `format=prometheus` parameter only exports the host's Netdata metrics.  If you are using the master/slave functionality of Netdata this ignores any upstream hosts - so you should consider using the below in your **prometheus.yml**:
 
 ```
     metrics_path: '/api/v1/allmetrics'
@@ -321,13 +321,13 @@ This will report all upstream host data, and `honor_labels` will make Prometheus
 
 ### Timestamps
 
-To pass the metrics through prometheus pushgateway, netdata supports the option `&timestamps=no` to send the metrics without timestamps.
+To pass the metrics through prometheus pushgateway, Netdata supports the option `&timestamps=no` to send the metrics without timestamps.
 
 ## Netdata host variables
 
-netdata collects various system configuration metrics, like the max number of TCP sockets supported, the max number of files allowed system-wide, various IPC sizes, etc. These metrics are not exposed to prometheus by default.
+Netdata collects various system configuration metrics, like the max number of TCP sockets supported, the max number of files allowed system-wide, various IPC sizes, etc. These metrics are not exposed to prometheus by default.
 
-To expose them, append `variables=yes` to the netdata URL.
+To expose them, append `variables=yes` to the Netdata URL.
 
 ### TYPE and HELP
 
@@ -335,7 +335,7 @@ To save bandwidth, and because prometheus does not use them anyway, `# TYPE` and
 
 ### Names and IDs
 
-netdata supports names and IDs for charts and dimensions. Usually IDs are unique identifiers as read by the system and names are human friendly labels (also unique).
+Netdata supports names and IDs for charts and dimensions. Usually IDs are unique identifiers as read by the system and names are human friendly labels (also unique).
 
 Most charts and metrics have the same ID and name, but in several cases they are different: disks with device-mapper, interrupts, QoS classes, statsd synthetic charts, etc.
 
@@ -353,7 +353,7 @@ You can overwrite it from prometheus, by appending to the URL:
 
 ### Filtering metrics sent to prometheus
 
-netdata can filter the metrics it sends to prometheus with this setting:
+Netdata can filter the metrics it sends to prometheus with this setting:
 
 ```
 [backend]
@@ -362,9 +362,9 @@ netdata can filter the metrics it sends to prometheus with this setting:
 
 This settings accepts a space separated list of patterns to match the **charts** to be sent to prometheus. Each pattern can use ` * ` as wildcard, any number of times (e.g `*a*b*c*` is valid). Patterns starting with ` ! ` give a negative match (e.g `!*.bad users.* groups.*` will send all the users and groups except `bad` user and `bad` group). The order is important: the first match (positive or negative) left to right, is used.
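The matching rules for `send charts matching` can be sketched with Python's `fnmatch` (an illustrative re-implementation of the semantics described above, not Netdata's code):

```python
from fnmatch import fnmatchcase

def send_chart(chart, patterns):
    """The first match (positive or negative), left to right, wins."""
    for pattern in patterns.split():
        negative = pattern.startswith('!')
        if fnmatchcase(chart, pattern.lstrip('!')):
            return not negative
    return False  # no pattern matched: chart is not sent
```

With `!*.bad users.* groups.*`, the chart `users.alice` is sent, while `users.bad` hits the negative pattern first and is filtered out.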
 
-### Changing the prefix of netdata metrics
+### Changing the prefix of Netdata metrics
 
-netdata sends all metrics prefixed with `netdata_`. You can change this in `netdata.conf`, like this:
+Netdata sends all metrics prefixed with `netdata_`. You can change this in `netdata.conf`, like this:
 
 ```
 [backend]
@@ -383,8 +383,8 @@ To get the metric names as they were before v1.12, append to the URL `&oldunits=
 
 ### Accuracy of `average` and `sum` data sources
 
-When the data source is set to `average` or `sum`, netdata remembers the last access of each client accessing prometheus metrics and uses this last access time to respond with the `average` or `sum` of all the entries in the database since that. This means that prometheus servers are not losing data when they access netdata with data source = `average` or `sum`.
+When the data source is set to `average` or `sum`, Netdata remembers the last access of each client accessing prometheus metrics and uses this last access time to respond with the `average` or `sum` of all the entries in the database since that. This means that prometheus servers are not losing data when they access Netdata with data source = `average` or `sum`.
 
-To uniquely identify each prometheus server, netdata uses the IP of the client accessing the metrics. If however the IP is not good enough for identifying a single prometheus server (e.g. when prometheus servers are accessing netdata through a web proxy, or when multiple prometheus servers are NATed to a single IP), each prometheus may append `&server=NAME` to the URL. This `NAME` is used by netdata to uniquely identify each prometheus server and keep track of its last access time.
+To uniquely identify each prometheus server, Netdata uses the IP of the client accessing the metrics. If however the IP is not good enough for identifying a single prometheus server (e.g. when prometheus servers are accessing Netdata through a web proxy, or when multiple prometheus servers are NATed to a single IP), each prometheus may append `&server=NAME` to the URL. This `NAME` is used by Netdata to uniquely identify each prometheus server and keep track of its last access time.
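The last-access bookkeeping described here can be sketched as follows (a simplified illustration of the behavior, not Netdata's implementation):

```python
import time

last_access = {}  # prometheus server id -> timestamp of its previous scrape

def averaging_window_start(client_ip, server_name=None):
    """Return when this prometheus server's averaging window starts.

    A server is identified by its client IP, or by the NAME from
    `&server=NAME` when several servers share one IP (proxy, NAT).
    """
    key = server_name or client_ip
    now = time.time()
    start = last_access.get(key, now)  # first scrape: empty window
    last_access[key] = now
    return start
```

Each scrape's window starts where the same server's previous scrape ended, so no database entries are skipped or double-counted regardless of the scrape interval.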
 
 [![analytics](https://www.google-analytics.com/collect?v=1&aip=1&t=pageview&_s=1&ds=github&dr=https%3A%2F%2Fgithub.com%2Fnetdata%2Fnetdata&dl=https%3A%2F%2Fmy-netdata.io%2Fgithub%2Fbackends%2Fprometheus%2FREADME&_u=MAC~&cid=5792dfd7-8dc4-476b-af31-da2fdb9f93d2&tid=UA-64295674-3)]()

+ 1 - 1
backends/prometheus/remote_write/README.md

@@ -2,7 +2,7 @@
 
 ## Prerequisites
 
-To use the prometheus remote write API with [storage providers](https://prometheus.io/docs/operating/integrations/#remote-endpoints-and-storage) [protobuf](https://developers.google.com/protocol-buffers/) and [snappy](https://github.com/google/snappy) libraries should be installed first. Next, netdata should be re-installed from the source. The installer will detect that the required libraries and utilities are now available.
+To use the prometheus remote write API with [storage providers](https://prometheus.io/docs/operating/integrations/#remote-endpoints-and-storage), the [protobuf](https://developers.google.com/protocol-buffers/) and [snappy](https://github.com/google/snappy) libraries should be installed first. Next, Netdata should be re-installed from source. The installer will detect that the required libraries and utilities are now available.
 
 ## Configuration
 
