
Fix Markdown Lint warnings (#6664)

* make remark access all directories

* detailed fix after autofix by remark lint

* cross check autofix for this set of files

* crosscheck more files

* crosschecking and small fixes

* crosscheck autofixed md files
Promise Akpan committed 5 years ago · commit f5006d51e8
10 changed files with 1057 additions and 1049 deletions
1.  CODE_OF_CONDUCT.md (+15, -16)
2.  CONTRIBUTING.md (+24, -21)
3.  CONTRIBUTORS.md (+75, -73)
4.  DOCUMENTATION.md (+7, -9)
5.  HISTORICAL_CHANGELOG.md (+459, -464)
6.  README.md (+258, -225)
7.  REDISTRIBUTED.md (+97, -126)
8.  SECURITY.md (+10, -10)
9.  backends/README.md (+99, -100)
10. backends/WALKTHROUGH.md (+13, -5)
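The commit message above describes an automated pass (remark's autofix) followed by manual cross-checking of the result. Two mechanical rewrites recur throughout the diffs: asterisk bullets become padded dash bullets (`* item` → `-   item`), and bare URLs become angle-bracket autolinks. As a rough illustration only, here is a toy Python sketch of those two line-level rewrites; it is not remark itself, and the function name and regexes are my own:

```python
import re

def autofix_line(line: str) -> str:
    """Toy sketch of two rewrites visible in this commit's diffs.
    NOT the real remark autofix -- just the observable transformations."""
    # Rewrite 1: a '*' bullet becomes '-' followed by three spaces of padding.
    m = re.match(r"^(\s*)\*\s+(.*)$", line)
    if m:
        line = f"{m.group(1)}-   {m.group(2)}"
    # Rewrite 2: wrap a bare http(s) URL in <angle brackets>, skipping URLs
    # already preceded by '<' or '(' (autolinks and inline links).
    line = re.sub(r"(?<![<(])(https?://\S+)(?![>)])", r"<\1>", line)
    return line

print(autofix_line("* Focusing on what is best for the community"))
print(autofix_line("available at https://example.com"))
```

Unlike the real tool, this sketch works line by line and knows nothing about nested lists, code fences, or link-reference definitions, so treat it purely as an illustration of the pattern.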

CODE_OF_CONDUCT.md (+15, -16)

@@ -14,22 +14,22 @@ appearance, race, religion, or sexual identity and orientation.
 Examples of behavior that contributes to creating a positive environment
 include:
 
-* Using welcoming and inclusive language
-* Being respectful of differing viewpoints and experiences
-* Gracefully accepting constructive criticism
-* Focusing on what is best for the community
-* Showing empathy towards other community members
+-   Using welcoming and inclusive language
+-   Being respectful of differing viewpoints and experiences
+-   Gracefully accepting constructive criticism
+-   Focusing on what is best for the community
+-   Showing empathy towards other community members
 
 Examples of unacceptable behavior by participants include:
 
-* The use of sexualized language or imagery and unwelcome sexual attention or
-  advances
-* Trolling, insulting/derogatory comments, and personal or political attacks
-* Public or private harassment
-* Publishing others' private information, such as a physical or electronic
-  address, without explicit permission
-* Other conduct which could reasonably be considered inappropriate in a
-  professional setting
+-   The use of sexualized language or imagery and unwelcome sexual attention or
+    advances
+-   Trolling, insulting/derogatory comments, and personal or political attacks
+-   Public or private harassment
+-   Publishing others' private information, such as a physical or electronic
+    address, without explicit permission
+-   Other conduct which could reasonably be considered inappropriate in a
+    professional setting
 
 ## Our Responsibilities
 
@@ -68,9 +68,8 @@ members of the project's leadership.
 ## Attribution
 
 This Code of Conduct is adapted from the [Contributor Covenant][homepage], version 1.4,
-available at https://www.contributor-covenant.org/version/1/4/code-of-conduct.html
+available at <https://www.contributor-covenant.org/version/1/4/code-of-conduct.html>
 
 [homepage]: https://www.contributor-covenant.org
 
-
-[![analytics](https://www.google-analytics.com/collect?v=1&aip=1&t=pageview&_s=1&ds=github&dr=https%3A%2F%2Fgithub.com%2Fnetdata%2Fnetdata&dl=https%3A%2F%2Fmy-netdata.io%2Fgithub%2FCODE_OF_CONDUCT&_u=MAC~&cid=5792dfd7-8dc4-476b-af31-da2fdb9f93d2&tid=UA-64295674-3)]()
+[![analytics](https://www.google-analytics.com/collect?v=1&aip=1&t=pageview&_s=1&ds=github&dr=https%3A%2F%2Fgithub.com%2Fnetdata%2Fnetdata&dl=https%3A%2F%2Fmy-netdata.io%2Fgithub%2FCODE_OF_CONDUCT&_u=MAC~&cid=5792dfd7-8dc4-476b-af31-da2fdb9f93d2&tid=UA-64295674-3)](<>)
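The bullet-style change in this file (asterisks replaced by dashes with three-space padding) is what remark's serializer emits for a dash bullet with tab-sized item indentation. A hedged sketch of a `.remarkrc` that would request that style; the exact plugin and setting values are assumptions, not taken from this commit:

```json
{
  "plugins": ["remark-preset-lint-markdown-style-guide"],
  "settings": {
    "bullet": "-",
    "listItemIndent": "tab"
  }
}
```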

CONTRIBUTING.md (+24, -21)

@@ -32,26 +32,29 @@ Netdata is a complex system, with many integrations for the various  collectors,
 #### Sponsor a collector
 
 Netdata is all about simplicity and meaningful presentation. A "sponsor" for a collector does the following:
- - Assists the devs with feedback on the charts.
- - Specifies the alarms that would make sense for each metric.
- - When the implementation passes QA, tests the implementation in production.
- - Uses the charts and alarms in his/her day to day work and provides additional feedback.
- - Requests additional improvements as things change (e.g. new versions of an API are available).
+
+-   Assists the devs with feedback on the charts.
+-   Specifies the alarms that would make sense for each metric.
+-   When the implementation passes QA, tests the implementation in production.
+-   Uses the charts and alarms in his/her day to day work and provides additional feedback.
+-   Requests additional improvements as things change (e.g. new versions of an API are available).
 
 #### Sponsor a backend
 
 We already support various [backends](backends) and we intend to support more. A "sponsor" for a backend: 
-- Suggests ways in which the information in Netdata could best be exposed to the particular backend, to facilitate meaningful presentation.
- - When the implementation passes QA, tests the implementation in production.
-- Uses the backend in his/her day to day work and provides additional feedback, after the backend is delivered.
- - Requests additional improvements as things change (e.g. new versions of the backend API are available).
+
+-   Suggests ways in which the information in Netdata could best be exposed to the particular backend, to facilitate meaningful presentation.
+-   When the implementation passes QA, tests the implementation in production.
+-   Uses the backend in his/her day to day work and provides additional feedback, after the backend is delivered.
+-   Requests additional improvements as things change (e.g. new versions of the backend API are available).
 
 #### Sponsor a notification method
 
 Netdata delivers alarms via various [notification methods](health/notifications). A "sponsor" for a notification method:
-- Points the devs to the documentation for the API and identifies any unusual features of interest (e.g. the ability in Slack to send a notification either to a channel or to a user). 
-- Uses the notification method in production and provides feedback.
-- Requests additional improvements as things change (e.g. new versions of the API are available).
+
+-   Points the devs to the documentation for the API and identifies any unusual features of interest (e.g. the ability in Slack to send a notification either to a channel or to a user). 
+-   Uses the notification method in production and provides feedback.
+-   Requests additional improvements as things change (e.g. new versions of the API are available).
 
 ## Experienced Users
 
@@ -75,7 +78,6 @@ We expect most contributions to be for new data collection plugins. You can read
 
 Of course we appreciate contributions for any other part of the NetData agent, including the [daemon](daemon), [backends for long term archiving](backends/), innovative ways of using the [REST API](web/api) to create cool [Custom Dashboards](web/gui/custom/) or to include NetData charts in other applications, similarly to what can be done with [Confluence](web/gui/confluence/).
 
-
 ### Contributions Ground Rules
 
 #### Code of Conduct and CLA
@@ -131,16 +133,18 @@ The single most important rule when writing code is this: *check the surrounding
 We use several different languages and have had contributions from several people with different styles. When in doubt, you can check similar existing code. 
 
 For C contributions in particular, we try to respect the [Linux kernel style](https://www.kernel.org/doc/html/v4.10/process/coding-style.html), with the following exceptions:
- - Use 4 space indentation instead of 8
- - We occassionally have multiple statements on a single line (e.g. `if (a) b;`)
- - Allow max line length of 120 chars 
- - Allow opening brace at the end of a function declaration: `function() {`. 
+
+-   Use 4 space indentation instead of 8
+-   We occassionally have multiple statements on a single line (e.g. `if (a) b;`)
+-   Allow max line length of 120 chars 
+-   Allow opening brace at the end of a function declaration: `function() {`. 
 
 ### Your first pull request
 
 There are several guides for pull requests, such as the following:
-- https://thenewstack.io/getting-legit-with-git-and-github-your-first-pull-request/
-- https://github.com/firstcontributions/first-contributions#first-contributions
+
+-   <https://thenewstack.io/getting-legit-with-git-and-github-your-first-pull-request/>
+-   <https://github.com/firstcontributions/first-contributions#first-contributions>
 
 However, it's not always that simple. Our [PR approval process](#pr-approval-process) and the several merges we do every day may cause your fork to get behind the Netdata master. If you worked on something that has changed in the meantime, you will be required to do a git rebase, to bring your fork to the correct state. A very easy to follow guide on how to do it without learning all the intricacies of GitHub can be found [here](https://medium.com/@ruthmpardee/git-fork-workflow-using-rebase-587a144be470)
 
@@ -154,5 +158,4 @@ We also have a series of automated checks running, such as linters to check code
 
 One special type of automated check is the "WIP" check. You may add "[WIP]" to the title of the PR, to tell us that the particular request is "Work In Progress" and should not be merged. You're still not done with it, you created it to get some feedback. When you're ready to get the final approvals and get it merged, just remove the "[WIP]" string from the title of your PR and the "WIP" check will pass.
 
-
-[![analytics](https://www.google-analytics.com/collect?v=1&aip=1&t=pageview&_s=1&ds=github&dr=https%3A%2F%2Fgithub.com%2Fnetdata%2Fnetdata&dl=https%3A%2F%2Fmy-netdata.io%2Fgithub%2FCONTRIBUTING&_u=MAC~&cid=5792dfd7-8dc4-476b-af31-da2fdb9f93d2&tid=UA-64295674-3)]()
+[![analytics](https://www.google-analytics.com/collect?v=1&aip=1&t=pageview&_s=1&ds=github&dr=https%3A%2F%2Fgithub.com%2Fnetdata%2Fnetdata&dl=https%3A%2F%2Fmy-netdata.io%2Fgithub%2FCONTRIBUTING&_u=MAC~&cid=5792dfd7-8dc4-476b-af31-da2fdb9f93d2&tid=UA-64295674-3)](<>)

CONTRIBUTORS.md (+75, -73)

@@ -17,115 +17,117 @@ contributions for any other purpose.
 
 ## copyright license
 
-The Contributor (*you*) grants Netdata Inc. a perpetual, worldwide, non-exclusive,
+The Contributor (_you_) grants Netdata Inc. a perpetual, worldwide, non-exclusive,
 no-charge, royalty-free, irrevocable copyright license to reproduce,
 prepare derivative works of, publicly display, publicly perform, sublicense,
 and distribute his contributions and such derivative works.
 
 ## copyright transfer
 
-The Contributor (*you*) hereby assigns Netdata Inc. copyright in his
+The Contributor (_you_) hereby assigns Netdata Inc. copyright in his
 contributions, to be licensed under the same terms as the rest of the code.
 
-> *Note: this means we may re-license Netdata (your contributions included)
+> _Note: this means we may re-license Netdata (your contributions included)
 > any way we see fit, without asking your permission. 
 > We intend to keep the Netdata agent forever FOSS.
 > But open-source licenses have significant differences and in our attempt to
 > help Netdata grow we may have to distribute it under a different license.
 > For example, CNCF, the Cloud Native Computing Foundation, requires Netdata
 > to be licensed under Apache-2.0 for it to be accepted as a member of the
-> Foundation. We want to be free to do it.*
+> Foundation. We want to be free to do it._
 
 ## original work
 
-The Contributor (*you*) represent that each of his contributions is his
+The Contributor (_you_) represent that each of his contributions is his
 original creation and that he is legally entitled to grant the above license.
 
-> *Note: if you are committing third party code, please make sure the third party
+> _Note: if you are committing third party code, please make sure the third party
 > license or any other restrictions are also included with your commits.
 > Netdata includes many third party libraries and tools and this is not a
 > problem, provided that the license of the third party code is compatible with
-> the one we use for Netdata.*
+> the one we use for Netdata._
 
 ## signature
 
-Since Sep 17th 2018, we use https://cla-assistant.io/netdata/netdata for signing the CLA, on all pull requests.
+Since Sep 17th 2018, we use <https://cla-assistant.io/netdata/netdata> for signing the CLA, on all pull requests.
 Old contributors can sign the CLA at any time using this link.
 
 ## HISTORICAL SIGNATURES
-(they have been imported to https://cla-assistant.io/netdata/netdata already)
 
-The Contributor (*you*) signs this agreement by adding his personal data in
+(they have been imported to <https://cla-assistant.io/netdata/netdata> already)
+
+The Contributor (_you_) signs this agreement by adding his personal data in
 this document and committing it to the project repo
 (the same way contributions are submitted to the project).
 
-By signing once, all contributions (past and future) of The Contributor (*you*),
+By signing once, all contributions (past and future) of The Contributor (_you_),
 are subject to this agreement.
 
-> *Note: so you have to:*
-> 1. add your github username and name in this file
-> 2. commit it to the repo with a PR, using the same github username, or include this change in your first PR.
+> _Note: so you have to:_
+>
+> 1.  add your github username and name in this file
+> 2.  commit it to the repo with a PR, using the same github username, or include this change in your first PR.
 
 # Netdata contributors
 
 This is the list of contributors that have signed this agreement:
 
-username|name|email (optional)
-:--------:|:----:|:----------------
-@lets00|Luís Eduardo|leduardo@lsd.ufcg.edu.br
-@ktsaou|Costa Tsaousis|costa@tsaousis.gr
-@tycho|Steven Noonan|steven@uplinklabs.net
-@philwhineray|Phil Whineray|
-@paulfantom|Paweł Krupa|pawel@krupa.net.pl
-@Ferroin|Austin S. Hemmelgarn|ahferroin7@gmail.com
-@glensc|Elan Ruusamäe|
-@l2isbad|Ilya Mashchenko|ilyamaschenko@gmail.com
-@rlefevre|Rémi Lefèvre|
-@vlvkobal|Vladimir Kobal|vlad@prokk.net
-@simonnagl|Simon Nagl|
-@manosf|Emmanouil Fokas|manosf@protonmail.com
-@user501254|Ashesh Singh|user501254@gmail.com
-@t-h-e|Stefan Forstenlechner|
-@facetoe|Facetoe|
-@ntlug|Christopher Cox|ccox@endlessnow.com
-@alonbl|Alon Bar-Lev|alon.barlev@gmail.com
-@Wing924|Wei He|weihe924stephen@gmail.com
-@NeonSludge|Kirill Buev|kirill.buev@gmx.com
-@kmlucy|Kyle Lucy|kmlucy@gmail.com
-@RicardoSette|Ricardo Sette|ricardosette@freebsdbrasil.com.br
-@383c57|Shinichi Tagashira|
-@davidak|David Kleuker|netdata-contributors+vyff@davidak.de
-@ccremer|Christian Cremer|
-@jimcooley|Jim Cooley|jim.cooley@healthvana.com
-@Chocobo1|Mike Tzou|
-@vinyasmusic|Vinyas Malagaudanavar|vinyasmusic@gmail.com
-@cosmix|Dimosthenis Kaponis|
-@shadycuz|Levi Blaney|shadycuz+spam@gmail.com
-@Flums|Philip Gabrielsen|philip@digno.no
-@domschl|Dominik Schlösser|dominik.schloesser@gmail.com
-@tioumen|Guillaume Hospital|
-@arch273|Jacob Ayres
-@x4FF3|David Fuellgraf|
-@jasonwbarnett|Jason Barnett|
-@ecowed|Ed Wade|
-@wungad|Rob Man|
-@rda0|Sven Mäder|maeder@phys.ethz.ch
-@alibo|Ali Borhani|aliborhani1@gmail.com
-@Nani-o|Sofiane Medjkoune|sofiane@medjkoune.fr
-@n0guest|Evgeniy K.|ask@osshelp.ru
-@amichelic|Adalbert Michelic|
-@abalabahaha|abalabahaha|hi@abal.moe
-@illes|Illes S.|
-@plasticrake|Patrick Seal
-@jonfairbanks|Jon Fairbanks
-@pjz|Paul Jimenez|pj@place.org
-@jgrossiord|Julien Grossiord|julien@grossiord.net
-@pohzipohzi|Poh Zi How
-@vladmovchan|Vladyslav Movchan|vladislav.movchan@gmail.com
-@gmosx|George Moschovitis
-@adherzog|Adam Herzog|adam@adamherzog.com
-@skrzyp1|Jerzy S.|
-@akwan|Alan Kwan|
-@underhood|Timotej Šiškovič|
-
-[![analytics](https://www.google-analytics.com/collect?v=1&aip=1&t=pageview&_s=1&ds=github&dr=https%3A%2F%2Fgithub.com%2Fnetdata%2Fnetdata&dl=https%3A%2F%2Fmy-netdata.io%2Fgithub%2FCONTRIBUTORS&_u=MAC~&cid=5792dfd7-8dc4-476b-af31-da2fdb9f93d2&tid=UA-64295674-3)]()
+|username|name|email (optional)|
+|:------:|:--:|:---------------|
+|@lets00|Luís Eduardo|leduardo@lsd.ufcg.edu.br|
+|@ktsaou|Costa Tsaousis|costa@tsaousis.gr|
+|@tycho|Steven Noonan|steven@uplinklabs.net|
+|@philwhineray|Phil Whineray||
+|@paulfantom|Paweł Krupa|pawel@krupa.net.pl|
+|@Ferroin|Austin S. Hemmelgarn|ahferroin7@gmail.com|
+|@glensc|Elan Ruusamäe||
+|@l2isbad|Ilya Mashchenko|ilyamaschenko@gmail.com|
+|@rlefevre|Rémi Lefèvre||
+|@vlvkobal|Vladimir Kobal|vlad@prokk.net|
+|@simonnagl|Simon Nagl||
+|@manosf|Emmanouil Fokas|manosf@protonmail.com|
+|@user501254|Ashesh Singh|user501254@gmail.com|
+|@t-h-e|Stefan Forstenlechner||
+|@facetoe|Facetoe||
+|@ntlug|Christopher Cox|ccox@endlessnow.com|
+|@alonbl|Alon Bar-Lev|alon.barlev@gmail.com|
+|@Wing924|Wei He|weihe924stephen@gmail.com|
+|@NeonSludge|Kirill Buev|kirill.buev@gmx.com|
+|@kmlucy|Kyle Lucy|kmlucy@gmail.com|
+|@RicardoSette|Ricardo Sette|ricardosette@freebsdbrasil.com.br|
+|@383c57|Shinichi Tagashira||
+|@davidak|David Kleuker|netdata-contributors+vyff@davidak.de|
+|@ccremer|Christian Cremer||
+|@jimcooley|Jim Cooley|jim.cooley@healthvana.com|
+|@Chocobo1|Mike Tzou||
+|@vinyasmusic|Vinyas Malagaudanavar|vinyasmusic@gmail.com|
+|@cosmix|Dimosthenis Kaponis||
+|@shadycuz|Levi Blaney|shadycuz+spam@gmail.com|
+|@Flums|Philip Gabrielsen|philip@digno.no|
+|@domschl|Dominik Schlösser|dominik.schloesser@gmail.com|
+|@tioumen|Guillaume Hospital||
+|@arch273|Jacob Ayres||
+|@x4FF3|David Fuellgraf||
+|@jasonwbarnett|Jason Barnett||
+|@ecowed|Ed Wade||
+|@wungad|Rob Man||
+|@rda0|Sven Mäder|maeder@phys.ethz.ch|
+|@alibo|Ali Borhani|aliborhani1@gmail.com|
+|@Nani-o|Sofiane Medjkoune|sofiane@medjkoune.fr|
+|@n0guest|Evgeniy K.|ask@osshelp.ru|
+|@amichelic|Adalbert Michelic||
+|@abalabahaha|abalabahaha|hi@abal.moe|
+|@illes|Illes S.||
+|@plasticrake|Patrick Seal||
+|@jonfairbanks|Jon Fairbanks||
+|@pjz|Paul Jimenez|pj@place.org|
+|@jgrossiord|Julien Grossiord|julien@grossiord.net|
+|@pohzipohzi|Poh Zi How||
+|@vladmovchan|Vladyslav Movchan|vladislav.movchan@gmail.com|
+|@gmosx|George Moschovitis||
+|@adherzog|Adam Herzog|adam@adamherzog.com|
+|@skrzyp1|Jerzy S.||
+|@akwan|Alan Kwan||
+|@underhood|Timotej Šiškovič||
+
+[![analytics](https://www.google-analytics.com/collect?v=1&aip=1&t=pageview&_s=1&ds=github&dr=https%3A%2F%2Fgithub.com%2Fnetdata%2Fnetdata&dl=https%3A%2F%2Fmy-netdata.io%2Fgithub%2FCONTRIBUTORS&_u=MAC~&cid=5792dfd7-8dc4-476b-af31-da2fdb9f93d2&tid=UA-64295674-3)](<>)

DOCUMENTATION.md (+7, -9)

@@ -2,7 +2,6 @@
 
 **Netdata is real-time health monitoring and performance troubleshooting for systems and applications.** It helps you instantly diagnose slowdowns and anomalies in your infrastructure with thousands of metrics, interactive visualizations, and insightful health alarms.
 
-
 ## Navigating the Netdata documentation
 
 Welcome! You've arrived at the documentation for Netdata. Use the links below to find answers to the most common questions about Netdata, such as how to install it, getting started guides, basic configuration, and adding more charts. Or, explore all of Netdata's documentation using the table of contents to your left.
@@ -30,19 +29,18 @@ Welcome! You've arrived at the documentation for Netdata. Use the links below to
 
 **Advanced users**: For those who already understand how to access a Netdata dashboard and perform basic configuration, feel free to see what's behind any of these other doors.
 
-  - [Netdata Behind Nginx](docs/Running-behind-nginx.md): Use an Nginx web server instead of Netdata's built-in server to enable TLS, HTTPS, and basic authentication.
-  - [Add More Charts](docs/Add-more-charts-to-netdata.md): Enable new internal or external plugins and understand when auto-detection works.
-  - [Performance](docs/Performance.md): Tips on running Netdata on devices with limited CPU and RAM resources, such as embedded devices, IoT, and edge devices.
-  - [Streaming](streaming/): Information for those who want to centralize Netdata metrics from any number of distributed agents.
-  - [Backends](backends/): Learn how to archive Netdata's real-time metrics to a time series database (like Prometheus) for long-term archiving.
-
+-   [Netdata Behind Nginx](docs/Running-behind-nginx.md): Use an Nginx web server instead of Netdata's built-in server to enable TLS, HTTPS, and basic authentication.
+-   [Add More Charts](docs/Add-more-charts-to-netdata.md): Enable new internal or external plugins and understand when auto-detection works.
+-   [Performance](docs/Performance.md): Tips on running Netdata on devices with limited CPU and RAM resources, such as embedded devices, IoT, and edge devices.
+-   [Streaming](streaming/): Information for those who want to centralize Netdata metrics from any number of distributed agents.
+-   [Backends](backends/): Learn how to archive Netdata's real-time metrics to a time series database (like Prometheus) for long-term archiving.
 
 Visit the [contributing guide](CONTRIBUTING.md), [contributing to documentation guide](docs/contributing/contributing-documentation.md), and [documentation style guide](docs/contributing/style-guide.md) to learn more about our community and how you can get started contributing to Netdata.
 
-
 ## Subscribe for news and tips from monitoring pros
 
 <script charset="utf-8" type="text/javascript" src="//js.hsforms.net/forms/shell.js"></script>
+
 <script>
   hbspt.forms.create({
     portalId: "4567453",
@@ -52,4 +50,4 @@ Visit the [contributing guide](CONTRIBUTING.md), [contributing to documentation
 
 ---
 
-![A GIF of the standard Netdata dashboard](https://user-images.githubusercontent.com/2662304/48346998-96cf3180-e685-11e8-9f4e-059d23aa3aa5.gif)
+![A GIF of the standard Netdata dashboard](https://user-images.githubusercontent.com/2662304/48346998-96cf3180-e685-11e8-9f4e-059d23aa3aa5.gif)

HISTORICAL_CHANGELOG.md (+459, -464)

@@ -1,655 +1,650 @@
 netdata (1.10.0) - 2018-03-27
 
  Please check full changelog at github.
- https://github.com/netdata/netdata/releases
- 
+ <https://github.com/netdata/netdata/releases>
 
 netdata (1.9.0) - 2017-12-17
 
  Please check full changelog at github.
- https://github.com/netdata/netdata/releases
- 
+ <https://github.com/netdata/netdata/releases>
 
 netdata (1.8.0) - 2017-09-17
 
  This is mainly a bugfix release.
  Please check full changelog at github.
- 
 
 netdata (1.7.0) - 2017-07-16
 
- * netdata is still spreading fast
+-   netdata is still spreading fast
 
-   we are at 320.000 users and 132.000 servers
+    we are at 320.000 users and 132.000 servers
 
-   Almost 100k new users, 52k new installations and 800k docker pulls
-   since the previous release, 4 and a half months ago.
+    Almost 100k new users, 52k new installations and 800k docker pulls
+    since the previous release, 4 and a half months ago.
 
-   netdata user base grows at about 1000 new users and 600 new servers
-   per day. Thank you. You are awesome.
+    netdata user base grows at about 1000 new users and 600 new servers
+    per day. Thank you. You are awesome.
 
- * The next release (v1.8) will be focused on providing a global health
-   monitoring service, for all netdata users, for free.
+-   The next release (v1.8) will be focused on providing a global health
+    monitoring service, for all netdata users, for free.
 
- * netdata is now a (very fast) fully featured statsd server and the
-   only one with automatic visualization: push a statsd metric and hit
-   F5 on the netdata dashboard: your metric visualized. It also supports
-   synthetic charts, defined by you, so that you can correlate and
-   visualize your application the way you like it.
+-   netdata is now a (very fast) fully featured statsd server and the
+    only one with automatic visualization: push a statsd metric and hit
+    F5 on the netdata dashboard: your metric visualized. It also supports
+    synthetic charts, defined by you, so that you can correlate and
+    visualize your application the way you like it.
 
- * netdata got new installation options
-   It is now easier than ever to install netdata - we also distribute a
-   statically linked netdata x86_64 binary, including key dependencies
-   (like bash, curl, etc) that can run everywhere a Linux kernel runs
-   (CoreOS, CirrOS, etc).
+-   netdata got new installation options
+    It is now easier than ever to install netdata - we also distribute a
+    statically linked netdata x86_64 binary, including key dependencies
+    (like bash, curl, etc) that can run everywhere a Linux kernel runs
+    (CoreOS, CirrOS, etc).
 
- * metrics streaming and replication has been improved significantly.
-   All known issues have been solved and key enhancements have been added.
-   Headless collectors and proxies can now send metrics to backends when
-   data source = as collected.
+-   metrics streaming and replication has been improved significantly.
+    All known issues have been solved and key enhancements have been added.
+    Headless collectors and proxies can now send metrics to backends when
+    data source = as collected.
 
- * backends have got quite a few enhancements, including host tags and
-   metrics filtering at the netdata side;
-   prometheus support has been re-written to utilize more prometheus
-   features and provide more flexibility and integration options.
+-   backends have got quite a few enhancements, including host tags and
+    metrics filtering at the netdata side;
+    prometheus support has been re-written to utilize more prometheus
+    features and provide more flexibility and integration options.
 
- * netdata now monitors ZFS (on Linux and FreeBSD), ElasticSearch,
-   RabbitMQ, Go applications (via expvar), ipfw (on FreeBSD 11), samba,
-   squid logs (with web_log plugin).
+-   netdata now monitors ZFS (on Linux and FreeBSD), ElasticSearch,
+    RabbitMQ, Go applications (via expvar), ipfw (on FreeBSD 11), samba,
+    squid logs (with web_log plugin).
 
- * netdata dashboard loading times have been improved significantly
-   (hit F5 a few times on a netdata dashboard - it is now amazingly fast),
-   to support dashboards with thousands of charts.
+-   netdata dashboard loading times have been improved significantly
+    (hit F5 a few times on a netdata dashboard - it is now amazingly fast),
+    to support dashboards with thousands of charts.
 
- * netdata alarms now support custom hooks, so you can run whatever you
-   like in parallel with netdata alarms.
-
- * As usual, this release brings dozens of more improvements, enhancements
-   and compatibility fixes.
+-   netdata alarms now support custom hooks, so you can run whatever you
+    like in parallel with netdata alarms.
 
+-   As usual, this release brings dozens of more improvements, enhancements
+    and compatibility fixes.
 
 netdata (1.6.0) - 2017-03-20
 
- * birthday release: 1 year netdata
+-   birthday release: 1 year netdata
 
-   netdata was first published on March 30th, 2016.
-   It has been a crazy year since then:
+    netdata was first published on March 30th, 2016.
+    It has been a crazy year since then:
 
-     225.000 unique netdata users
-             currently, at 1.000 new unique users per day
+      225.000 unique netdata users
+              currently, at 1.000 new unique users per day
 
-      80.000 unique netdata installations
-             currently, at 500 new installation per day
+       80.000 unique netdata installations
+              currently, at 500 new installation per day
 
-     610.000 docker pulls on docker hub
+      610.000 docker pulls on docker hub
 
-   4.000.000 netdata sessions served
-             currently, at 15.000 sessions served per day
+    4.000.000 netdata sessions served
+              currently, at 15.000 sessions served per day
 
-      20.000 github stars
+       20.000 github stars
 
-             Thank you!
-          You are awesome!
+    ```
+          Thank you!
+       You are awesome!
+    ```
 
- * central netdata is here
+-   central netdata is here
 
-   This is the first release that supports real-time streaming of
-   metrics between netdata servers.
+    This is the first release that supports real-time streaming of
+    metrics between netdata servers.
 
-   netdata can now be:
+    netdata can now be:
 
-   - autonomous host monitoring
-     (like it always has been)
+    -   autonomous host monitoring
+        (like it always has been)
 
-   - headless data collector
-     (collect and stream metrics in real-time to another netdata)
+    -   headless data collector
+        (collect and stream metrics in real-time to another netdata)
 
-   - headless proxy
-     (collect metrics from multiple netdata and stream them to another netdata)
+    -   headless proxy
+        (collect metrics from multiple netdata and stream them to another netdata)
 
-   - store and forward proxy
-     (like headless proxy, but with a local database)
+    -   store and forward proxy
+        (like headless proxy, but with a local database)
 
-   - central database
-     (metrics from multiple hosts are aggregated)
+    -   central database
+        (metrics from multiple hosts are aggregated)
 
-   metrics databases can be configured on all nodes and each node maintaining
-   a database may have a different retention policy and possibly run
-   (even different) alarms on them.
+    metrics databases can be configured on all nodes and each node maintaining
+    a database may have a different retention policy and possibly run
+    (even different) alarms on them.
 
- * monitoring ephemeral nodes
+-   monitoring ephemeral nodes
 
-   netdata now supports monitoring autoscaled ephemeral nodes,
-   that are started and stopped on demand (their IP is not known).
+    netdata now supports monitoring autoscaled ephemeral nodes,
+    that are started and stopped on demand (their IP is not known).
 
-   When the ephemeral nodes start streaming metrics to the central
-   netdata, the central netdata will show register them at "my-netdata"
-   menu on the dashboard.
+    When the ephemeral nodes start streaming metrics to the central
+    netdata, the central netdata will show register them at "my-netdata"
+    menu on the dashboard.
 
-   For more information check:
-   https://github.com/netdata/netdata/tree/master/streaming#monitoring-ephemeral-nodes
+    For more information check:
+    <https://github.com/netdata/netdata/tree/master/streaming#monitoring-ephemeral-nodes>
 
- * monitoring ephemeral containers and VM guests
+-   monitoring ephemeral containers and VM guests
 
-   netdata now cleans up container, guest VM, network interfaces and mounted
-   disk metrics, disabling automatically their alarms too.
+    netdata now cleans up container, guest VM, network interfaces and mounted
+    disk metrics, disabling automatically their alarms too.
 
-   For more information check:
-   https://github.com/netdata/netdata/tree/master/collectors/cgroups.plugin#monitoring-ephemeral-containers
+    For more information check:
+    <https://github.com/netdata/netdata/tree/master/collectors/cgroups.plugin#monitoring-ephemeral-containers>
 
- * apps.plugin ported for FreeBSD
+-   apps.plugin ported for FreeBSD
 
-   @vlvkobal has ported "apps.plugin" to FreeBSD. netdata can now provide
-   "Applications", "Users" and "User Groups" on FreeBSD.
+    @vlvkobal has ported "apps.plugin" to FreeBSD. netdata can now provide
+    "Applications", "Users" and "User Groups" on FreeBSD.
 
- * web_log plugin
+-   web_log plugin
 
-   @l2isbad has done a wonderful job creating a unified web log parsing plugin
-   for all kinds of web server logs. With it, netdata provides real-time
-   performance information and health monitoring alarms for web applications
-   and web sites!
+    @l2isbad has done a wonderful job creating a unified web log parsing plugin
+    for all kinds of web server logs. With it, netdata provides real-time
+    performance information and health monitoring alarms for web applications
+    and web sites!
 
-   For more information check:
-   https://github.com/netdata/netdata/blob/master/collectors/python.d.plugin/web_log#web_log
+    For more information check:
+    <https://github.com/netdata/netdata/blob/master/collectors/python.d.plugin/web_log#web_log>
 
- * backends
+-   backends
 
-   netdata can now archive metrics to `JSON` backends
-   (both push, by @lfdominguez, and pull modes).
+    netdata can now archive metrics to `JSON` backends
+    (both push, by @lfdominguez, and pull modes).
 
- * IPMI monitoring
+-   IPMI monitoring
 
-   netdata now has an IPMI plugin (based on freeipmi)
-   for monitoring server hardware.
+    netdata now has an IPMI plugin (based on freeipmi)
+    for monitoring server hardware.
 
-   The plugin creates (up to) 8 charts:
+    The plugin creates (up to) 8 charts:
 
-    1. number of sensors by state
-    2. number of events in SEL
-    3. Temperatures CELCIUS
-    4. Temperatures FAHRENHEIT
-    5. Voltages
-    6. Currents
-    7. Power
-    8. Fans
+    1.  number of sensors by state
+    2.  number of events in SEL
+    3.  Temperatures CELSIUS
+    4.  Temperatures FAHRENHEIT
+    5.  Voltages
+    6.  Currents
+    7.  Power
+    8.  Fans
 
-   It also supports alarms (including the number of sensors in critical state).
+    It also supports alarms (including the number of sensors in critical state).
 
-   For more information, check:
-   https://github.com/netdata/netdata/tree/master/collectors/freeipmi.plugin
+    For more information, check:
+    <https://github.com/netdata/netdata/tree/master/collectors/freeipmi.plugin>
 
- * new plugins
+-   new plugins
 
-   @l2isbad builds python data collection plugins for netdata at an wonderfull
-   rate! He rocks!
+    @l2isbad builds python data collection plugins for netdata at a wonderful
+    rate! He rocks!
 
-    - **web_log** for monitoring in real-time all kinds of web server log files @l2isbad
-    - **freeipmi** for monitoring IPMI (server hardware)
-    - **nsd** (the [name server daemon](https://www.nlnetlabs.nl/projects/nsd/)) @383c57
-    - **mongodb** @l2isbad
-    - **smartd_log** (monitoring disk S.M.A.R.T. values) @l2isbad
+    -   **web_log** for monitoring in real-time all kinds of web server log files @l2isbad
+    -   **freeipmi** for monitoring IPMI (server hardware)
+    -   **nsd** (the [name server daemon](https://www.nlnetlabs.nl/projects/nsd/)) @383c57
+    -   **mongodb** @l2isbad
+    -   **smartd_log** (monitoring disk S.M.A.R.T. values) @l2isbad
 
- * improved plugins
+-   improved plugins
 
-    - **nfacct** reworked and now collects connection tracker information using netlink.
-    - **ElasticSearch** re-worked @l2isbad
-    - **mysql** re-worked to allow faster development of custom mysql based plugins (MySQLService) @l2isbad
-    - **SNMP**
-    - **tomcat** @NMcCloud
-    - **ap** (monitoring hostapd access points)
-    - **php_fpm** @l2isbad
-    - **postgres** @l2isbad
-    - **isc_dhcpd** @l2isbad
-    - **bind_rndc** @l2isbad
-    - **numa**
-    - **apps.plugin** improvements and freebsd support @vlvkobal
-    - **fail2ban** @l2isbad
-    - **freeradius** @l2isbad
-    - **nut** (monitoring UPSes)
-    - **tc** (Linux QoS) now works on qdiscs instead of classes for the same result (a lot faster) @t-h-e
-    - **varnish** @l2isbad
+    -   **nfacct** reworked and now collects connection tracker information using netlink.
+    -   **ElasticSearch** re-worked @l2isbad
+    -   **mysql** re-worked to allow faster development of custom mysql based plugins (MySQLService) @l2isbad
+    -   **SNMP**
+    -   **tomcat** @NMcCloud
+    -   **ap** (monitoring hostapd access points)
+    -   **php_fpm** @l2isbad
+    -   **postgres** @l2isbad
+    -   **isc_dhcpd** @l2isbad
+    -   **bind_rndc** @l2isbad
+    -   **numa**
+    -   **apps.plugin** improvements and freebsd support @vlvkobal
+    -   **fail2ban** @l2isbad
+    -   **freeradius** @l2isbad
+    -   **nut** (monitoring UPSes)
+    -   **tc** (Linux QoS) now works on qdiscs instead of classes for the same result (a lot faster) @t-h-e
+    -   **varnish** @l2isbad
 
- * new and improved alarms
-    - **web_log**, many alarms to detect common web site/API issues
-    - **fping**, alarms to detect packet loss, disconnects and unusually high latency
-    - **cpu**, cpu utilization alarm now ignores `nice`
+-   new and improved alarms
+    -   **web_log**, many alarms to detect common web site/API issues
+    -   **fping**, alarms to detect packet loss, disconnects and unusually high latency
+    -   **cpu**, cpu utilization alarm now ignores `nice`
 
- * new and improved alarm notification methods
-    - **HipChat** to allow hosted HipChat @frei-style
-    - **discordapp** @lowfive
+-   new and improved alarm notification methods
+    -   **HipChat** to allow hosted HipChat @frei-style
+    -   **discordapp** @lowfive
 
- * dashboard improvements
-    - dashboard now works on HiDPi screens
-    - dashboard now shows version of netdata
-    - dashboard now resets charts properly
-    - dashboard updated to use latest gauge.js release
+-   dashboard improvements
+    -   dashboard now works on HiDPi screens
+    -   dashboard now shows version of netdata
+    -   dashboard now resets charts properly
+    -   dashboard updated to use latest gauge.js release
 
- * other improvements
-    - thanks to @rlefevre netdata now uses a lot of different high resolution system clocks.
+-   other improvements
+    -   thanks to @rlefevre netdata now uses a lot of different high resolution system clocks.
 
 netdata has received a lot more improvements from many more contributors!
 
  Thank you all!
 
-
 netdata (1.5.0) - 2017-01-22
 
- * yet another release that makes netdata the fastest
-   netdata ever!
-
- * netdata runs on FreeBSD, FreeNAS and MacOS !
-
-   Vladimir Kobal (@vlvkobal) has done a magnificent work
-   porting netdata to FreeBSD and MacOS.
-
-   Everyhing works: cpu, memory, disks performance, disks space,
-   network interfaces, interrupts, IPv4 metrics, IPv6 metrics
-   processes, context switches, softnet, IPC queues,
-   IPC semaphores, IPC shared memory, uptime, etc. Wow!
-
- * netdata supports data archiving to backend databases:
-
-    - Graphite
-    - OpenTSDB
-    - Prometheus
-
-   and of course all the compatible ones
-   (KairosDB, InfluxDB, Blueflood, etc)
-
- * new plugins:
-
-   Ilya Mashchenko (@l2isbad) has created most of the python
-   data collection plugins in this release !
-
-    - systemd Services (using cgroups!)
-    - FPing (yes, network latency in netdata!)
-    - postgres databases            @facetoe, @moumoul
-    - Vanish disk cache (v3 and v4) @l2isbad
-    - ElasticSearch                 @l2isbad
-    - HAproxy                       @l2isbad
-    - FreeRadius                    @l2isbad, @lgz
-    - mdstat (RAID)                 @l2isbad
-    - ISC bind (via rndc)           @l2isbad
-    - ISC dhcpd                     @l2isbad, @lgz
-    - Fail2Ban                      @l2isbad
-    - OpenVPN status log            @l2isbad, @lgz
-    - NUMA memory                   @tycho
-    - CPU Idle                      @tycho
-    - gunicorn log                  @deltaskelta
-    - ECC memory hardware errors
-    - IPC semaphores
-    - uptime plugin (with a nice badge too)
-
- * improved plugins:
-
-    - netfilter conntrack
-    - mysql (replication)           @l2isbad
-    - ipfs                          @pjz
-    - cpufreq                       @tycho
-    - hddtemp                       @l2isbad
-    - sensors                       @l2isbad
-    - nginx                         @leolovenet
-    - nginx_log                     @paulfantom
-    - phpfpm                        @leolovenet
-    - redis                         @leolovenet
-    - dovecot                       @justohall
-    - cgroups
-    - disk space
-    - apps.plugin
-    - /proc/interrupts              @rlefevre
-    - /proc/softirqs                @rlefevre
-    - /proc/vmstat       (system memory charts)
-    - /proc/net/snmp6    (IPv6 charts)
-    - /proc/self/meminfo (system memory charts)
-    - /proc/net/dev      (network interfaces)
-    - tc                 (linux QoS)
-
- * new/improved alarms:
-
-    - MySQL / MariaDB alarms (incl. replication)
-    - IPFS alarms
-    - HAproxy alarms
-    - UDP buffer alarms
-    - TCP AttemptFails
-    - ECC memory alarms
-    - netfilter connections alarms
-    - SNMP
-
- * new alarm notifications:
-
-    - messagebird.com               @tech-no-logical
-    - pagerduty.com                 @jimcooley
-    - pushbullet.com                @tperalta82
-    - twilio.com                    @shadycuz
-    - HipChat
-    - kafka
-
- * shell integration
-
-   - shell scripts can now query netdata easily!
-
- * dashboard improvements:
-   - dashboard is now faster on firefox, safari, opera, edge
-     (edge is still the slowest)
-   - dashboard now has a little bigger fonts
-   - SHIFT + mouse wheel to zoom charts, works on all browsers
-   - perfect-scrollbar on the dashboard
-   - dashboard 4K resolution fixes
-   - dashboard compatibility fixes for embedding charts in
-     third party web sites
-   - charts on custom dashboards can have common min/max
-     even if they come from different netdata servers
-   - alarm log is now saved and loaded back so that
-     the alarm history is available at the dashboard
-
- * other improvements:
-   - python.d.plugin has received way to many improvements
-     from many contributors!
-   - charts.d.plugin can now be forked to support
-     multiple independent instances
-   - registry has been re-factored to lower its memory
-     requirements (required for the public registry)
-   - simple patterns in cgroups, disks and alarms
-   - netdata-installer.sh can now correctly install
-     netdata in containers
-   - supplied logrotate script compatibility fixes
-   - spec cleanup                  @breed808
-   - clocks and timers reworked    @rlefevre
+-   yet another release that makes netdata the fastest
+    netdata ever!
+
+-   netdata runs on FreeBSD, FreeNAS and MacOS!
+
+    Vladimir Kobal (@vlvkobal) has done magnificent work
+    porting netdata to FreeBSD and MacOS.
+
+    Everything works: cpu, memory, disks performance, disks space,
+    network interfaces, interrupts, IPv4 metrics, IPv6 metrics
+    processes, context switches, softnet, IPC queues,
+    IPC semaphores, IPC shared memory, uptime, etc. Wow!
+
+-   netdata supports data archiving to backend databases:
+
+    -   Graphite
+    -   OpenTSDB
+    -   Prometheus
+
+    and of course all the compatible ones
+    (KairosDB, InfluxDB, Blueflood, etc)
+
+-   new plugins:
+
+    Ilya Mashchenko (@l2isbad) has created most of the python
+    data collection plugins in this release!
+
+    -   systemd Services (using cgroups!)
+    -   FPing (yes, network latency in netdata!)
+    -   postgres databases            @facetoe, @moumoul
+    -   Varnish disk cache (v3 and v4) @l2isbad
+    -   ElasticSearch                 @l2isbad
+    -   HAproxy                       @l2isbad
+    -   FreeRadius                    @l2isbad, @lgz
+    -   mdstat (RAID)                 @l2isbad
+    -   ISC bind (via rndc)           @l2isbad
+    -   ISC dhcpd                     @l2isbad, @lgz
+    -   Fail2Ban                      @l2isbad
+    -   OpenVPN status log            @l2isbad, @lgz
+    -   NUMA memory                   @tycho
+    -   CPU Idle                      @tycho
+    -   gunicorn log                  @deltaskelta
+    -   ECC memory hardware errors
+    -   IPC semaphores
+    -   uptime plugin (with a nice badge too)
+
+-   improved plugins:
+
+    -   netfilter conntrack
+    -   mysql (replication)           @l2isbad
+    -   ipfs                          @pjz
+    -   cpufreq                       @tycho
+    -   hddtemp                       @l2isbad
+    -   sensors                       @l2isbad
+    -   nginx                         @leolovenet
+    -   nginx_log                     @paulfantom
+    -   phpfpm                        @leolovenet
+    -   redis                         @leolovenet
+    -   dovecot                       @justohall
+    -   cgroups
+    -   disk space
+    -   apps.plugin
+    -   /proc/interrupts              @rlefevre
+    -   /proc/softirqs                @rlefevre
+    -   /proc/vmstat       (system memory charts)
+    -   /proc/net/snmp6    (IPv6 charts)
+    -   /proc/self/meminfo (system memory charts)
+    -   /proc/net/dev      (network interfaces)
+    -   tc                 (linux QoS)
+
+-   new/improved alarms:
+
+    -   MySQL / MariaDB alarms (incl. replication)
+    -   IPFS alarms
+    -   HAproxy alarms
+    -   UDP buffer alarms
+    -   TCP AttemptFails
+    -   ECC memory alarms
+    -   netfilter connections alarms
+    -   SNMP
+
+-   new alarm notifications:
+
+    -   messagebird.com               @tech-no-logical
+    -   pagerduty.com                 @jimcooley
+    -   pushbullet.com                @tperalta82
+    -   twilio.com                    @shadycuz
+    -   HipChat
+    -   kafka
+
+-   shell integration
+
+    -   shell scripts can now query netdata easily!
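To illustrate the kind of query a shell script can make, here is a hedged Python sketch that only builds a data-API URL. The `/api/v1/data` endpoint with `chart`, `after` and `format` parameters is netdata's query API; the host, port (19999 is netdata's default) and chart id are placeholder assumptions:

```python
# Hedged sketch: compose a netdata data-API query URL for use from scripts.
# Host, port and chart id are placeholders; adjust to your own setup.
from urllib.parse import urlencode

def netdata_data_url(host: str, chart: str, after: int = -60,
                     fmt: str = "csv", port: int = 19999) -> str:
    """Return a /api/v1/data URL for the last `after` seconds of a chart."""
    params = urlencode({"chart": chart, "after": after, "format": fmt})
    return f"http://{host}:{port}/api/v1/data?{params}"
```

A script would then fetch the returned URL with `curl` or `urllib.request.urlopen` and parse the CSV.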
+
+-   dashboard improvements:
+    -   dashboard is now faster on firefox, safari, opera, edge
+        (edge is still the slowest)
+    -   dashboard now has a little bigger fonts
+    -   SHIFT + mouse wheel to zoom charts, works on all browsers
+    -   perfect-scrollbar on the dashboard
+    -   dashboard 4K resolution fixes
+    -   dashboard compatibility fixes for embedding charts in
+        third party web sites
+    -   charts on custom dashboards can have common min/max
+        even if they come from different netdata servers
+    -   alarm log is now saved and loaded back so that
+        the alarm history is available at the dashboard
+
+-   other improvements:
+    -   python.d.plugin has received way too many improvements
+        from many contributors!
+    -   charts.d.plugin can now be forked to support
+        multiple independent instances
+    -   registry has been re-factored to lower its memory
+        requirements (required for the public registry)
+    -   simple patterns in cgroups, disks and alarms
+    -   netdata-installer.sh can now correctly install
+        netdata in containers
+    -   supplied logrotate script compatibility fixes
+    -   spec cleanup                  @breed808
+    -   clocks and timers reworked    @rlefevre
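The "simple patterns" mentioned above are space-separated glob lists where a leading `!` negates a pattern and the first match wins. A minimal Python sketch under that reading (an illustration, not netdata's actual C implementation):

```python
# Minimal sketch of netdata-style "simple patterns": a space-separated list of
# glob patterns evaluated left to right; a leading '!' negates a pattern, and
# the first pattern that matches the value decides the result.
from fnmatch import fnmatchcase

def simple_pattern_matches(pattern_list: str, value: str) -> bool:
    for pat in pattern_list.split():
        negated = pat.startswith("!")
        if negated:
            pat = pat[1:]
        if fnmatchcase(value, pat):
            return not negated
    return False  # no pattern matched
```

For example, `"!*loop* *"` would accept `sda` but reject `loop0`.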
 
  netdata has received a lot more improvements from many more
  contributors!
 
  Thank you all guys!
- 
- 
+
 netdata (1.4.0) - 2016-10-04
 
  At a glance:
 
- - the fastest netdata ever (with a better look too)!
- - improved IoT and containers support!
- - alarms improved in almost every way!
-
- - new plugins:
-      softnet netdev,
-      extended TCP metrics,
-      UDPLite
-      NFS v2, v3 client (server was there already),
-      NFS v4 server & client,
-      APCUPSd,
-      RetroShare
-
- - improved plugins:
-      mysql,
-      cgroups,
-      hddtemp,
-      sensors,
-      phpfm,
-      tc (QoS)
+-   the fastest netdata ever (with a better look too)!
+
+-   improved IoT and containers support!
+
+-   alarms improved in almost every way!
+
+-   new plugins:
+       softnet netdev,
+       extended TCP metrics,
+       UDPLite
+       NFS v2, v3 client (server was there already),
+       NFS v4 server & client,
+       APCUPSd,
+       RetroShare
+
+-   improved plugins:
+       mysql,
+       cgroups,
+       hddtemp,
+       sensors,
+       phpfpm,
+       tc (QoS)
 
  In detail:
 
- * improved alarms
+-   improved alarms
+
+    Many new alarms have been added to detect common kernel
+    configuration errors and old alarms have been re-worked
+    to avoid notification floods.
 
-   Many new alarms have been added to detect common kernel
-   configuration errors and old alarms have been re-worked
-   to avoid notification floods.
+    Alarms now support notification hysteresis (both static
+    and dynamic), notification self-cancellation, dynamic
+    thresholds based on current alarm status.
 
-   Alarms now support notification hysteresis (both static
-   and dynamic), notification self-cancellation, dynamic
-   thresholds based on current alarm status
+-   improved alarm notifications
 
- * improved alarm notifications
+    netdata now supports:
 
-   netdata now supports:
+    -   email notifications
+    -   slack.com notifications on slack channels
+    -   pushover.net notifications (mobile push notifications)
+    -   telegram.org notifications
 
-   - email notifications
-   - slack.com notifications on slack channels
-   - pushover.net notifications (mobile push notifications)
-   - telegram.org notifications
+    For all the above methods, netdata supports role-based
+    notifications, with multiple recipients for each role
+    and severity filtering per recipient!
 
-   For all the above methods, netdata supports role-based
-   notifications, with multiple recipients for each role
-   and severity filtering per recipient!
+    Also, netdata supports HTML5 notifications, while the
+    dashboard is open in a browser window (no need to be
+    the active one).
 
-   Also, netdata support HTML5 notifications, while the
-   dashboard is open in a browser window (no need to be
-   the active one).
+    All notifications are now clickable to get to the chart
+    that raised the alarm.
 
-   All notifications are now clickable to get to the chart
-   that raised the alarm.
+-   improved IoT support!
 
- * improved IoT support!
+    netdata builds and runs with musl libc and runs on systems
+    based on busybox.
 
-   netdata builds and runs with musl libc and runs on systems
-   based on busybox.
+-   improved containers support!
 
- * improved containers support!
+    netdata runs on alpine linux (a low profile linux distribution
+    used in containers).
 
-   netdata runs on alpine linux (a low profile linux distribution
-   used in containers).
+-   Dozens of other improvements and bugfixes
 
- * Dozens of other improvements and bugfixes
- 
- 
 netdata (1.3.0) - 2016-08-28
 
  At a glance:
 
- - netdata has health monitoring / alarms!
- - netdata has badges that can be embeded anywhere!
- - netdata plugins are now written in Python!
- - new plugins: redis, memcached, nginx_log, ipfs, apache_cache
+-   netdata has health monitoring / alarms!
+-   netdata has badges that can be embedded anywhere!
+-   netdata plugins are now written in Python!
+-   new plugins: redis, memcached, nginx_log, ipfs, apache_cache
 
  IMPORTANT:
  Since netdata now uses Python plugins, new packages are
 required to be installed on a system to allow it to work.
  For more information, please check the installation page:
 
- https://github.com/netdata/netdata/tree/master/installer#installation
+ <https://github.com/netdata/netdata/tree/master/installer#installation>
 
  In detail:
 
- * netdata has alarms!
+-   netdata has alarms!
+
+    Based on the POLL we made on github
+    (<https://github.com/netdata/netdata/issues/436>),
+    health monitoring was the winner. So here it is!
 
-   Based on the POLL we made on github
-   (https://github.com/netdata/netdata/issues/436),
-   health monitoring was the winner. So here it is!
+    netdata now has a powerful health monitoring system embedded.
+    Please check the wiki page:
 
-   netdata now has a poweful health monitoring system embedded.
-   Please check the wiki page:
+    <https://github.com/netdata/netdata/tree/master/health>
 
-   https://github.com/netdata/netdata/tree/master/health
+-   netdata has badges!
 
- * netdata has badges!
+    netdata can generate badges with live information from the
+    collected metrics.
+    Please check the wiki page:
 
-   netdata can generate badges with live information from the
-   collected metrics.
-   Please check the wiki page:
+    <https://github.com/netdata/netdata/tree/master/web/api/badges>
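As an illustration of composing a badge link, a hedged Python sketch. The `/api/v1/badge.svg` endpoint with a `chart` parameter comes from netdata's badge API; the host and chart id are placeholder assumptions:

```python
# Hedged sketch: compose a live-badge URL. Host and chart id are placeholders;
# /api/v1/badge.svg?chart=... is netdata's badge endpoint.
def badge_url(host: str, chart: str, port: int = 19999) -> str:
    return f"http://{host}:{port}/api/v1/badge.svg?chart={chart}"
```

The resulting URL can be dropped into any `<img src="...">` tag or Markdown image to show a live value.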
 
-   https://github.com/netdata/netdata/tree/master/web/api/badges
+-   netdata plugins are now written in Python!
 
- * netdata plugins are now written in Python!
+    Thanks to the great work of Paweł Krupa (@paulfantom), most BASH
+    plugins have been ported to Python.
 
-   Thanks to the great work of Paweł Krupa (@paulfantom), most BASH
-   plugins have been ported to Python.
+    The new python.d.plugin supports both python2 and python3 and
+    data collection from multiple sources for all modules.
 
-   The new python.d.plugin supports both python2 and python3 and
-   data collection from multiple sources for all modules.
+    The following pre-existing modules have been ported to Python:
 
-   The following pre-existing modules have been ported to Python:
+    -   apache
+    -   cpufreq
+    -   example
+    -   exim
+    -   hddtemp
+    -   mysql
+    -   nginx
+    -   phpfpm
+    -   postfix
+    -   sensors
+    -   squid
+    -   tomcat
 
-    - apache
-    - cpufreq
-    - example
-    - exim
-    - hddtemp
-    - mysql
-    - nginx
-    - phpfm
-    - postfix
-    - sensors
-    - squid
-    - tomcat
+    The following new modules have been added:
 
-   The following new modules have been added:
+    -   apache_cache
+    -   dovecot
+    -   ipfs
+    -   memcached
+    -   nginx_log
+    -   redis
 
-    - apache_cache
-    - dovecot
-    - ipfs
-    - memcached
-    - nginx_log
-    - redis
+-   other data collectors:
 
- * other data collectors:
+    -   Thanks to @simonnagl netdata now reports disk space usage.
 
-    - Thanks to @simonnagl netdata now reports disk space usage.
+-   dashboards now transfer certain settings from server to server
+    when changing servers via the my-netdata menu.
 
- * dashboards now transfer a certain settings from server to server
-   when changing servers via the my-netdata menu.
+    The settings transferred are the dashboard theme, the online
+    help status and current pan and zoom timeframe of the dashboard.
 
-   The settings transferred are the dashboard theme, the online
-   help status and current pan and zoom timeframe of the dashboard.
+-   API improvements:
 
- * API improvements:
+    -   reduction functions now support 'min', 'sum' and 'incremental-sum'.
 
-   - reduction functions now support 'min', 'sum' and 'incremental-sum'.
+    -   netdata now offers a multi-threaded and a single threaded
+        web server (single threaded is better for IoT).
 
-   - netdata now offers a multi-threaded and a single threaded
-     web server (single threaded is better for IoT).
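A hedged sketch of what these reduction functions compute over each group of collected points. Reading `incremental-sum` as the net change across the group is an assumption for illustration, not netdata's exact C code:

```python
def reduce_points(points: list[float], method: str) -> float:
    # Sketch of per-group reductions: plain min and sum, plus
    # "incremental-sum" interpreted (assumption) as the net change
    # from the first to the last point in the group.
    if method == "min":
        return min(points)
    if method == "sum":
        return sum(points)
    if method == "incremental-sum":
        return points[-1] - points[0]
    raise ValueError(f"unknown reduction: {method}")
```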
+-   apps.plugin improvements:
 
- * apps.plugin improvements:
+    -   can now run with command line argument 'without-files'
+        to prevent it from enumerating all the open files/sockets/pipes
+        of all running processes.
 
-   - can now run with command line argument 'without-files'
-     to prevent it from enumating all the open files/sockets/pipes
-     of all running processes.
+    -   apps.plugin now scales the collected values to match
+        the total system usage.
 
-   - apps.plugin now scales the collected values to match the
-     the total system usage.
+    -   apps.plugin can now report guest CPU usage per process.
 
-   - apps.plugin can now report guest CPU usage per process.
+    -   repeating errors are now logged once per process.
 
-   - repeating errors are now logged once per process.
+-   netdata now runs with IDLE process priority (lower than nice 19)
 
- * netdata now runs with IDLE process priority (lower than nice 19)
+-   netdata now instructs the kernel to kill it first when the system
+    runs out of memory.
 
- * netdata now instructs the kernel to kill it first when it starves
-   for memory.
+-   netdata listens for signals:
 
- * netdata listens for signals:
+    -   SIGHUP to netdata instructs it to re-open its log files
+        (new logrotate files added too).
 
-   - SIGHUP to netdata instructs it to re-open its log files
-     (new logrotate files added too).
+    -   SIGUSR1 to netdata saves the database
 
-   - SIGUSR1 to netdata saves the database
+    -   SIGUSR2 to netdata reloads health / alarms configuration
 
-   - SIGUSR2 to netdata reloads health / alarms configuration
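The signal-to-action wiring described above can be sketched as a small dispatch table. Only the signal/action pairing comes from the text; the action strings and the `handle` helper are illustrative stand-ins:

```python
import signal

# Signal wiring as described above; the action strings are stand-ins for the
# real handlers (log reopening, database save, health reload).
SIGNAL_ACTIONS = {
    signal.SIGHUP:  "reopen log files",
    signal.SIGUSR1: "save the database",
    signal.SIGUSR2: "reload health / alarms configuration",
}

def handle(signum: int) -> str:
    """Return the action a daemon like this would take for a signal."""
    return SIGNAL_ACTIONS.get(signum, "ignored")
```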
+-   netdata can now bind to multiple IPs and ports.
 
- * netdata can now bind to multiple IPs and ports.
+-   netdata now has a new systemd service file (it starts as user
+    netdata and does not fork).
 
- * netdata now has new systemd service file (it starts as user
-   netdata and does not fork).
+-   Dozens of other improvements and bugfixes
 
- * Dozens of other improvements and bugfixes
- 
- 
 netdata (1.2.0) - 2016-05-16
 
  At a glance:
 
- - netdata is now 30% faster
- - netdata now has a registry (my-netdata dashboard menu)
- - netdata now monitors Linux Containers (docker, lxc, etc)
+-   netdata is now 30% faster
+-   netdata now has a registry (my-netdata dashboard menu)
+-   netdata now monitors Linux Containers (docker, lxc, etc)
 
  IMPORTANT:
  This version requires libuuid. The package you need is:
 
-  - uuid-dev (debian/ubuntu), or
-  - libuuid-devel (centos/fedora/redhat)
+-   uuid-dev (debian/ubuntu), or
+-   libuuid-devel (centos/fedora/redhat)
 
  In detail:
 
- * netdata is now 30% faster !
+-   netdata is now 30% faster!
+
+    -   Patches submitted by @fredericopissarra improved overall
+        netdata performance by 10%.
+
+    -   A new improved search function in the internal indexes
+        made all searches faster by 50%, resulting in about
+        20% better performance for the core of netdata.
 
-   - Patches submitted by @fredericopissarra improved overall
-     netdata performance by 10%.
+    -   More efficient threads locking in key components
+        contributed to the overall efficiency.
 
-   - A new improved search function in the internal indexes
-     made all searches faster by 50%, resulting in about
-     20% better performance for the core of netdata.
+-   netdata now has a CENTRAL REGISTRY!
 
-   - More efficient threads locking in key components
-     contributed to the overal efficiency.
+    The central registry tracks all your netdata servers
+    and bookmarks them for you at the 'my-netdata' menu
+    on all dashboards.
 
- * netdata now has a CENTRAL REGISTRY !
+    Every netdata can act as a registry, but there is also
+    a global registry provided for free for all netdata users!
 
-   The central registry tracks all your netdata servers
-   and bookmarks them for you at the 'my-netdata' menu
-   on all dashboards.
+-   netdata now monitors CONTAINERS!
 
-   Every netdata can act as a registry, but there is also
-   a global registry provided for free for all netdata users!
+    docker, lxc, or anything else. For each container it monitors
+    CPU, RAM, DISK I/O (network interfaces were already monitored)
 
- * netdata now monitors CONTAINERS !
-   
-   docker, lxc, or anything else. For each container it monitors
-   CPU, RAM, DISK I/O (network interfaces were already monitored)
+-   apps.plugin: now uses linux capabilities by default
+    without setuid to root
 
- * apps.plugin: now uses linux capabilities by default
-   without setuid to root
+-   netdata now has an improved signal handler
+    thanks to @simonnagl
 
- * netdata has now an improved signal handler
-   thanks to @simonnagl
+-   API: new improved CORS support
 
- * API: new improved CORS support
+-   SNMP: counter64 support fixed
 
- * SNMP: counter64 support fixed
+-   MYSQL: more charts, about QCache, MyISAM key cache,
+    InnoDB buffer pools, open files
 
- * MYSQL: more charts, about QCache, MyISAM key cache,
-   InnoDB buffer pools, open files
+-   DISK charts now show mount point when available
 
- * DISK charts now show mount point when available
+-   Dashboard: improved support for older web browsers
+    and mobile web browsers (thanks to @simonnagl)
 
- * Dashboard: improved support for older web browsers
-   and mobile web browsers (thanks to @simonnagl)
+-   Multi-server dashboards now allow de-coupled refreshes for
+    each chart, so that if one netdata server has network latency
+    the other charts are not affected.
 
- * Multi-server dashboards now allow de-coupled refreshes for
-   each chart, so that if one netdata has a network latency
-   the other charts are not affected
+-   Several other minor improvements and bugfixes
 
- * Several other minor improvements and bugfixes
- 
- 
 netdata (1.1.0) - 2016-04-20
 
  Dozens of commits that improve netdata in several ways:
 
- - Data collection: added IPv6 monitoring
- - Data collection: added SYNPROXY DDoS protection monitoring
- - Data collection: apps.plugin: added charts for users and user groups
- - Data collection: apps.plugin: grouping of processes now support patterns
- - Data collection: apps.plugin: now it is faster, after the new features added
- - Data collection: better auto-detection of partitions for disk monitoring
- - Data collection: better fireqos intergation for QoS monitoring
- - Data collection: squid monitoring now uses squidclient
- - Data collection: SNMP monitoring now supports 64bit counters
- - API: fixed issues in CSV output generation
- - API: netdata can now be restricted to listen on a specific IP
- - Core and apps.plugin: error log flood protection
- - Dashboard: better error handling when the netdata server is unreachable
- - Dashboard: each chart now has a toolbox
- - Dashboard: on-line help support
- - Dashboard: check for netdata updates button
- - Dashboard: added example /tv.html dashboard
- - Packaging: now compiles with musl libc (alpine linux)
- - Packaging: added debian packaging
- - Packaging: support non-root installations
- - Packaging: the installer generates uninstall script
+-   Data collection: added IPv6 monitoring
+-   Data collection: added SYNPROXY DDoS protection monitoring
+-   Data collection: apps.plugin: added charts for users and user groups
+-   Data collection: apps.plugin: grouping of processes now support patterns
+-   Data collection: apps.plugin: now it is faster, after the new features added
+-   Data collection: better auto-detection of partitions for disk monitoring
+-   Data collection: better fireqos integration for QoS monitoring
+-   Data collection: squid monitoring now uses squidclient
+-   Data collection: SNMP monitoring now supports 64bit counters
+-   API: fixed issues in CSV output generation
+-   API: netdata can now be restricted to listen on a specific IP
+-   Core and apps.plugin: error log flood protection
+-   Dashboard: better error handling when the netdata server is unreachable
+-   Dashboard: each chart now has a toolbox
+-   Dashboard: on-line help support
+-   Dashboard: check for netdata updates button
+-   Dashboard: added example /tv.html dashboard
+-   Packaging: now compiles with musl libc (alpine linux)
+-   Packaging: added debian packaging
+-   Packaging: support non-root installations
+-   Packaging: the installer generates uninstall script
 
 netdata (1.0.0) - 2016-03-22
 
- - first public release
+-   first public release
 
 netdata (1.0.0-rc.1) - 2015-11-28
 
- - initial packaging
+-   initial packaging

+ 258 - 225
README.md

@@ -1,6 +1,6 @@
-# Netdata [![Build Status](https://travis-ci.com/netdata/netdata.svg?branch=master)](https://travis-ci.com/netdata/netdata) [![CII Best Practices](https://bestpractices.coreinfrastructure.org/projects/2231/badge)](https://bestpractices.coreinfrastructure.org/projects/2231) [![License: GPL v3+](https://img.shields.io/badge/License-GPL%20v3%2B-blue.svg)](https://www.gnu.org/licenses/gpl-3.0) [![analytics](https://www.google-analytics.com/collect?v=1&aip=1&t=pageview&_s=1&ds=github&dr=https%3A%2F%2Fgithub.com%2Fnetdata%2Fnetdata&dl=https%3A%2F%2Fmy-netdata.io%2Fgithub%2Freadme&_u=MAC~&cid=5792dfd7-8dc4-476b-af31-da2fdb9f93d2&tid=UA-64295674-3)]()
+# Netdata [![Build Status](https://travis-ci.com/netdata/netdata.svg?branch=master)](https://travis-ci.com/netdata/netdata) [![CII Best Practices](https://bestpractices.coreinfrastructure.org/projects/2231/badge)](https://bestpractices.coreinfrastructure.org/projects/2231) [![License: GPL v3+](https://img.shields.io/badge/License-GPL%20v3%2B-blue.svg)](https://www.gnu.org/licenses/gpl-3.0) [![analytics](https://www.google-analytics.com/collect?v=1&aip=1&t=pageview&_s=1&ds=github&dr=https%3A%2F%2Fgithub.com%2Fnetdata%2Fnetdata&dl=https%3A%2F%2Fmy-netdata.io%2Fgithub%2Freadme&_u=MAC~&cid=5792dfd7-8dc4-476b-af31-da2fdb9f93d2&tid=UA-64295674-3)](<>)
 
-[![Code Climate](https://codeclimate.com/github/netdata/netdata/badges/gpa.svg)](https://codeclimate.com/github/netdata/netdata) [![Codacy Badge](https://api.codacy.com/project/badge/Grade/a994873f30d045b9b4b83606c3eb3498)](https://www.codacy.com/app/netdata/netdata?utm_source=github.com&amp;utm_medium=referral&amp;utm_content=netdata/netdata&amp;utm_campaign=Badge_Grade) [![LGTM C](https://img.shields.io/lgtm/grade/cpp/g/netdata/netdata.svg?logo=lgtm)](https://lgtm.com/projects/g/netdata/netdata/context:cpp) [![LGTM JS](https://img.shields.io/lgtm/grade/javascript/g/netdata/netdata.svg?logo=lgtm)](https://lgtm.com/projects/g/netdata/netdata/context:javascript) [![LGTM PYTHON](https://img.shields.io/lgtm/grade/python/g/netdata/netdata.svg?logo=lgtm)](https://lgtm.com/projects/g/netdata/netdata/context:python)
+[![Code Climate](https://codeclimate.com/github/netdata/netdata/badges/gpa.svg)](https://codeclimate.com/github/netdata/netdata) [![Codacy Badge](https://api.codacy.com/project/badge/Grade/a994873f30d045b9b4b83606c3eb3498)](https://www.codacy.com/app/netdata/netdata?utm_source=github.com&utm_medium=referral&utm_content=netdata/netdata&utm_campaign=Badge_Grade) [![LGTM C](https://img.shields.io/lgtm/grade/cpp/g/netdata/netdata.svg?logo=lgtm)](https://lgtm.com/projects/g/netdata/netdata/context:cpp) [![LGTM JS](https://img.shields.io/lgtm/grade/javascript/g/netdata/netdata.svg?logo=lgtm)](https://lgtm.com/projects/g/netdata/netdata/context:javascript) [![LGTM PYTHON](https://img.shields.io/lgtm/grade/python/g/netdata/netdata.svg?logo=lgtm)](https://lgtm.com/projects/g/netdata/netdata/context:python)
 
 ---
 
@@ -8,7 +8,7 @@
 
 Netdata provides **unparalleled insights**, **in real-time**, of everything happening on the systems it runs (including web servers, databases, applications), using **highly interactive web dashboards**. It can run autonomously, without any third party components, or it can be integrated to existing monitoring tool chains (Prometheus, Graphite, OpenTSDB, Kafka, Grafana, etc).
 
-_Netdata is **fast** and **efficient**, designed to permanently run on all systems (**physical** & **virtual** servers, **containers**, **IoT** devices), without disrupting their core function._
+*Netdata is **fast** and **efficient**, designed to permanently run on all systems (**physical** & **virtual** servers, **containers**, **IoT** devices), without disrupting their core function.*
 
 Netdata is **free, open-source software** and it currently runs on **Linux**, **FreeBSD**, and **MacOS**.
 
@@ -23,18 +23,17 @@ Once you use it on your systems, **there is no going back**! *You have been warn
 
 [![Tweet about Netdata!](https://img.shields.io/twitter/url/http/shields.io.svg?style=social&label=Tweet%20about%20netdata)](https://twitter.com/intent/tweet?text=Netdata,%20real-time%20performance%20and%20health%20monitoring,%20done%20right!&url=https://my-netdata.io/&via=linuxnetdata&hashtags=netdata,monitoring)
 
-
 ## Contents
 
-1. [How it looks](#how-it-looks) - have a quick look at it
-2. [User base](#user-base) - who uses Netdata?
-3. [Quick Start](#quick-start) - try it now on your systems
-4. [Why Netdata](#why-netdata) - why people love Netdata, how it compares with other solutions
-5. [News](#news) - latest news about Netdata
-6. [How it works](#how-it-works) - high level diagram of how Netdata works
-7. [infographic](#infographic) - everything about Netdata, in a page
-8. [Features](#features) - what features does it have
-9. [Visualization](#visualization) - unique visualization features
+1.  [How it looks](#how-it-looks) - have a quick look at it
+2.  [User base](#user-base) - who uses Netdata?
+3.  [Quick Start](#quick-start) - try it now on your systems
+4.  [Why Netdata](#why-netdata) - why people love Netdata, how it compares with other solutions
+5.  [News](#news) - latest news about Netdata
+6.  [How it works](#how-it-works) - high level diagram of how Netdata works
+7.  [Infographic](#infographic) - everything about Netdata, in a page
+8.  [Features](#features) - what features does it have
+9.  [Visualization](#visualization) - unique visualization features
 10. [What does it monitor](#what-does-it-monitor) - which metrics it collects
 11. [Documentation](#documentation) - read the docs
 12. [Community](#community) - discuss with others and get support
@@ -62,11 +61,13 @@ You will find people working for **Amazon**, **Atos**, **Baidu**, **Cisco System
 **Vimeo**, and many more!
 
 ### Docker pulls
+
 We provide docker images for the most common architectures. These are statistics reported by docker hub:
 
 [![netdata/netdata (official)](https://img.shields.io/docker/pulls/netdata/netdata.svg?label=netdata/netdata+%28official%29)](https://hub.docker.com/r/netdata/netdata/) [![firehol/netdata (deprecated)](https://img.shields.io/docker/pulls/firehol/netdata.svg?label=firehol/netdata+%28deprecated%29)](https://hub.docker.com/r/firehol/netdata/) [![titpetric/netdata (donated)](https://img.shields.io/docker/pulls/titpetric/netdata.svg?label=titpetric/netdata+%28third+party%29)](https://hub.docker.com/r/titpetric/netdata/)
 
 ### Registry
+
 When you install multiple Netdata, they are integrated into **one distributed application**, via a [Netdata registry](registry/#registry). This is a web browser feature and it allows us to count the number of unique users and unique Netdata servers installed. The following information comes from the global public Netdata registry we run:
 
 [![User Base](https://registry.my-netdata.io/api/v1/badge.svg?chart=netdata.registry_entries&dimensions=persons&label=user%20base&units=M&value_color=blue&precision=2&divide=1000000&v43)](https://registry.my-netdata.io/#menu_netdata_submenu_registry) [![Monitored Servers](https://registry.my-netdata.io/api/v1/badge.svg?chart=netdata.registry_entries&dimensions=machines&label=servers%20monitored&units=k&divide=1000&value_color=orange&precision=2&v43)](https://registry.my-netdata.io/#menu_netdata_submenu_registry) [![Sessions Served](https://registry.my-netdata.io/api/v1/badge.svg?chart=netdata.registry_sessions&label=sessions%20served&units=M&value_color=yellowgreen&precision=2&divide=1000000&v43)](https://registry.my-netdata.io/#menu_netdata_submenu_registry)
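The registry behavior described above is controlled by a short section of `netdata.conf`; a minimal sketch, assuming you run your own registry (the announce URL is a placeholder, not part of this PR — by default agents announce to the public registry at `https://registry.my-netdata.io`):

```
[registry]
    enabled = yes
    registry to announce = http://registry.example.com:19999
```

Pointing your agents at a private registry keeps the browser-side bookkeeping of unique users and servers entirely in-house.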
@@ -74,6 +75,7 @@ When you install multiple Netdata, they are integrated into **one distributed ap
 *in the last 24 hours:*<br/> [![New Users Today](https://registry.my-netdata.io/api/v1/badge.svg?chart=netdata.registry_entries&dimensions=persons&after=-86400&options=unaligned&group=incremental-sum&label=new%20users%20today&units=null&value_color=blue&precision=0&v42)](https://registry.my-netdata.io/#menu_netdata_submenu_registry) [![New Machines Today](https://registry.my-netdata.io/api/v1/badge.svg?chart=netdata.registry_entries&dimensions=machines&group=incremental-sum&after=-86400&options=unaligned&label=servers%20added%20today&units=null&value_color=orange&precision=0&v42)](https://registry.my-netdata.io/#menu_netdata_submenu_registry) [![Sessions Today](https://registry.my-netdata.io/api/v1/badge.svg?chart=netdata.registry_sessions&after=-86400&group=incremental-sum&options=unaligned&label=sessions%20served%20today&units=null&value_color=yellowgreen&precision=0&v42)](https://registry.my-netdata.io/#menu_netdata_submenu_registry)
 
 ## Quick Start
+
 ![](https://registry.my-netdata.io/api/v1/badge.svg?chart=web_log_nginx.requests_per_url&options=unaligned&dimensions=kickstart&group=sum&after=-3600&label=last+hour&units=installations&value_color=orange&precision=0) ![](https://registry.my-netdata.io/api/v1/badge.svg?chart=web_log_nginx.requests_per_url&options=unaligned&dimensions=kickstart&group=sum&after=-86400&label=today&units=installations&precision=0)
 
 To install Netdata from source on any Linux system (physical, virtual, container, IoT, edge) and keep it up to date with our **nightly releases** automatically, run the following:
@@ -90,14 +92,14 @@ To learn more about the pros and cons of using *nightly* vs. *stable* releases,
 
 The above command will:
 
-- Install any required packages on your system (it will ask you to confirm before doing so)
-- Compile it, install it, and start it.
+-   Install any required packages on your system (it will ask you to confirm before doing so)
+-   Compile it, install it, and start it.
 
 More installation methods and additional options can be found at the [installation page](packaging/installer/#installation).
 
 To try Netdata in a docker container, run this:
 
-```
+```sh
 docker run -d --name=netdata \
   -p 19999:19999 \
   -v /etc/passwd:/host/etc/passwd:ro \
@@ -122,25 +124,25 @@ Netdata has a quite different approach to monitoring.
 
 Netdata is a monitoring agent you install on all your systems. It is:
 
-- a **metrics collector** - for system and application metrics (including web servers, databases, containers, etc)
-- a **time-series database** - all stored in memory (does not touch the disks while it runs)
-- a **metrics visualizer** - super fast, interactive, modern, optimized for anomaly detection
-- an **alarms notification engine** - an advanced watchdog for detecting performance and availability issues
+-   a **metrics collector** - for system and application metrics (including web servers, databases, containers, etc)
+-   a **time-series database** - all stored in memory (does not touch the disks while it runs)
+-   a **metrics visualizer** - super fast, interactive, modern, optimized for anomaly detection
+-   an **alarms notification engine** - an advanced watchdog for detecting performance and availability issues
 
 All the above, are packaged together in a very flexible, extremely modular, distributed application.
 
 This is how Netdata compares to other monitoring solutions:
 
-Netdata|others (open-source and commercial)
-:---:|:---:
-**High resolution metrics** (1s granularity)|Low resolution metrics (10s granularity at best)
-Monitors everything, **thousands of metrics per node**|Monitor just a few metrics
-UI is super fast, optimized for **anomaly detection**|UI is good for just an abstract view
-**Meaningful presentation**, to help you understand the metrics|You have to know the metrics before you start
-Install and get results **immediately**|Long preparation is required to get any useful results
-Use it for **troubleshooting** performance problems|Use them to get *statistics of past performance*
-**Kills the console** for tracing performance issues|The console is always required for troubleshooting
-Requires **zero dedicated resources**|Require large dedicated resources
+|Netdata|others (open-source and commercial)|
+|:-----:|:---------------------------------:|
+|**High resolution metrics** (1s granularity)|Low resolution metrics (10s granularity at best)|
+|Monitors everything, **thousands of metrics per node**|Monitor just a few metrics|
+|UI is super fast, optimized for **anomaly detection**|UI is good for just an abstract view|
+|**Meaningful presentation**, to help you understand the metrics|You have to know the metrics before you start|
+|Install and get results **immediately**|Long preparation is required to get any useful results|
+|Use it for **troubleshooting** performance problems|Use them to get *statistics of past performance*|
+|**Kills the console** for tracing performance issues|The console is always required for troubleshooting|
+|Requires **zero dedicated resources**|Require large dedicated resources|
 
 Netdata is **open-source**, **free**, super **fast**, very **easy**, completely **open**, extremely **efficient**,
 **flexible** and integrate-able.
@@ -156,21 +158,21 @@ Release v1.16.0 contains 40 bug fixes, 31 improvements and 20 documentation upda
 
 **Binary distributions.** To improve the security, speed and reliability of new Netdata installations, we are delivering our own, industry standard installation method, with binary package distributions. The RPM binaries for the most common OSs are already available on packagecloud and we’ll have the DEB ones available very soon. All distributions are considered in Beta and, as always, we depend on our amazing community for feedback on improvements.
 
- - Our stable distributions are at [netdata/netdata @ packagecloud.io](https://packagecloud.io/netdata/netdata)
- - The nightly builds are at [netdata/netdata-edge @ packagecloud.io](https://packagecloud.io/netdata/netdata-edge)
+-   Our stable distributions are at [netdata/netdata @ packagecloud.io](https://packagecloud.io/netdata/netdata)
+-   The nightly builds are at [netdata/netdata-edge @ packagecloud.io](https://packagecloud.io/netdata/netdata-edge)
 
 **Netdata now supports TLS encryption!** You can secure the communication to the [web server](https://docs.netdata.cloud/web/server/#enabling-tls-support), the [streaming connections from slaves to the master](https://docs.netdata.cloud/streaming/#securing-the-communication) and the connection to an [openTSDB backend](https://docs.netdata.cloud/backends/opentsdb/#https). 
 
 **This version also brings two long-awaited features to Netdata’s health monitoring:**
 
- - The [health management API](https://docs.netdata.cloud/web/api/health/#health-management-api) introduced in v1.12 allowed you to easily disable alarms and/or notifications while Netdata was running. However, those changes were not persisted across Netdata restarts. Since part of routine maintenance activities may involve completely restarting a monitoring node, Netdata now saves these configurations to disk, every time you issue a command to change the silencer settings. The new [LIST command](https://docs.netdata.cloud/web/api/health/#list-silencers) of the API allows you to view at any time which alarms are currently disabled or silenced.
- - A way for Netdata to [repeatedly send alarm notifications](https://docs.netdata.cloud/health/#alarm-line-repeat) for some, or all active alarms, at a frequency of your choosing. As a result, you will no longer have to worry about missing a notification, forgetting about a raised alarm. The default is still to only send a single notification, so that existing users are not surprised by a different behavior.  
+-   The [health management API](https://docs.netdata.cloud/web/api/health/#health-management-api) introduced in v1.12 allowed you to easily disable alarms and/or notifications while Netdata was running. However, those changes were not persisted across Netdata restarts. Since part of routine maintenance activities may involve completely restarting a monitoring node, Netdata now saves these configurations to disk, every time you issue a command to change the silencer settings. The new [LIST command](https://docs.netdata.cloud/web/api/health/#list-silencers) of the API allows you to view at any time which alarms are currently disabled or silenced.
+-   A way for Netdata to [repeatedly send alarm notifications](https://docs.netdata.cloud/health/#alarm-line-repeat) for some, or all active alarms, at a frequency of your choosing. As a result, you will no longer have to worry about missing a notification, forgetting about a raised alarm. The default is still to only send a single notification, so that existing users are not surprised by a different behavior.  
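The silencer settings above are driven over HTTP. A hedged sketch of exercising the health management API with `curl` — the host, port, and token location below are Netdata defaults assumed for illustration, not something this PR defines:

```sh
# Assumptions (Netdata defaults): agent at localhost:19999,
# API authorization token in /var/lib/netdata/netdata.api.key.
NETDATA="http://localhost:19999"
TOKEN_FILE="/var/lib/netdata/netdata.api.key"

# Persistently silence notifications for all alarms:
#   curl -H "X-Auth-Token: $(cat $TOKEN_FILE)" "$NETDATA/api/v1/manage/health?cmd=SILENCE%20ALL"
# The new LIST command shows which alarms are currently disabled or silenced:
#   curl -H "X-Auth-Token: $(cat $TOKEN_FILE)" "$NETDATA/api/v1/manage/health?cmd=LIST"

echo "management endpoint: $NETDATA/api/v1/manage/health"
```

Because the settings are now saved to disk, a `SILENCE ALL` issued before maintenance survives a full restart of the monitoring node.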
 
 As always, we’ve introduced new collectors, 5 of them this time:
 
- - Of special interest to people with Windows servers in their infrastructure is the [WMI collector](https://docs.netdata.cloud/collectors/go.d.plugin/modules/wmi/), though we are fully aware that we need to continue our efforts to do a proper port to Windows. 
- - The new `perf` plugin collects system-wide CPU performance statistics from Performance Monitoring Units (PMU) using the `perf_event_open()` system call. You can read a wonderful article on why this is useful [here](http://www.brendangregg.com/blog/2017-05-09/cpu-utilization-is-wrong.html).
- - The other three are collectors to monitor [Dnsmasq DHCP leases](https://docs.netdata.cloud/collectors/go.d.plugin/modules/dnsmasq_dhcp/), [Riak KV servers](https://docs.netdata.cloud/collectors/python.d.plugin/riakkv/) and [Pihole instances](https://docs.netdata.cloud/collectors/go.d.plugin/modules/pihole/). 
+-   Of special interest to people with Windows servers in their infrastructure is the [WMI collector](https://docs.netdata.cloud/collectors/go.d.plugin/modules/wmi/), though we are fully aware that we need to continue our efforts to do a proper port to Windows. 
+-   The new `perf` plugin collects system-wide CPU performance statistics from Performance Monitoring Units (PMU) using the `perf_event_open()` system call. You can read a wonderful article on why this is useful [here](http://www.brendangregg.com/blog/2017-05-09/cpu-utilization-is-wrong.html).
+-   The other three are collectors to monitor [Dnsmasq DHCP leases](https://docs.netdata.cloud/collectors/go.d.plugin/modules/dnsmasq_dhcp/), [Riak KV servers](https://docs.netdata.cloud/collectors/python.d.plugin/riakkv/) and [Pihole instances](https://docs.netdata.cloud/collectors/go.d.plugin/modules/pihole/). 
 
 Finally, the DB Engine introduced in v1.15.0 now uses much less memory and is more robust than before. 
 
@@ -182,15 +184,15 @@ Release v1.15.0 contains 11 bug fixes and 30 improvements.
 
 We are very happy and proud to be able to include two major improvements in this release: The aggregated node view and the [new database engine](https://docs.netdata.cloud/database/engine/). 
 
-*Aggregated node view*
+_Aggregated node view_
 
 The No. 1 request from our community has been a better way to view and manage their Netdata installations, via an aggregated view. The node menu with the simple list of hosts on the agent UI just didn't do it for people with hundreds, or thousands of instances. This release introduces the node view, which uses the power of [Netdata Cloud](https://blog.netdata.cloud/posts/netdata-cloud-announcement/) to deliver powerful views of a Netdata-based monitoring infrastructure. You can read more about Netdata Cloud and the future of Netdata [here](https://blog.netdata.cloud/posts/netdata-cloud-announcement/).
 
-*New database engine*
+_New database engine_
 
 Historically, Netdata has required a lot of memory for long-term metrics storage. To mitigate this we've been building a new DB engine for several months and will continue improving until it can become the default `memory mode` for new Netdata installations. The version included in release v1.15.0 already permits longer-term storage of compressed data and we'll continue reducing the required memory in following releases.
 
-*Other major additions*
+_Other major additions_
 
 We have added support for the [AWS Kinesis backend](https://docs.netdata.cloud/backends/aws_kinesis/) and new collectors for [OpenVPN](https://docs.netdata.cloud/collectors/go.d.plugin/modules/openvpn/), the [Tengine web server](https://docs.netdata.cloud/collectors/go.d.plugin/modules/tengine/), [ScaleIO (VxFlex OS)](https://docs.netdata.cloud/collectors/go.d.plugin/modules/scaleio/), [ioping-like latency metrics](https://docs.netdata.cloud/collectors/ioping.plugin/) and [Energi Core node instances](https://docs.netdata.cloud/collectors/python.d.plugin/energid/).
 
@@ -207,7 +209,7 @@ Finally, we built a process to quickly replace any problematic nightly builds an
 Release 1.14 contains 14 bug fixes and 24 improvements.
 
 The release introduces major additions to Kubernetes monitoring, with tens of new charts for [Kubelet](https://docs.netdata.cloud/collectors/go.d.plugin/modules/k8s_kubelet/), [kube-proxy](https://docs.netdata.cloud/collectors/go.d.plugin/modules/k8s_kubeproxy/) and [coredns](https://github.com/netdata/go.d.plugin/tree/master/modules/coredns) metrics, as well as significant improvements to the Netdata [helm chart](https://github.com/netdata/helmchart/). 
- 
+
 Two new collectors were added, to monitor [Docker hub](https://docs.netdata.cloud/collectors/go.d.plugin/modules/dockerhub/) and [Docker engine](https://docs.netdata.cloud/collectors/go.d.plugin/modules/docker_engine/) metrics. 
 
 Finally, v1.14  adds support for [version 2 cgroups](https://github.com/netdata/netdata/pull/5407), [OpenLDAP over TLS](https://github.com/netdata/netdata/pull/5859), [NVIDIA SMI free and per process memory](https://github.com/netdata/netdata/pull/5796/files) and [configurable syslog facilities](https://github.com/netdata/netdata/pull/5792). 
@@ -248,23 +250,23 @@ Patch release 1.12.1 contains 22 bug fixes and 8 improvements.
 Release 1.12 is made out of 211 pull requests and 22 bug fixes.
 The key improvements are:
 
-- Introducing `netdata.cloud`, the free Netdata service for all Netdata users
-- High performance plugins with go.d.plugin (data collection orchestrator written in Go)
-- 7 new data collectors and 11 rewrites of existing data collectors for improved performance
-- A new management API for all Netdata servers
-- Bind different functions of the Netdata APIs to different ports
-- Improved installation and updates
+-   Introducing `netdata.cloud`, the free Netdata service for all Netdata users
+-   High performance plugins with go.d.plugin (data collection orchestrator written in Go)
+-   7 new data collectors and 11 rewrites of existing data collectors for improved performance
+-   A new management API for all Netdata servers
+-   Bind different functions of the Netdata APIs to different ports
+-   Improved installation and updates
 
 ---
 
 `Nov 22nd, 2018` - **[Netdata v1.11.1 released!](https://github.com/netdata/netdata/releases)**
 
-- Improved internal database to support values above 64bit.
-- New data collection plugins: [`openldap`](collectors/python.d.plugin/openldap/), [`tor`](collectors/python.d.plugin/tor/), [`nvidia_smi`](collectors/python.d.plugin/nvidia_smi/).
-- Improved data collection plugins: Netdata now supports monitoring network interface aliases, [`smartd_log`](collectors/python.d.plugin/smartd_log/), [`cpufreq`](collectors/proc.plugin/README.md#cpu-frequency), [`sensors`](collectors/python.d.plugin/sensors/).
-- Health monitoring improvements: network interface congestion alarm restored, [`alerta.io`](health/notifications/alerta/), `conntrack_max`.
-- `my-netdata`menu has been refactored.
-- Packaging: `openrc` service definition got a few improvements.
+-   Improved internal database to support values above 64bit.
+-   New data collection plugins: [`openldap`](collectors/python.d.plugin/openldap/), [`tor`](collectors/python.d.plugin/tor/), [`nvidia_smi`](collectors/python.d.plugin/nvidia_smi/).
+-   Improved data collection plugins: Netdata now supports monitoring network interface aliases, [`smartd_log`](collectors/python.d.plugin/smartd_log/), [`cpufreq`](collectors/proc.plugin/README.md#cpu-frequency), [`sensors`](collectors/python.d.plugin/sensors/).
+-   Health monitoring improvements: network interface congestion alarm restored, [`alerta.io`](health/notifications/alerta/), `conntrack_max`.
+-   `my-netdata` menu has been refactored.
+-   Packaging: `openrc` service definition got a few improvements.
 
 ---
 
@@ -282,14 +284,14 @@ Netdata is a highly efficient, highly modular, metrics management engine. Its lo
 
 This is how it works:
 
-Function|Description|Documentation
-:---:|:---|:---:
-**Collect**|Multiple independent data collection workers are collecting metrics from their sources using the optimal protocol for each application and push the metrics to the database. Each data collection worker has lockless write access to the metrics it collects.|[`collectors`](collectors/#data-collection-plugins)
-**Store**|Metrics are stored in RAM in a round robin database (ring buffer), using a custom made floating point number for minimal footprint.|[`database`](database/#database)
-**Check**|A lockless independent watchdog is evaluating **health checks** on the collected metrics, triggers alarms, maintains a health transaction log and dispatches alarm notifications.|[`health`](health/#health-monitoring)
-**Stream**|An lockless independent worker is streaming metrics, in full detail and in real-time, to remote Netdata servers, as soon as they are collected.|[`streaming`](streaming/#streaming-and-replication)
-**Archive**|A lockless independent worker is down-sampling the metrics and pushes them to **backend** time-series databases.|[`backends`](backends/)
-**Query**|Multiple independent workers are attached to the [internal web server](web/server/#web-server), servicing API requests, including [data queries](web/api/queries/#database-queries).|[`web/api`](web/api/#api)
+|Function|Description|Documentation|
+|:------:|:----------|:-----------:|
+|**Collect**|Multiple independent data collection workers are collecting metrics from their sources using the optimal protocol for each application and push the metrics to the database. Each data collection worker has lockless write access to the metrics it collects.|[`collectors`](collectors/#data-collection-plugins)|
+|**Store**|Metrics are stored in RAM in a round robin database (ring buffer), using a custom made floating point number for minimal footprint.|[`database`](database/#database)|
+|**Check**|A lockless independent watchdog is evaluating **health checks** on the collected metrics, triggers alarms, maintains a health transaction log and dispatches alarm notifications.|[`health`](health/#health-monitoring)|
+|**Stream**|A lockless independent worker is streaming metrics, in full detail and in real-time, to remote Netdata servers, as soon as they are collected.|[`streaming`](streaming/#streaming-and-replication)|
+|**Archive**|A lockless independent worker is down-sampling the metrics and pushes them to **backend** time-series databases.|[`backends`](backends/)|
+|**Query**|Multiple independent workers are attached to the [internal web server](web/server/#web-server), servicing API requests, including [data queries](web/api/queries/#database-queries).|[`web/api`](web/api/#api)|
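The **Store** function in the table above is a round robin database: a fixed-size ring buffer in which the newest sample overwrites the oldest once capacity is reached. A toy sketch of the idea in shell — illustrative only; Netdata's real store is a custom C implementation using a compact floating point format:

```sh
capacity=3        # keep only the 3 most recent samples
samples=()
next=0

store() {
  samples[$next]=$1
  next=$(( (next + 1) % capacity ))   # wrap around: round robin
}

for v in 1 2 3 4 5; do store "$v"; done
echo "${samples[@]}"   # slots now hold "4 5 3": samples 1 and 2 were overwritten
```

The fixed footprint is what makes memory usage predictable regardless of how long the agent runs.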
 
 The result is a highly efficient, low latency system, supporting multiple readers and one writer on each metric.
 
@@ -300,7 +302,6 @@ Click it to to interact with it (it has direct links to documentation).
 
 [![image](https://user-images.githubusercontent.com/43294513/60951037-8ba5d180-a2f8-11e9-906e-e27356f168bc.png)](https://my-netdata.io/infographic.html)
 
-
 ## Features
 
 ![finger-video](https://user-images.githubusercontent.com/2662304/48346998-96cf3180-e685-11e8-9f4e-059d23aa3aa5.gif)
@@ -308,31 +309,34 @@ Click it to to interact with it (it has direct links to documentation).
 This is what you should expect from Netdata:
 
 ### General
-- **1s granularity** - the highest possible resolution for all metrics.
-- **Unlimited metrics** - collects all the available metrics, the more the better.
-- **1% CPU utilization of a single core** - it is super fast, unbelievably optimized.
-- **A few MB of RAM** - by default it uses 25MB RAM. [You size it](database).
-- **Zero disk I/O** - while it runs, it does not load or save anything (except `error` and `access` logs).
-- **Zero configuration** - auto-detects everything, it can collect up to 10000 metrics per server out of the box.
-- **Zero maintenance** - You just run it, it does the rest.
-- **Zero dependencies** - it is even its own web server, for its static web files and its web API (though its plugins may require additional libraries, depending on the applications monitored).
-- **Scales to infinity** - you can install it on all your servers, containers, VMs and IoTs. Metrics are not centralized by default, so there is no limit.
-- **Several operating modes** - Autonomous host monitoring (the default), headless data collector, forwarding proxy, store and forward proxy, central multi-host monitoring, in all possible configurations. Each node may have different metrics retention policy and run with or without health monitoring.
+
+-   **1s granularity** - the highest possible resolution for all metrics.
+-   **Unlimited metrics** - collects all the available metrics, the more the better.
+-   **1% CPU utilization of a single core** - it is super fast, unbelievably optimized.
+-   **A few MB of RAM** - by default it uses 25MB RAM. [You size it](database).
+-   **Zero disk I/O** - while it runs, it does not load or save anything (except `error` and `access` logs).
+-   **Zero configuration** - auto-detects everything, it can collect up to 10000 metrics per server out of the box.
+-   **Zero maintenance** - You just run it, it does the rest.
+-   **Zero dependencies** - it is even its own web server, for its static web files and its web API (though its plugins may require additional libraries, depending on the applications monitored).
+-   **Scales to infinity** - you can install it on all your servers, containers, VMs and IoTs. Metrics are not centralized by default, so there is no limit.
+-   **Several operating modes** - Autonomous host monitoring (the default), headless data collector, forwarding proxy, store and forward proxy, central multi-host monitoring, in all possible configurations. Each node may have different metrics retention policy and run with or without health monitoring.
 
 ### Health Monitoring & Alarms
-- **Sophisticated alerting** - comes with hundreds of alarms, **out of the box**! Supports dynamic thresholds, hysteresis, alarm templates, multiple role-based notification methods.
-- **Notifications**: [alerta.io](health/notifications/alerta/), [amazon sns](health/notifications/awssns/), [discordapp.com](health/notifications/discord/), [email](health/notifications/email/), [flock.com](health/notifications/flock/), [irc](health/notifications/irc/), [kavenegar.com](health/notifications/kavenegar/), [messagebird.com](health/notifications/messagebird/), [pagerduty.com](health/notifications/pagerduty/), [prowl](health/notifications/prowl/), [pushbullet.com](health/notifications/pushbullet/), [pushover.net](health/notifications/pushover/), [rocket.chat](health/notifications/rocketchat/), [slack.com](health/notifications/slack/), [smstools3](health/notifications/smstools3/), [syslog](health/notifications/syslog/), [telegram.org](health/notifications/telegram/), [twilio.com](health/notifications/twilio/), [web](health/notifications/web/) and [custom notifications](health/notifications/custom/).
+
+-   **Sophisticated alerting** - comes with hundreds of alarms, **out of the box**! Supports dynamic thresholds, hysteresis, alarm templates, multiple role-based notification methods.
+-   **Notifications**: [alerta.io](health/notifications/alerta/), [amazon sns](health/notifications/awssns/), [discordapp.com](health/notifications/discord/), [email](health/notifications/email/), [flock.com](health/notifications/flock/), [irc](health/notifications/irc/), [kavenegar.com](health/notifications/kavenegar/), [messagebird.com](health/notifications/messagebird/), [pagerduty.com](health/notifications/pagerduty/), [prowl](health/notifications/prowl/), [pushbullet.com](health/notifications/pushbullet/), [pushover.net](health/notifications/pushover/), [rocket.chat](health/notifications/rocketchat/), [slack.com](health/notifications/slack/), [smstools3](health/notifications/smstools3/), [syslog](health/notifications/syslog/), [telegram.org](health/notifications/telegram/), [twilio.com](health/notifications/twilio/), [web](health/notifications/web/) and [custom notifications](health/notifications/custom/).
 
 ### Integrations
-- **time-series dbs** - can archive its metrics to **Graphite**, **OpenTSDB**, **Prometheus**, **AWS Kinesis**, **MongoDB**, **JSON document DBs**, in the same or lower resolution (lower: to prevent it from congesting these servers due to the amount of data collected). Netdata also supports **Prometheus remote write API** which allows storing metrics to **Elasticsearch**, **Gnocchi**, **InfluxDB**, **Kafka**, **PostgreSQL/TimescaleDB**, **Splunk**, **VictoriaMetrics** and a lot of other [storage providers](https://prometheus.io/docs/operating/integrations/#remote-endpoints-and-storage).
+
+-   **time-series dbs** - can archive its metrics to **Graphite**, **OpenTSDB**, **Prometheus**, **AWS Kinesis**, **MongoDB**, **JSON document DBs**, in the same or lower resolution (lower: to prevent it from congesting these servers due to the amount of data collected). Netdata also supports **Prometheus remote write API** which allows storing metrics to **Elasticsearch**, **Gnocchi**, **InfluxDB**, **Kafka**, **PostgreSQL/TimescaleDB**, **Splunk**, **VictoriaMetrics** and a lot of other [storage providers](https://prometheus.io/docs/operating/integrations/#remote-endpoints-and-storage).
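
The Graphite backend, for instance, speaks the plaintext protocol: one `metric.path value timestamp` line per sample. A minimal sketch of that line format in Python (the `netdata` prefix and the host/chart names below are illustrative, not taken from any particular configuration):

```python
import time

def graphite_line(prefix, hostname, chart, dimension, value, timestamp=None):
    """Format one sample in the Graphite plaintext protocol:
    <metric.path> <value> <unix timestamp>\\n
    """
    ts = int(timestamp if timestamp is not None else time.time())
    return f"{prefix}.{hostname}.{chart}.{dimension} {value} {ts}\n"

# e.g. graphite_line("netdata", "host1", "system.cpu", "user", 3.5, 1561000000)
# -> "netdata.host1.system.cpu.user 3.5 1561000000\n"
```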
 
 ## Visualization
 
-- **Stunning interactive dashboards** - mouse, touchpad and touch-screen friendly in 2 themes: `slate` (dark) and `white`.
-- **Amazingly fast visualization** - responds to all queries in less than 1 ms per metric, even on low-end hardware.
-- **Visual anomaly detection** - the dashboards are optimized for detecting anomalies visually.
-- **Embeddable** - its charts can be embedded on your web pages, wikis and blogs. You can even use [Atlassian's Confluence as a monitoring dashboard](web/gui/confluence/).
-- **Customizable** - custom dashboards can be built using simple HTML (no javascript necessary).
+-   **Stunning interactive dashboards** - mouse, touchpad and touch-screen friendly in 2 themes: `slate` (dark) and `white`.
+-   **Amazingly fast visualization** - responds to all queries in less than 1 ms per metric, even on low-end hardware.
+-   **Visual anomaly detection** - the dashboards are optimized for detecting anomalies visually.
+-   **Embeddable** - its charts can be embedded on your web pages, wikis and blogs. You can even use [Atlassian's Confluence as a monitoring dashboard](web/gui/confluence/).
+-   **Customizable** - custom dashboards can be built using simple HTML (no JavaScript necessary).
 
 ### Positive and negative values
 
@@ -340,7 +344,7 @@ To improve clarity on charts, Netdata dashboards present **positive** values for
 
 ![positive-and-negative-values](https://user-images.githubusercontent.com/2662304/48309090-7c5c6180-e57a-11e8-8e03-3a7538c14223.gif)
 
-*Netdata charts showing the bandwidth and packets of a network interface. `received` is positive and `sent` is negative.*
+_Netdata charts showing the bandwidth and packets of a network interface. `received` is positive and `sent` is negative._
 
 ### Autoscaled y-axis
 
@@ -348,7 +352,7 @@ Netdata charts automatically zoom vertically, to visualize the variation of each
 
 ![non-zero-based](https://user-images.githubusercontent.com/2662304/48309139-3d2f1000-e57c-11e8-9a44-b91758134b00.gif)
 
-*A zero based `stacked` chart, automatically switches to an auto-scaled `area` chart when a single dimension is selected.*
+_A zero-based `stacked` chart automatically switches to an auto-scaled `area` chart when a single dimension is selected._
 
 ### Charts are synchronized
 
@@ -356,7 +360,7 @@ Charts on Netdata dashboards are synchronized to each other. There is no master
 
 ![charts-are-synchronized](https://user-images.githubusercontent.com/2662304/48309003-b4fb3b80-e578-11e8-86f6-f505c7059c15.gif)
 
-*Charts are panned by dragging them with the mouse. Charts can be zoomed in/out with`SHIFT` + `mouse wheel` while the mouse pointer is over a chart.*
+_Charts are panned by dragging them with the mouse. Charts can be zoomed in/out with `SHIFT` + `mouse wheel` while the mouse pointer is over a chart._
 
 > The visible time-frame (pan and zoom) is propagated from Netdata server to Netdata server, when navigating via the [node menu](registry#registry).
 
@@ -366,196 +370,225 @@ To improve visual anomaly detection across charts, the user can highlight a time
 
 ![highlighted-timeframe](https://user-images.githubusercontent.com/2662304/48311876-f9093300-e5ae-11e8-9c74-e3e291741990.gif)
 
-*A highlighted time-frame can be given by pressing `ALT` + `mouse selection` on any chart. Netdata will highlight the same range on all charts.*
+_A highlighted time-frame can be given by pressing `ALT` + `mouse selection` on any chart. Netdata will highlight the same range on all charts._
 
 > Highlighted ranges are propagated from Netdata server to Netdata server, when navigating via the [node menu](registry#registry).
 
-
 ## What does it monitor
 
 Netdata data collection is **extensible** - you can monitor anything you can get a metric for.
Its [Plugin API](collectors/plugins.d/) supports all programming languages (anything can be a Netdata plugin, BASH, python, perl, node.js, java, Go, ruby, etc).
 
-- For better performance, most system related plugins (cpu, memory, disks, filesystems, networking, etc) have been written in `C`.
-- For faster development and easier contributions, most application related plugins (databases, web servers, etc) have been written in `python`.
+-   For better performance, most system-related plugins (cpu, memory, disks, filesystems, networking, etc) have been written in `C`.
+-   For faster development and easier contributions, most application-related plugins (databases, web servers, etc) have been written in `python`.
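
An external plugin is just a long-running process that writes the plain-text `plugins.d` protocol (`CHART`, `DIMENSION`, `BEGIN`, `SET`, `END`) to standard output. A minimal sketch in Python - the chart and dimension names here are made up for illustration:

```python
import random

def chart_definition():
    # CHART type.id name title units family context charttype priority update_every
    return [
        "CHART example.random '' 'A Random Number' 'value' example '' line 90000 1",
        "DIMENSION number '' absolute 1 1",
    ]

def sample(value):
    # one data-collection iteration for the chart defined above
    return ["BEGIN example.random", f"SET number = {value}", "END"]

def one_iteration():
    """Emit a collected value; a real plugin would loop on this every second."""
    print("\n".join(sample(random.randint(0, 100))), flush=True)
```

A real plugin would print `chart_definition()` once at startup, then call `one_iteration()` on every update interval.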
 
 #### APM (Application Performance Monitoring)
-- **[statsd](collectors/statsd.plugin/)** - Netdata is a fully featured statsd server.
-- **[Go expvar](collectors/python.d.plugin/go_expvar/)** - collects metrics exposed by applications written in the Go programming language using the expvar package.
-- **[Spring Boot](collectors/python.d.plugin/springboot/)** - monitors running Java Spring Boot applications that expose their metrics with the use of the Spring Boot Actuator included in Spring Boot library.
-- **[uWSGI](collectors/python.d.plugin/uwsgi/)** - collects performance metrics from uWSGI applications.
+
+-   **[statsd](collectors/statsd.plugin/)** - Netdata is a fully featured statsd server.
+-   **[Go expvar](collectors/python.d.plugin/go_expvar/)** - collects metrics exposed by applications written in the Go programming language using the expvar package.
+-   **[Spring Boot](collectors/python.d.plugin/springboot/)** - monitors running Java Spring Boot applications that expose their metrics with the use of the Spring Boot Actuator included in Spring Boot library.
+-   **[uWSGI](collectors/python.d.plugin/uwsgi/)** - collects performance metrics from uWSGI applications.
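
The statsd line format is `name:value|type` (`c` for counters, `g` for gauges, `ms` for timers), sent over UDP to the statsd port (8125 by default). A sketch of a minimal client in Python - the metric names are illustrative:

```python
import socket

def send_statsd(metric, value, mtype="c", host="localhost", port=8125):
    """Send one metric in the statsd line format: <name>:<value>|<type>.
    Fire-and-forget over UDP, as statsd clients usually are."""
    line = f"{metric}:{value}|{mtype}"
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(line.encode("ascii"), (host, port))
    return line

# send_statsd("myapp.requests", 1, "c")      - counts one request
# send_statsd("myapp.queue_depth", 37, "g")  - sets a gauge
```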
 
 #### System Resources
-- **[CPU Utilization](collectors/proc.plugin/)** - total and per core CPU usage.
-- **[Interrupts](collectors/proc.plugin/)** - total and per core CPU interrupts.
-- **[SoftIRQs](collectors/proc.plugin/)** - total and per core SoftIRQs.
-- **[SoftNet](collectors/proc.plugin/)** - total and per core SoftIRQs related to network activity.
-- **[CPU Throttling](collectors/proc.plugin/)** - collects per core CPU throttling.
-- **[CPU Frequency](collectors/proc.plugin/)** - collects the current CPU frequency.
-- **[CPU Idle](collectors/proc.plugin/)** - collects the time spent per processor state.
-- **[IdleJitter](collectors/idlejitter.plugin/)** - measures CPU latency.
-- **[Entropy](collectors/proc.plugin/)** - random numbers pool, using in cryptography.
-- **[Interprocess Communication - IPC](collectors/proc.plugin/)** - such as semaphores and semaphores arrays.
+
+-   **[CPU Utilization](collectors/proc.plugin/)** - total and per core CPU usage.
+-   **[Interrupts](collectors/proc.plugin/)** - total and per core CPU interrupts.
+-   **[SoftIRQs](collectors/proc.plugin/)** - total and per core SoftIRQs.
+-   **[SoftNet](collectors/proc.plugin/)** - total and per core SoftIRQs related to network activity.
+-   **[CPU Throttling](collectors/proc.plugin/)** - collects per core CPU throttling.
+-   **[CPU Frequency](collectors/proc.plugin/)** - collects the current CPU frequency.
+-   **[CPU Idle](collectors/proc.plugin/)** - collects the time spent per processor state.
+-   **[IdleJitter](collectors/idlejitter.plugin/)** - measures CPU latency.
+-   **[Entropy](collectors/proc.plugin/)** - the random numbers pool, used in cryptography.
+-   **[Interprocess Communication - IPC](collectors/proc.plugin/)** - such as semaphores and semaphore arrays.
 
 #### Memory
-- **[ram](collectors/proc.plugin/)** - collects info about RAM usage.
-- **[swap](collectors/proc.plugin/)** - collects info about swap memory usage.
-- **[available memory](collectors/proc.plugin/)** - collects the amount of RAM available for userspace processes.
-- **[committed memory](collectors/proc.plugin/)** - collects the amount of RAM committed to userspace processes.
-- **[Page Faults](collectors/proc.plugin/)** - collects the system page faults (major and minor).
-- **[writeback memory](collectors/proc.plugin/)** - collects the system dirty memory and writeback activity.
-- **[huge pages](collectors/proc.plugin/)** - collects the amount of RAM used for huge pages.
-- **[KSM](collectors/proc.plugin/)** - collects info about Kernel Same Merging (memory dedupper).
-- **[Numa](collectors/proc.plugin/)** - collects Numa info on systems that support it.
-- **[slab](collectors/proc.plugin/)** - collects info about the Linux kernel memory usage.
+
+-   **[ram](collectors/proc.plugin/)** - collects info about RAM usage.
+-   **[swap](collectors/proc.plugin/)** - collects info about swap memory usage.
+-   **[available memory](collectors/proc.plugin/)** - collects the amount of RAM available for userspace processes.
+-   **[committed memory](collectors/proc.plugin/)** - collects the amount of RAM committed to userspace processes.
+-   **[Page Faults](collectors/proc.plugin/)** - collects the system page faults (major and minor).
+-   **[writeback memory](collectors/proc.plugin/)** - collects the system dirty memory and writeback activity.
+-   **[huge pages](collectors/proc.plugin/)** - collects the amount of RAM used for huge pages.
+-   **[KSM](collectors/proc.plugin/)** - collects info about Kernel Same-page Merging (memory deduplication).
+-   **[Numa](collectors/proc.plugin/)** - collects NUMA info on systems that support it.
+-   **[slab](collectors/proc.plugin/)** - collects info about the Linux kernel memory usage.
 
 #### Disks
-- **[block devices](collectors/proc.plugin/)** - per disk: I/O, operations, backlog, utilization, space, etc.
-- **[BCACHE](collectors/proc.plugin/)** - detailed performance of SSD caching devices.
-- **[DiskSpace](collectors/proc.plugin/)** - monitors disk space usage.
-- **[mdstat](collectors/proc.plugin/)** - software RAID.
-- **[hddtemp](collectors/python.d.plugin/hddtemp/)** - disk temperatures.
-- **[smartd](collectors/python.d.plugin/smartd_log/)** - disk S.M.A.R.T. values.
-- **[device mapper](collectors/proc.plugin/)** - naming disks.
-- **[Veritas Volume Manager](collectors/proc.plugin/)** - naming disks.
-- **[megacli](collectors/python.d.plugin/megacli/)** - adapter, physical drives and battery stats.
-- **[adaptec_raid](collectors/python.d.plugin/adaptec_raid/)** -  logical and physical devices health metrics.
-- **[ioping](collectors/ioping.plugin/)** - to measure disk read/write latency.
+
+-   **[block devices](collectors/proc.plugin/)** - per disk: I/O, operations, backlog, utilization, space, etc.
+-   **[BCACHE](collectors/proc.plugin/)** - detailed performance of SSD caching devices.
+-   **[DiskSpace](collectors/proc.plugin/)** - monitors disk space usage.
+-   **[mdstat](collectors/proc.plugin/)** - software RAID.
+-   **[hddtemp](collectors/python.d.plugin/hddtemp/)** - disk temperatures.
+-   **[smartd](collectors/python.d.plugin/smartd_log/)** - disk S.M.A.R.T. values.
+-   **[device mapper](collectors/proc.plugin/)** - naming disks.
+-   **[Veritas Volume Manager](collectors/proc.plugin/)** - naming disks.
+-   **[megacli](collectors/python.d.plugin/megacli/)** - adapter, physical drives and battery stats.
+-   **[adaptec_raid](collectors/python.d.plugin/adaptec_raid/)** - logical and physical device health metrics.
+-   **[ioping](collectors/ioping.plugin/)** - to measure disk read/write latency.
 
 #### Filesystems
-- **[BTRFS](collectors/proc.plugin/)** - detailed disk space allocation and usage.
-- **[Ceph](collectors/python.d.plugin/ceph/)** - OSD usage, Pool usage, number of objects, etc.
-- **[NFS file servers and clients](collectors/proc.plugin/)** - NFS v2, v3, v4: I/O, cache, read ahead, RPC calls
-- **[Samba](collectors/python.d.plugin/samba/)** - performance metrics of Samba SMB2 file sharing.
-- **[ZFS](collectors/proc.plugin/)** - detailed performance and resource usage.
+
+-   **[BTRFS](collectors/proc.plugin/)** - detailed disk space allocation and usage.
+-   **[Ceph](collectors/python.d.plugin/ceph/)** - OSD usage, Pool usage, number of objects, etc.
+-   **[NFS file servers and clients](collectors/proc.plugin/)** - NFS v2, v3, v4: I/O, cache, read ahead, RPC calls.
+-   **[Samba](collectors/python.d.plugin/samba/)** - performance metrics of Samba SMB2 file sharing.
+-   **[ZFS](collectors/proc.plugin/)** - detailed performance and resource usage.
 
 #### Networking
-- **[Network Stack](collectors/proc.plugin/)** - everything about the networking stack (both IPv4 and IPv6 for all protocols: TCP, UDP, SCTP, UDPLite, ICMP, Multicast, Broadcast, etc), and all network interfaces (per interface: bandwidth, packets, errors, drops).
-- **[Netfilter](collectors/proc.plugin/)** - everything about the netfilter connection tracker.
-- **[SynProxy](collectors/proc.plugin/)** - collects performance data about the linux SYNPROXY (DDoS).
-- **[NFacct](collectors/nfacct.plugin/)** - collects accounting data from iptables.
-- **[Network QoS](collectors/tc.plugin/)** - the only tool that visualizes network `tc` classes in real-time.
-- **[FPing](collectors/fping.plugin/)** - to measure latency and packet loss between any number of hosts.
-- **[ISC dhcpd](collectors/python.d.plugin/isc_dhcpd/)** - pools utilization, leases, etc.
-- **[AP](collectors/charts.d.plugin/ap/)** - collects Linux access point performance data (`hostapd`).
-- **[SNMP](collectors/node.d.plugin/snmp/)** - SNMP devices can be monitored too (although you will need to configure these).
-- **[port_check](collectors/python.d.plugin/portcheck/)** - checks TCP ports for availability and response time.
+
+-   **[Network Stack](collectors/proc.plugin/)** - everything about the networking stack (both IPv4 and IPv6 for all protocols: TCP, UDP, SCTP, UDPLite, ICMP, Multicast, Broadcast, etc), and all network interfaces (per interface: bandwidth, packets, errors, drops).
+-   **[Netfilter](collectors/proc.plugin/)** - everything about the netfilter connection tracker.
+-   **[SynProxy](collectors/proc.plugin/)** - collects performance data about the Linux SYNPROXY (DDoS protection).
+-   **[NFacct](collectors/nfacct.plugin/)** - collects accounting data from iptables.
+-   **[Network QoS](collectors/tc.plugin/)** - the only tool that visualizes network `tc` classes in real-time.
+-   **[FPing](collectors/fping.plugin/)** - to measure latency and packet loss between any number of hosts.
+-   **[ISC dhcpd](collectors/python.d.plugin/isc_dhcpd/)** - pools utilization, leases, etc.
+-   **[AP](collectors/charts.d.plugin/ap/)** - collects Linux access point performance data (`hostapd`).
+-   **[SNMP](collectors/node.d.plugin/snmp/)** - SNMP devices can be monitored too (although you will need to configure these).
+-   **[port_check](collectors/python.d.plugin/portcheck/)** - checks TCP ports for availability and response time.
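
A port check of this kind boils down to a timed TCP connect. A minimal sketch in Python (not the plugin's actual code):

```python
import socket
import time

def check_tcp_port(host, port, timeout=1.0):
    """Attempt a TCP connection; report (reachable, elapsed seconds)."""
    start = time.monotonic()
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True, time.monotonic() - start
    except OSError:
        return False, time.monotonic() - start
```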
 
 #### Virtual Private Networks
-- **[OpenVPN](collectors/python.d.plugin/ovpn_status_log/)** - collects status per tunnel.
-- **[LibreSwan](collectors/charts.d.plugin/libreswan/)** - collects metrics per IPSEC tunnel.
-- **[Tor](collectors/python.d.plugin/tor/)** - collects Tor traffic statistics.
+
+-   **[OpenVPN](collectors/python.d.plugin/ovpn_status_log/)** - collects status per tunnel.
+-   **[LibreSwan](collectors/charts.d.plugin/libreswan/)** - collects metrics per IPSEC tunnel.
+-   **[Tor](collectors/python.d.plugin/tor/)** - collects Tor traffic statistics.
 
 #### Processes
-- **[System Processes](collectors/proc.plugin/)** - running, blocked, forks, active.
-- **[Applications](collectors/apps.plugin/)** - by grouping the process tree and reporting CPU, memory, disk reads, disk writes, swap, threads, pipes, sockets - per process group.
-- **[systemd](collectors/cgroups.plugin/)** - monitors systemd services using CGROUPS.
+
+-   **[System Processes](collectors/proc.plugin/)** - running, blocked, forks, active.
+-   **[Applications](collectors/apps.plugin/)** - by grouping the process tree and reporting CPU, memory, disk reads, disk writes, swap, threads, pipes, sockets - per process group.
+-   **[systemd](collectors/cgroups.plugin/)** - monitors systemd services using CGROUPS.
 
 #### Users
-- **[Users and User Groups resource usage](collectors/apps.plugin/)** - by summarizing the process tree per user and group, reporting: CPU, memory, disk reads, disk writes, swap, threads, pipes, sockets.
-- **[logind](collectors/python.d.plugin/logind/)** - collects sessions, users and seats connected.
+
+-   **[Users and User Groups resource usage](collectors/apps.plugin/)** - by summarizing the process tree per user and group, reporting: CPU, memory, disk reads, disk writes, swap, threads, pipes, sockets.
+-   **[logind](collectors/python.d.plugin/logind/)** - collects sessions, users and seats connected.
 
 #### Containers and VMs
-- **[Containers](collectors/cgroups.plugin/)** - collects resource usage for all kinds of containers, using CGROUPS (systemd-nspawn, lxc, lxd, docker, kubernetes, etc).
-- **[libvirt VMs](collectors/cgroups.plugin/)** - collects resource usage for all kinds of VMs, using CGROUPS.
-- **[dockerd](collectors/python.d.plugin/dockerd/)** - collects docker health metrics.
+
+-   **[Containers](collectors/cgroups.plugin/)** - collects resource usage for all kinds of containers, using CGROUPS (systemd-nspawn, lxc, lxd, docker, kubernetes, etc).
+-   **[libvirt VMs](collectors/cgroups.plugin/)** - collects resource usage for all kinds of VMs, using CGROUPS.
+-   **[dockerd](collectors/python.d.plugin/dockerd/)** - collects docker health metrics.
 
 #### Web Servers
-- **[Apache and lighttpd](collectors/python.d.plugin/apache/)** - `mod-status` (v2.2, v2.4) and cache log statistics, for multiple servers.
-- **[IPFS](collectors/python.d.plugin/ipfs/)** - bandwidth, peers.
-- **[LiteSpeed](collectors/python.d.plugin/litespeed/)** - reads the litespeed rtreport files to collect metrics.
-- **[Nginx](collectors/python.d.plugin/nginx/)** - `stub-status`, for multiple servers.
-- **[Nginx+](collectors/python.d.plugin/nginx_plus/)** - connects to multiple nginx_plus servers (local or remote) to collect real-time performance metrics.
-- **[PHP-FPM](collectors/python.d.plugin/phpfpm/)** - multiple instances, each reporting connections, requests, performance, etc.
-- **[Tomcat](collectors/python.d.plugin/tomcat/)** - accesses, threads, free memory, volume, etc.
-- **[web server `access.log` files](collectors/python.d.plugin/web_log/)** - extracting in real-time, web server and proxy performance metrics and applying several health checks, etc.
-- **[HTTP check](collectors/python.d.plugin/httpcheck/)** - checks one or more web servers for HTTP status code and returned content.
+
+-   **[Apache and lighttpd](collectors/python.d.plugin/apache/)** - `mod-status` (v2.2, v2.4) and cache log statistics, for multiple servers.
+-   **[IPFS](collectors/python.d.plugin/ipfs/)** - bandwidth, peers.
+-   **[LiteSpeed](collectors/python.d.plugin/litespeed/)** - reads the litespeed rtreport files to collect metrics.
+-   **[Nginx](collectors/python.d.plugin/nginx/)** - `stub-status`, for multiple servers.
+-   **[Nginx+](collectors/python.d.plugin/nginx_plus/)** - connects to multiple nginx_plus servers (local or remote) to collect real-time performance metrics.
+-   **[PHP-FPM](collectors/python.d.plugin/phpfpm/)** - multiple instances, each reporting connections, requests, performance, etc.
+-   **[Tomcat](collectors/python.d.plugin/tomcat/)** - accesses, threads, free memory, volume, etc.
+-   **[web server `access.log` files](collectors/python.d.plugin/web_log/)** - extracts web server and proxy performance metrics in real-time and applies several health checks.
+-   **[HTTP check](collectors/python.d.plugin/httpcheck/)** - checks one or more web servers for HTTP status code and returned content.
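
Conceptually, such a check is an HTTP GET plus two assertions: one on the status code, one on the body. A sketch in Python using only the standard library (the parameter names are made up):

```python
from urllib.error import HTTPError, URLError
from urllib.request import urlopen

def http_check(url, expected_status=200, must_contain=None, timeout=2.0):
    """Fetch url; verify the status code and, optionally, the body content."""
    try:
        with urlopen(url, timeout=timeout) as resp:
            status, body = resp.status, resp.read().decode("utf-8", "replace")
    except HTTPError as e:
        status, body = e.code, ""
    except (URLError, OSError):
        return False  # unreachable counts as a failed check
    if status != expected_status:
        return False
    return must_contain is None or must_contain in body
```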
 
 #### Proxies, Balancers, Accelerators
-- **[HAproxy](collectors/python.d.plugin/haproxy/)** - bandwidth, sessions, backends, etc.
-- **[Squid](collectors/python.d.plugin/squid/)** - multiple servers, each showing: clients bandwidth and requests, servers bandwidth and requests.
-- **[Traefik](collectors/python.d.plugin/traefik/)** - connects to multiple traefik instances (local or remote) to collect API metrics (response status code, response time, average response time and server uptime).
-- **[Varnish](collectors/python.d.plugin/varnish/)** - threads, sessions, hits, objects, backends, etc.
-- **[IPVS](collectors/proc.plugin/)** - collects metrics from the Linux IPVS load balancer.
+
+-   **[HAproxy](collectors/python.d.plugin/haproxy/)** - bandwidth, sessions, backends, etc.
+-   **[Squid](collectors/python.d.plugin/squid/)** - multiple servers, each showing: clients bandwidth and requests, servers bandwidth and requests.
+-   **[Traefik](collectors/python.d.plugin/traefik/)** - connects to multiple traefik instances (local or remote) to collect API metrics (response status code, response time, average response time and server uptime).
+-   **[Varnish](collectors/python.d.plugin/varnish/)** - threads, sessions, hits, objects, backends, etc.
+-   **[IPVS](collectors/proc.plugin/)** - collects metrics from the Linux IPVS load balancer.
 
 #### Database Servers
-- **[CouchDB](collectors/python.d.plugin/couchdb/)** - reads/writes, request methods, status codes, tasks, replication, per-db, etc.
-- **[MemCached](collectors/python.d.plugin/memcached/)** - multiple servers, each showing: bandwidth, connections, items, etc.
-- **[MongoDB](collectors/python.d.plugin/mongodb/)** - operations, clients, transactions, cursors, connections, asserts, locks, etc.
-- **[MySQL and mariadb](collectors/python.d.plugin/mysql/)** - multiple servers, each showing: bandwidth, queries/s, handlers, locks, issues, tmp operations, connections, binlog metrics, threads, innodb metrics, and more.
-- **[PostgreSQL](collectors/python.d.plugin/postgres/)** - multiple servers, each showing: per database statistics (connections, tuples read - written - returned, transactions, locks), backend processes, indexes, tables, write ahead, background writer and more.
-- **[Proxy SQL](collectors/python.d.plugin/proxysql/)** - collects Proxy SQL backend and frontend performance metrics.
-- **[Redis](collectors/python.d.plugin/redis/)** - multiple servers, each showing: operations, hit rate, memory, keys, clients, slaves.
-- **[RethinkDB](collectors/python.d.plugin/rethinkdbs/)** - connects to multiple rethinkdb servers (local or remote) to collect real-time metrics.
+
+-   **[CouchDB](collectors/python.d.plugin/couchdb/)** - reads/writes, request methods, status codes, tasks, replication, per-db, etc.
+-   **[MemCached](collectors/python.d.plugin/memcached/)** - multiple servers, each showing: bandwidth, connections, items, etc.
+-   **[MongoDB](collectors/python.d.plugin/mongodb/)** - operations, clients, transactions, cursors, connections, asserts, locks, etc.
+-   **[MySQL and MariaDB](collectors/python.d.plugin/mysql/)** - multiple servers, each showing: bandwidth, queries/s, handlers, locks, issues, tmp operations, connections, binlog metrics, threads, innodb metrics, and more.
+-   **[PostgreSQL](collectors/python.d.plugin/postgres/)** - multiple servers, each showing: per database statistics (connections, tuples read - written - returned, transactions, locks), backend processes, indexes, tables, write ahead, background writer and more.
+-   **[Proxy SQL](collectors/python.d.plugin/proxysql/)** - collects Proxy SQL backend and frontend performance metrics.
+-   **[Redis](collectors/python.d.plugin/redis/)** - multiple servers, each showing: operations, hit rate, memory, keys, clients, slaves.
+-   **[RethinkDB](collectors/python.d.plugin/rethinkdbs/)** - connects to multiple rethinkdb servers (local or remote) to collect real-time metrics.
 
 #### Message Brokers
-- **[beanstalkd](collectors/python.d.plugin/beanstalk/)** - global and per tube monitoring.
-- **[RabbitMQ](collectors/python.d.plugin/rabbitmq/)** - performance and health metrics.
+
+-   **[beanstalkd](collectors/python.d.plugin/beanstalk/)** - global and per tube monitoring.
+-   **[RabbitMQ](collectors/python.d.plugin/rabbitmq/)** - performance and health metrics.
 
 #### Search and Indexing
-- **[ElasticSearch](collectors/python.d.plugin/elasticsearch/)** - search and index performance, latency, timings, cluster statistics, threads statistics, etc.
+
+-   **[ElasticSearch](collectors/python.d.plugin/elasticsearch/)** - search and index performance, latency, timings, cluster statistics, threads statistics, etc.
 
 #### DNS Servers
-- **[bind_rndc](collectors/python.d.plugin/bind_rndc/)** - parses `named.stats` dump file to collect real-time performance metrics. All versions of bind after 9.6 are supported.
-- **[dnsdist](collectors/python.d.plugin/dnsdist/)** - performance and health metrics.
-- **[ISC Bind (named)](collectors/node.d.plugin/named/)** - multiple servers, each showing: clients, requests, queries, updates, failures and several per view metrics. All versions of bind after 9.9.10 are supported.
-- **[NSD](collectors/python.d.plugin/nsd/)** - queries, zones, protocols, query types, transfers, etc.
-- **[PowerDNS](collectors/python.d.plugin/powerdns/)** - queries, answers, cache, latency, etc.
-- **[unbound](collectors/python.d.plugin/unbound/)** - performance and resource usage metrics.
-- **[dns_query_time](collectors/python.d.plugin/dns_query_time/)** - DNS query time statistics.
+
+-   **[bind_rndc](collectors/python.d.plugin/bind_rndc/)** - parses `named.stats` dump file to collect real-time performance metrics. All versions of bind after 9.6 are supported.
+-   **[dnsdist](collectors/python.d.plugin/dnsdist/)** - performance and health metrics.
+-   **[ISC Bind (named)](collectors/node.d.plugin/named/)** - multiple servers, each showing: clients, requests, queries, updates, failures and several per view metrics. All versions of bind after 9.9.10 are supported.
+-   **[NSD](collectors/python.d.plugin/nsd/)** - queries, zones, protocols, query types, transfers, etc.
+-   **[PowerDNS](collectors/python.d.plugin/powerdns/)** - queries, answers, cache, latency, etc.
+-   **[unbound](collectors/python.d.plugin/unbound/)** - performance and resource usage metrics.
+-   **[dns_query_time](collectors/python.d.plugin/dns_query_time/)** - DNS query time statistics.
 
 #### Time Servers
-- **[chrony](collectors/python.d.plugin/chrony/)** - uses the `chronyc` command to collect chrony statistics (Frequency, Last offset, RMS offset, Residual freq, Root delay, Root dispersion, Skew, System time).
-- **[ntpd](collectors/python.d.plugin/ntpd/)** - connects to multiple ntpd servers (local or remote) to provide statistics of system variables and optional also peer variables.
+
+-   **[chrony](collectors/python.d.plugin/chrony/)** - uses the `chronyc` command to collect chrony statistics (Frequency, Last offset, RMS offset, Residual freq, Root delay, Root dispersion, Skew, System time).
+-   **[ntpd](collectors/python.d.plugin/ntpd/)** - connects to multiple ntpd servers (local or remote) to provide statistics of system variables and, optionally, also peer variables.
 
 #### Mail Servers
-- **[Dovecot](collectors/python.d.plugin/dovecot/)** - POP3/IMAP servers.
-- **[Exim](collectors/python.d.plugin/exim/)** - message queue (emails queued).
-- **[Postfix](collectors/python.d.plugin/postfix/)** - message queue (entries, size).
+
+-   **[Dovecot](collectors/python.d.plugin/dovecot/)** - POP3/IMAP servers.
+-   **[Exim](collectors/python.d.plugin/exim/)** - message queue (emails queued).
+-   **[Postfix](collectors/python.d.plugin/postfix/)** - message queue (entries, size).
 
 #### Hardware Sensors
-- **[IPMI](collectors/freeipmi.plugin/)** - enterprise hardware sensors and events.
-- **[lm-sensors](collectors/python.d.plugin/sensors/)** - temperature, voltage, fans, power, humidity, etc.
-- **[Nvidia](collectors/python.d.plugin/nvidia_smi/)** - collects information for Nvidia GPUs.
-- **[RPi](collectors/charts.d.plugin/sensors/)** - Raspberry Pi temperature sensors.
-- **[w1sensor](collectors/python.d.plugin/w1sensor/)** - collects data from connected 1-Wire sensors.
+
+-   **[IPMI](collectors/freeipmi.plugin/)** - enterprise hardware sensors and events.
+-   **[lm-sensors](collectors/python.d.plugin/sensors/)** - temperature, voltage, fans, power, humidity, etc.
+-   **[Nvidia](collectors/python.d.plugin/nvidia_smi/)** - collects information for Nvidia GPUs.
+-   **[RPi](collectors/charts.d.plugin/sensors/)** - Raspberry Pi temperature sensors.
+-   **[w1sensor](collectors/python.d.plugin/w1sensor/)** - collects data from connected 1-Wire sensors.
 
 #### UPSes
-- **[apcupsd](collectors/charts.d.plugin/apcupsd/)** - load, charge, battery voltage, temperature, utility metrics, output metrics.
-- **[NUT](collectors/charts.d.plugin/nut/)** - load, charge, battery voltage, temperature, utility metrics, output metrics.
-- **[Linux Power Supply](collectors/proc.plugin/)** - collects metrics reported by power supply drivers on Linux.
+
+-   **[apcupsd](collectors/charts.d.plugin/apcupsd/)** - load, charge, battery voltage, temperature, utility metrics, output metrics.
+-   **[NUT](collectors/charts.d.plugin/nut/)** - load, charge, battery voltage, temperature, utility metrics, output metrics.
+-   **[Linux Power Supply](collectors/proc.plugin/)** - collects metrics reported by power supply drivers on Linux.
 
 #### Social Sharing Servers
-- **[RetroShare](collectors/python.d.plugin/retroshare/)** - connects to multiple retroshare servers (local or remote) to collect real-time performance metrics.
+
+-   **[RetroShare](collectors/python.d.plugin/retroshare/)** - connects to multiple retroshare servers (local or remote) to collect real-time performance metrics.
 
 #### Security
-- **[Fail2Ban](collectors/python.d.plugin/fail2ban/)** - monitors the fail2ban log file to check all bans for all active jails.
+
+-   **[Fail2Ban](collectors/python.d.plugin/fail2ban/)** - monitors the fail2ban log file to check all bans for all active jails.
 
 #### Authentication, Authorization, Accounting (AAA, RADIUS, LDAP) Servers
-- **[FreeRadius](collectors/python.d.plugin/freeradius/)** - uses the `radclient` command to provide freeradius statistics (authentication, accounting, proxy-authentication, proxy-accounting).
+
+-   **[FreeRadius](collectors/python.d.plugin/freeradius/)** - uses the `radclient` command to provide freeradius statistics (authentication, accounting, proxy-authentication, proxy-accounting).
 
 #### Telephony Servers
-- **[opensips](collectors/charts.d.plugin/opensips/)** - connects to an opensips server (localhost only) to collect real-time performance metrics.
+
+-   **[opensips](collectors/charts.d.plugin/opensips/)** - connects to an opensips server (localhost only) to collect real-time performance metrics.
 
 #### Household Appliances
-- **[SMA webbox](collectors/node.d.plugin/sma_webbox/)** - connects to multiple remote SMA webboxes to collect real-time performance metrics of the photovoltaic (solar) power generation.
-- **[Fronius](collectors/node.d.plugin/fronius/)** - connects to multiple remote Fronius Symo servers to collect real-time performance metrics of the photovoltaic (solar) power generation.
-- **[StiebelEltron](collectors/node.d.plugin/stiebeleltron/)** - collects the temperatures and other metrics from your Stiebel Eltron heating system using their Internet Service Gateway (ISG web).
+
+-   **[SMA webbox](collectors/node.d.plugin/sma_webbox/)** - connects to multiple remote SMA webboxes to collect real-time performance metrics of the photovoltaic (solar) power generation.
+-   **[Fronius](collectors/node.d.plugin/fronius/)** - connects to multiple remote Fronius Symo servers to collect real-time performance metrics of the photovoltaic (solar) power generation.
+-   **[StiebelEltron](collectors/node.d.plugin/stiebeleltron/)** - collects the temperatures and other metrics from your Stiebel Eltron heating system using their Internet Service Gateway (ISG web).
 
 #### Game Servers
-- **[SpigotMC](collectors/python.d.plugin/spigotmc/)** - monitors Spigot Minecraft server ticks per second and number of online players using the Minecraft remote console.
+
+-   **[SpigotMC](collectors/python.d.plugin/spigotmc/)** - monitors Spigot Minecraft server ticks per second and number of online players using the Minecraft remote console.
 
 #### Distributed Computing
-- **[BOINC](collectors/python.d.plugin/boinc/)** - monitors task states for local and remote BOINC client software using the remote GUI RPC interface. Also provides alarms for a handful of error conditions.
+
+-   **[BOINC](collectors/python.d.plugin/boinc/)** - monitors task states for local and remote BOINC client software using the remote GUI RPC interface. Also provides alarms for a handful of error conditions.
 
 #### Media Streaming Servers
-- **[IceCast](collectors/python.d.plugin/icecast/)** - collects the number of listeners for active sources.
+
+-   **[IceCast](collectors/python.d.plugin/icecast/)** - collects the number of listeners for active sources.
 
 ### Monitoring Systems
-- **[Monit](collectors/python.d.plugin/monit/)** - collects metrics about monit targets (filesystems, applications, networks).
+
+-   **[Monit](collectors/python.d.plugin/monit/)** - collects metrics about monit targets (filesystems, applications, networks).
 
 #### Provisioning Systems
-- **[Puppet](collectors/python.d.plugin/puppet/)** - connects to multiple Puppet Server and Puppet DB instances (local or remote) to collect real-time status metrics.
+
+-   **[Puppet](collectors/python.d.plugin/puppet/)** - connects to multiple Puppet Server and Puppet DB instances (local or remote) to collect real-time status metrics.
 
 You can easily extend Netdata by writing plugins that collect data from any source, using any computer language.
 
@@ -563,23 +596,23 @@ You can easily extend Netdata, by writing plugins that collect data from any sou
 
 ## Documentation
 
-The Netdata documentation is at [https://docs.netdata.cloud](https://docs.netdata.cloud). But you can also find it inside the repo, so by just navigating the repo on github you can find all the documentation.
+The Netdata documentation is at <https://docs.netdata.cloud>. It is also available inside the repo, so you can browse all of it by simply navigating the repository on GitHub.
 
 Here is a quick list:
 
-Directory|Description
-:---|:---
-[`installer`](packaging/installer/)|Instructions to install Netdata on your systems.
-[`docker`](packaging/docker/)|Instructions to install Netdata using docker.
-[`daemon`](daemon/)|Information about the Netdata daemon and its configuration.
-[`collectors`](collectors/)|Information about data collection plugins.
-[`health`](health/)|How Netdata's health monitoring works, how to create your own alarms and how to configure alarm notification methods.
-[`streaming`](streaming/)|How to build hierarchies of Netdata servers, by streaming metrics between them.
-[`backends`](backends/)|Long term archiving of metrics to industry standard time-series databases, like `prometheus`, `graphite`, `opentsdb`.
-[`web/api`](web/api/)|Learn how to query the Netdata API and the queries it supports.
-[`web/api/badges`](web/api/badges/)|Learn how to generate badges (SVG images) from live data.
-[`web/gui/custom`](web/gui/custom/)|Learn how to create custom Netdata dashboards.
-[`web/gui/confluence`](web/gui/confluence/)|Learn how to create Netdata dashboards on Atlassian's Confluence.
+|Directory|Description|
+|:--------|:----------|
+|[`installer`](packaging/installer/)|Instructions to install Netdata on your systems.|
+|[`docker`](packaging/docker/)|Instructions to install Netdata using docker.|
+|[`daemon`](daemon/)|Information about the Netdata daemon and its configuration.|
+|[`collectors`](collectors/)|Information about data collection plugins.|
+|[`health`](health/)|How Netdata's health monitoring works, how to create your own alarms and how to configure alarm notification methods.|
+|[`streaming`](streaming/)|How to build hierarchies of Netdata servers, by streaming metrics between them.|
+|[`backends`](backends/)|Long term archiving of metrics to industry standard time-series databases, like `prometheus`, `graphite`, `opentsdb`.|
+|[`web/api`](web/api/)|Learn how to query the Netdata API and the queries it supports.|
+|[`web/api/badges`](web/api/badges/)|Learn how to generate badges (SVG images) from live data.|
+|[`web/gui/custom`](web/gui/custom/)|Learn how to create custom Netdata dashboards.|
+|[`web/gui/confluence`](web/gui/confluence/)|Learn how to create Netdata dashboards on Atlassian's Confluence.|
 
 You can also check all the other directories. Most of them have plenty of documentation.
 
@@ -591,11 +624,11 @@ To report bugs, or get help, use [GitHub Issues](https://github.com/netdata/netd
 
 You can also find Netdata on:
 
-- [Facebook](https://www.facebook.com/linuxnetdata/)
-- [Twitter](https://twitter.com/linuxnetdata)
-- [OpenHub](https://www.openhub.net/p/netdata)
-- [Repology](https://repology.org/metapackage/netdata/versions)
-- [StackShare](https://stackshare.io/netdata)
+-   [Facebook](https://www.facebook.com/linuxnetdata/)
+-   [Twitter](https://twitter.com/linuxnetdata)
+-   [OpenHub](https://www.openhub.net/p/netdata)
+-   [Repology](https://repology.org/metapackage/netdata/versions)
+-   [StackShare](https://stackshare.io/netdata)
 
 ## License
 
@@ -607,7 +640,7 @@ Netdata re-distributes other open-source tools and libraries. Please check the [
 
 Yes.
 
-*When people first hear about a new product, they frequently ask if it is any good. A Hacker News user [remarked](https://news.ycombinator.com/item?id=3067434):*
+_When people first hear about a new product, they frequently ask if it is any good. A Hacker News user [remarked](https://news.ycombinator.com/item?id=3067434):_
 
 > Note to self: Starting immediately, all raganwald projects will have a “Is it any good?” section in the readme, and the answer shall be “yes.".
 

+ 97 - 126
REDISTRIBUTED.md

@@ -13,191 +13,162 @@ We have decided to redistribute all these, instead of using them
 through a CDN, to allow Netdata to work in cases where Internet
 connectivity is not available.
 
-- [Dygraphs](http://dygraphs.com/)
+-   [Dygraphs](http://dygraphs.com/)
 
-    Copyright 2009, Dan Vanderkam
-    [MIT License](http://dygraphs.com/legal.html)
+      Copyright 2009, Dan Vanderkam
+      [MIT License](http://dygraphs.com/legal.html)
 
+-   [Easy Pie Chart](https://rendro.github.io/easy-pie-chart/)
 
-- [Easy Pie Chart](https://rendro.github.io/easy-pie-chart/)
+      Copyright 2013, Robert Fleischmann
+      [MIT License](https://github.com/rendro/easy-pie-chart/blob/master/LICENSE)
 
-    Copyright 2013, Robert Fleischmann
-    [MIT License](https://github.com/rendro/easy-pie-chart/blob/master/LICENSE)
+-   [Gauge.js](http://bernii.github.io/gauge.js/)
 
+      Copyright, Bernard Kobos
+      [MIT License](https://github.com/getgauge/gauge-js/blob/master/LICENSE)
 
-- [Gauge.js](http://bernii.github.io/gauge.js/)
+-   [d3pie](https://github.com/benkeen/d3pie)
 
-    Copyright, Bernard Kobos
-    [MIT License](https://github.com/getgauge/gauge-js/blob/master/LICENSE)
+      Copyright (c) 2014-2015 Benjamin Keen
+      [MIT License](https://github.com/benkeen/d3pie/blob/master/LICENSE)
 
+-   [jQuery Sparklines](http://omnipotent.net/jquery.sparkline/)
 
-- [d3pie](https://github.com/benkeen/d3pie)
+      Copyright 2009-2012, Splunk Inc.
+      [New BSD License](http://opensource.org/licenses/BSD-3-Clause)
 
-    Copyright (c) 2014-2015 Benjamin Keen
-    [MIT License](https://github.com/benkeen/d3pie/blob/master/LICENSE)
+-   [Peity](http://benpickles.github.io/peity/)
 
+      Copyright 2009-2015, Ben Pickles
+      [MIT License](https://github.com/benpickles/peity/blob/master/LICENCE)
 
-- [jQuery Sparklines](http://omnipotent.net/jquery.sparkline/)
+-   [morris.js](http://morrisjs.github.io/morris.js/)
 
-    Copyright 2009-2012, Splunk Inc.
-    [New BSD License](http://opensource.org/licenses/BSD-3-Clause)
+      Copyright 2013, Olly Smith
+      [Simplified BSD License](http://morrisjs.github.io/morris.js/)
 
+-   [Raphaël](http://dmitrybaranovskiy.github.io/raphael/)
 
-- [Peity](http://benpickles.github.io/peity/)
+      Copyright 2008, Dmitry Baranovskiy
+      [MIT License](http://dmitrybaranovskiy.github.io/raphael/license.html)
 
-    Copyright 2009-2015, Ben Pickles
-    [MIT License](https://github.com/benpickles/peity/blob/master/LICENCE)
+-   [C3](http://c3js.org/)
 
+      Copyright 2013, Masayuki Tanaka
+      [MIT License](https://github.com/masayuki0812/c3/blob/master/LICENSE)
 
-- [morris.js](http://morrisjs.github.io/morris.js/)
+-   [D3](http://d3js.org/)
 
-    Copyright 2013, Olly Smith
-    [Simplified BSD License](http://morrisjs.github.io/morris.js/)
+      Copyright 2015, Mike Bostock
+      [BSD License](http://opensource.org/licenses/BSD-3-Clause)
 
+-   [jQuery](https://jquery.org/)
 
-- [Raphaël](http://dmitrybaranovskiy.github.io/raphael/)
+      Copyright 2015, jQuery Foundation
+      [MIT License](https://jquery.org/license/)
 
-    Copyright 2008, Dmitry Baranovskiy
-    [MIT License](http://dmitrybaranovskiy.github.io/raphael/license.html)
+-   [Bootstrap](http://getbootstrap.com/getting-started/)
 
+      Copyright 2015, Twitter
+      [MIT License](https://github.com/twbs/bootstrap/blob/v4-dev/LICENSE)
 
-- [C3](http://c3js.org/)
+-   [Bootstrap Toggle](http://www.bootstraptoggle.com/)
 
-    Copyright 2013, Masayuki Tanaka
-    [MIT License](https://github.com/masayuki0812/c3/blob/master/LICENSE)
+      Copyright (c) 2011-2014 Min Hur, The New York Times Company
+      [MIT License](https://github.com/minhur/bootstrap-toggle/blob/master/LICENSE)
 
+-   [Bootstrap-slider](http://seiyria.com/bootstrap-slider/)
 
-- [D3](http://d3js.org/)
+      Copyright 2017 Kyle Kemp, Rohit Kalkur, and contributors
+      [MIT License](https://github.com/seiyria/bootstrap-slider/blob/master/LICENSE.md)
 
-    Copyright 2015, Mike Bostock
-    [BSD License](http://opensource.org/licenses/BSD-3-Clause)
+-   [bootstrap-table](http://bootstrap-table.wenzhixin.net.cn/)
 
+      Copyright (c) 2012-2016 Zhixin Wen [wenzhixin2010@gmail.com](mailto:wenzhixin2010@gmail.com)
+      [MIT License](https://github.com/wenzhixin/bootstrap-table/blob/master/LICENSE)
 
-- [jQuery](https://jquery.org/)
+-   [tableExport.jquery.plugin](https://github.com/hhurz/tableExport.jquery.plugin)
 
-    Copyright 2015, jQuery Foundation
-    [MIT License](https://jquery.org/license/)
+      Copyright (c) 2015,2016 hhurz
+      [MIT License](https://github.com/hhurz/tableExport.jquery.plugin/blob/master/LICENSE)
 
+-   [perfect-scrollbar](https://github.com/noraesae/perfect-scrollbar)
 
-- [Bootstrap](http://getbootstrap.com/getting-started/)
+      Copyright 2016, Hyunje Alex Jun and other contributors
+      [MIT License](https://github.com/noraesae/perfect-scrollbar/blob/master/LICENSE)
 
-    Copyright 2015, Twitter
-    [MIT License](https://github.com/twbs/bootstrap/blob/v4-dev/LICENSE)
+-   [FontAwesome](https://fortawesome.github.io/Font-Awesome/)
 
+      Created by Dave Gandy
+      Font license: [SIL OFL 1.1](http://scripts.sil.org/OFL)
+      Icon license [Creative Commons Attribution 4.0 (CC-BY 4.0)](https://creativecommons.org/licenses/by/4.0/)
+      Code license: [MIT License](http://opensource.org/licenses/mit-license.html)
 
-- [Bootstrap Toggle](http://www.bootstraptoggle.com/)
+-   [node-extend](https://github.com/justmoon/node-extend)
 
-    Copyright (c) 2011-2014 Min Hur, The New York Times Company
-    [MIT License](https://github.com/minhur/bootstrap-toggle/blob/master/LICENSE)
+      Copyright 2014, Stefan Thomas
+      [MIT License](https://github.com/justmoon/node-extend/blob/master/LICENSE)
 
+-   [node-net-snmp](https://github.com/stephenwvickers/node-net-snmp)
 
-- [Bootstrap-slider](http://seiyria.com/bootstrap-slider/)
+      Copyright 2013, Stephen Vickers
+      [MIT License](https://github.com/nospaceships/node-net-snmp#license)
 
-    Copyright 2017 Kyle Kemp, Rohit Kalkur, and contributors
-    [MIT License](https://github.com/seiyria/bootstrap-slider/blob/master/LICENSE.md)
+-   [node-asn1-ber](https://github.com/stephenwvickers/node-asn1-ber)
 
+      Copyright 2017, Stephen Vickers
+      Copyright 2011, Mark Cavage
+      [MIT License](https://github.com/nospaceships/node-asn1-ber#license)
 
-- [bootstrap-table](http://bootstrap-table.wenzhixin.net.cn/)
+-   [pixl-xml](https://github.com/jhuckaby/pixl-xml)
 
-    Copyright (c) 2012-2016 Zhixin Wen <wenzhixin2010@gmail.com>
-    [MIT License](https://github.com/wenzhixin/bootstrap-table/blob/master/LICENSE)
+      Copyright 2015, Joseph Huckaby
+      [MIT License](https://github.com/jhuckaby/pixl-xml#license)
 
+-   [sensors](https://github.com/paroj/sensors.py)
 
-- [tableExport.jquery.plugin](https://github.com/hhurz/tableExport.jquery.plugin)
+      Copyright 2014, Pavel Rojtberg
+      [LGPL 2.1 License](http://opensource.org/licenses/LGPL-2.1)
 
-    Copyright (c) 2015,2016 hhurz
-    [MIT License](https://github.com/hhurz/tableExport.jquery.plugin/blob/master/LICENSE)
+-   [PyYAML](https://github.com/yaml/pyyaml)
 
+      Copyright 2006, Kirill Simonov
+      [MIT License](https://github.com/yaml/pyyaml/blob/master/LICENSE)
 
-- [perfect-scrollbar](https://jamesflorentino.github.io/nanoScrollerJS/)
+-   [urllib3](https://github.com/shazow/urllib3)
 
-    Copyright 2016, Hyunje Alex Jun and other contributors
-    [MIT License](https://github.com/noraesae/perfect-scrollbar/blob/master/LICENSE)
+      Copyright 2008-2016 Andrey Petrov and [contributors](https://github.com/shazow/urllib3/blob/master/CONTRIBUTORS.txt)
+      [MIT License](https://github.com/shazow/urllib3/blob/master/LICENSE.txt)
 
+-   [lz-string](http://pieroxy.net/blog/pages/lz-string/index.html)
 
-- [FontAwesome](https://fortawesome.github.io/Font-Awesome/)
+      Copyright 2013 Pieroxy
+      [WTFPL License](http://pieroxy.net/blog/pages/lz-string/index.html#inline_menu_10)
 
-    Created by Dave Gandy
-    Font license: [SIL OFL 1.1](http://scripts.sil.org/OFL)
-    Icon license [Creative Commons Attribution 4.0 (CC-BY 4.0)](https://creativecommons.org/licenses/by/4.0/)
-    Code license: [MIT License](http://opensource.org/licenses/mit-license.html)
+-   [pako](http://nodeca.github.io/pako/)
 
+      Copyright 2014-2017 Vitaly Puzrin and Andrei Tuputcyn
+      [MIT License](https://github.com/nodeca/pako/blob/master/LICENSE)
 
-- [node-extend](https://github.com/justmoon/node-extend)
+-   [clipboard-polyfill](https://github.com/lgarron/clipboard-polyfill)
 
-    Copyright 2014, Stefan Thomas
-    [MIT License](https://github.com/justmoon/node-extend/blob/master/LICENSE)
+      Copyright (c) 2014 Lucas Garron
+      [MIT License](https://github.com/lgarron/clipboard-polyfill/blob/master/LICENSE.md)
 
+-   [Utilities for writing code that runs on Python 2 and 3](collectors/python.d.plugin/python_modules/urllib3/packages/six.py)
 
-- [node-net-snmp](https://github.com/stephenwvickers/node-net-snmp)
+      Copyright (c) 2010-2015 Benjamin Peterson
+      [MIT License](https://github.com/benjaminp/six/blob/master/LICENSE)
 
-    Copyright 2013, Stephen Vickers
-    [MIT License](https://github.com/nospaceships/node-net-snmp#license)
+-   [mcrcon](https://github.com/barneygale/MCRcon)
 
+      Copyright (C) 2015 Barnaby Gale
+      [MIT License](https://raw.githubusercontent.com/barneygale/MCRcon/master/COPYING.txt)
 
-- [node-asn1-ber](https://github.com/stephenwvickers/node-asn1-ber)
+-   [monotonic](https://github.com/atdt/monotonic)
 
-    Copyright 2017, Stephen Vickers
-    Copyright 2011, Mark Cavage
-    [MIT License](https://github.com/nospaceships/node-asn1-ber#license)
+      Copyright 2014, 2015, 2016 Ori Livneh [ori@wikimedia.org](mailto:ori@wikimedia.org)
+      [Apache-2.0](http://www.apache.org/licenses/LICENSE-2.0)
 
-
-- [pixl-xml](https://github.com/jhuckaby/pixl-xml)
-
-    Copyright 2015, Joseph Huckaby
-    [MIT License](https://github.com/jhuckaby/pixl-xml#license)
-
-
-- [sensors](https://github.com/paroj/sensors.py)
-
-    Copyright 2014, Pavel Rojtberg
-    [LGPL 2.1 License](http://opensource.org/licenses/LGPL-2.1)
-
-
-- [PyYAML](https://bitbucket.org/blackjack/pysensors)
-
-    Copyright 2006, Kirill Simonov
-    [MIT License](https://github.com/yaml/pyyaml/blob/master/LICENSE)
-
-
-- [urllib3](https://github.com/shazow/urllib3)
-
-    Copyright 2008-2016 Andrey Petrov and [contributors](https://github.com/shazow/urllib3/blob/master/CONTRIBUTORS.txt)
-    [MIT License](https://github.com/shazow/urllib3/blob/master/LICENSE.txt)
-
-
-- [lz-string](http://pieroxy.net/blog/pages/lz-string/index.html)
-
-    Copyright 2013 Pieroxy
-    [WTFPL License](http://pieroxy.net/blog/pages/lz-string/index.html#inline_menu_10)
-
-
-- [pako](http://nodeca.github.io/pako/)
-
-    Copyright 2014-2017 Vitaly Puzrin and Andrei Tuputcyn
-    [MIT License](https://github.com/nodeca/pako/blob/master/LICENSE)
-
-
-- [clipboard-polyfill](https://github.com/lgarron/clipboard-polyfill)
-
-    Copyright (c) 2014 Lucas Garron
-    [MIT License](https://github.com/lgarron/clipboard-polyfill/blob/master/LICENSE.md)
-
-
-- [Utilities for writing code that runs on Python 2 and 3](collectors/python.d.plugin/python_modules/urllib3/packages/six.py)
-
-    Copyright (c) 2010-2015 Benjamin Peterson
-    [MIT License](https://github.com/benjaminp/six/blob/master/LICENSE)
-
-
-- [mcrcon](https://github.com/barneygale/MCRcon)
-
-    Copyright (C) 2015 Barnaby Gale
-    [MIT License](https://raw.githubusercontent.com/barneygale/MCRcon/master/COPYING.txt)
-
-- [monotonic](https://github.com/atdt/monotonic)
-
-    Copyright 2014, 2015, 2016 Ori Livneh <ori@wikimedia.org>
-    [Apache-2.0](http://www.apache.org/licenses/LICENSE-2.0)
-
-[![analytics](https://www.google-analytics.com/collect?v=1&aip=1&t=pageview&_s=1&ds=github&dr=https%3A%2F%2Fgithub.com%2Fnetdata%2Fnetdata&dl=https%3A%2F%2Fmy-netdata.io%2Fgithub%2FREDISTRIBUTED&_u=MAC~&cid=5792dfd7-8dc4-476b-af31-da2fdb9f93d2&tid=UA-64295674-3)]()
+[![analytics](https://www.google-analytics.com/collect?v=1&aip=1&t=pageview&_s=1&ds=github&dr=https%3A%2F%2Fgithub.com%2Fnetdata%2Fnetdata&dl=https%3A%2F%2Fmy-netdata.io%2Fgithub%2FREDISTRIBUTED&_u=MAC~&cid=5792dfd7-8dc4-476b-af31-da2fdb9f93d2&tid=UA-64295674-3)](<>)

+ 10 - 10
SECURITY.md

@@ -2,9 +2,9 @@
 
 ## Supported Versions
 
-| Version | Supported          |
-| ------- | ------------------ |
-| Latest  | Yes                |
+| Version | Supported |
+| ------- | --------- |
+| Latest  | Yes       |
 
 ## Reporting a Vulnerability
 
@@ -14,15 +14,15 @@ To make a report, please create a post [here](https://groups.google.com/a/netdat
 
 ### When Should I Report a Vulnerability?
 
-- You think you discovered a potential security vulnerability in Netdata
-- You are unsure how a vulnerability affects Netdata
-- You think you discovered a vulnerability in another project that Netdata depends on (e.g. python, node, etc)
+-   You think you discovered a potential security vulnerability in Netdata
+-   You are unsure how a vulnerability affects Netdata
+-   You think you discovered a vulnerability in another project that Netdata depends on (e.g. python, node, etc)
 
 ### When Should I NOT Report a Vulnerability?
 
-- You need help tuning Netdata for security
-- You need help applying security related updates
-- Your issue is not security related
+-   You need help tuning Netdata for security
+-   You need help applying security related updates
+-   Your issue is not security related
 
 ### Security Vulnerability Response
 
@@ -40,4 +40,4 @@ A public disclosure date is negotiated by the Netdata team and the bug submitter
 
 Every time a security issue is fixed in Netdata, we immediately release a new version of it. So, to get notified of all security incidents, please subscribe to our releases on github.
 
-[![analytics](https://www.google-analytics.com/collect?v=1&aip=1&t=pageview&_s=1&ds=github&dr=https%3A%2F%2Fgithub.com%2Fnetdata%2Fnetdata&dl=https%3A%2F%2Fmy-netdata.io%2Fgithub%2Fdocs%2FSECURITY&_u=MAC~&cid=5792dfd7-8dc4-476b-af31-da2fdb9f93d2&tid=UA-64295674-3)]()
+[![analytics](https://www.google-analytics.com/collect?v=1&aip=1&t=pageview&_s=1&ds=github&dr=https%3A%2F%2Fgithub.com%2Fnetdata%2Fnetdata&dl=https%3A%2F%2Fmy-netdata.io%2Fgithub%2Fdocs%2FSECURITY&_u=MAC~&cid=5792dfd7-8dc4-476b-af31-da2fdb9f93d2&tid=UA-64295674-3)](<>)

+ 99 - 100
backends/README.md

@@ -14,65 +14,65 @@ X seconds (though, it can send them per second if you need it to).
 
 ## features
 
-1. Supported backends
+1.  Supported backends
 
-   - **graphite** (`plaintext interface`, used by **Graphite**, **InfluxDB**, **KairosDB**,
-     **Blueflood**, **ElasticSearch** via logstash tcp input and the graphite codec, etc)
+    -   **graphite** (`plaintext interface`, used by **Graphite**, **InfluxDB**, **KairosDB**,
+        **Blueflood**, **ElasticSearch** via logstash tcp input and the graphite codec, etc)
 
-     metrics are sent to the backend server as `prefix.hostname.chart.dimension`. `prefix` is
-     configured below, `hostname` is the hostname of the machine (can also be configured).
+        metrics are sent to the backend server as `prefix.hostname.chart.dimension`. `prefix` is
+        configured below, `hostname` is the hostname of the machine (can also be configured).
 
-   - **opentsdb** (`telnet or HTTP interfaces`, used by **OpenTSDB**, **InfluxDB**, **KairosDB**, etc)
+    -   **opentsdb** (`telnet or HTTP interfaces`, used by **OpenTSDB**, **InfluxDB**, **KairosDB**, etc)
 
-     metrics are sent to opentsdb as `prefix.chart.dimension` with tag `host=hostname`.
+        metrics are sent to opentsdb as `prefix.chart.dimension` with tag `host=hostname`.
 
-   - **json** document DBs
+    -   **json** document DBs
 
-     metrics are sent to a document db, `JSON` formatted.
+        metrics are sent to a document db, `JSON` formatted.
 
-   - **prometheus** is described at [prometheus page](prometheus/) since it pulls data from Netdata.
+    -   **prometheus** is described at [prometheus page](prometheus/) since it pulls data from Netdata.
 
-   - **prometheus remote write** (a binary snappy-compressed protocol buffer encoding over HTTP used by
-     **Elasticsearch**, **Gnocchi**, **Graphite**, **InfluxDB**, **Kafka**, **OpenTSDB**,
-     **PostgreSQL/TimescaleDB**, **Splunk**, **VictoriaMetrics**,
-     and a lot of other [storage providers](https://prometheus.io/docs/operating/integrations/#remote-endpoints-and-storage))
+    -   **prometheus remote write** (a binary snappy-compressed protocol buffer encoding over HTTP used by
+        **Elasticsearch**, **Gnocchi**, **Graphite**, **InfluxDB**, **Kafka**, **OpenTSDB**,
+        **PostgreSQL/TimescaleDB**, **Splunk**, **VictoriaMetrics**,
+        and a lot of other [storage providers](https://prometheus.io/docs/operating/integrations/#remote-endpoints-and-storage))
 
-     metrics are labeled in the format, which is used by Netdata for the [plaintext prometheus protocol](prometheus/).
-     Notes on using the remote write backend are [here](prometheus/remote_write/).
+        metrics are labeled in the format, which is used by Netdata for the [plaintext prometheus protocol](prometheus/).
+        Notes on using the remote write backend are [here](prometheus/remote_write/).
 
-   - **AWS Kinesis Data Streams**
+    -   **AWS Kinesis Data Streams**
 
-     metrics are sent to the service in `JSON` format.
+        metrics are sent to the service in `JSON` format.
 
-   - **MongoDB**
+    -   **MongoDB**
 
-     metrics are sent to the database in `JSON` format.
+        metrics are sent to the database in `JSON` format.
 
-2. Only one backend may be active at a time.
+2.  Only one backend may be active at a time.
 
-3. Netdata can filter metrics (at the chart level), to send only a subset of the collected metrics.
+3.  Netdata can filter metrics (at the chart level), to send only a subset of the collected metrics.
 
-4. Netdata supports three modes of operation for all backends:
+4.  Netdata supports three modes of operation for all backends:
 
-   - `as-collected` sends to backends the metrics as they are collected, in the units they are collected.
-   So, counters are sent as counters and gauges are sent as gauges, much like all data collectors do.
-   For example, to calculate CPU utilization in this format, you need to know how to convert kernel ticks to percentage.
+    -   `as-collected` sends to backends the metrics as they are collected, in the units they are collected.
+        So, counters are sent as counters and gauges are sent as gauges, much like all data collectors do.
+        For example, to calculate CPU utilization in this format, you need to know how to convert kernel ticks to percentage.
 
-   - `average` sends to backends normalized metrics from the Netdata database.
-   In this mode, all metrics are sent as gauges, in the units Netdata uses. This abstracts data collection
-   and simplifies visualization, but you will not be able to copy and paste queries from other sources to convert units.
-   For example, CPU utilization percentage is calculated by Netdata, so Netdata will convert ticks to percentage and
-   send the average percentage to the backend.
+    -   `average` sends to backends normalized metrics from the Netdata database.
+        In this mode, all metrics are sent as gauges, in the units Netdata uses. This abstracts data collection
+        and simplifies visualization, but you will not be able to copy and paste queries from other sources to convert units.
+        For example, CPU utilization percentage is calculated by Netdata, so Netdata will convert ticks to percentage and
+        send the average percentage to the backend.
 
-   - `sum` or `volume`: the sum of the interpolated values shown on the Netdata graphs is sent to the backend.
-   So, if Netdata is configured to send data to the backend every 10 seconds, the sum of the 10 values shown on the
-   Netdata charts will be used.
+    -   `sum` or `volume`: the sum of the interpolated values shown on the Netdata graphs is sent to the backend.
+        So, if Netdata is configured to send data to the backend every 10 seconds, the sum of the 10 values shown on the
+        Netdata charts will be used.
 
 Time-series databases suggest collecting the raw values (`as-collected`). If you plan to invest in building your monitoring around a time-series database and you already know (or will invest in learning) how to convert units and normalize metrics in Grafana or other visualization tools, we suggest using `as-collected`.
 
 If, on the other hand, you just need long-term archiving of Netdata metrics and plan to work mainly with Netdata, we suggest using `average`. It decouples visualization from data collection, so it will generally be a lot simpler. Furthermore, if you use `average`, the charts shown in the back-end will match exactly what you see in Netdata, which is not necessarily true for the other modes of operation.
 
-5. This code is smart enough, not to slow down Netdata, independently of the speed of the backend server.
+5.  This code is smart enough not to slow down Netdata, regardless of the speed of the backend server.
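The unit-conversion caveat of the `as-collected` mode above can be illustrated with a short sketch (illustrative only, not Netdata code): a backend consumer receiving raw kernel CPU tick counters has to derive the utilization percentage itself, which is exactly what the `average` mode does for you.

```python
# Sketch only (not Netdata code): deriving CPU utilization % from raw
# kernel tick counters, as a consumer of `as-collected` metrics must do.
def cpu_percent(busy_prev, total_prev, busy_now, total_now):
    """Utilization percentage between two counter samples."""
    busy = busy_now - busy_prev
    total = total_now - total_prev
    return 100.0 * busy / total if total else 0.0

print(cpu_percent(1000, 4000, 1050, 4200))  # 25.0
```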
 
 ## configuration
 
@@ -96,25 +96,25 @@ of `netdata.conf` from your Netdata):
     send names instead of ids = yes
 ```
 
-- `enabled = yes | no`, enables or disables sending data to a backend
+-   `enabled = yes | no`, enables or disables sending data to a backend
 
-- `type = graphite | opentsdb:telnet | opentsdb:http | opentsdb:https | json | kinesis | mongodb`, selects the backend type
+-   `type = graphite | opentsdb:telnet | opentsdb:http | opentsdb:https | json | kinesis | mongodb`, selects the backend type
 
-- `destination = host1 host2 host3 ...`, accepts **a space separated list** of hostnames,
-   IPs (IPv4 and IPv6) and ports to connect to.
-   Netdata will use the **first available** to send the metrics.
+-   `destination = host1 host2 host3 ...`, accepts **a space separated list** of hostnames,
+     IPs (IPv4 and IPv6) and ports to connect to.
+     Netdata will use the **first available** to send the metrics.
 
-   The format of each item in this list, is: `[PROTOCOL:]IP[:PORT]`.
+     The format of each item in this list, is: `[PROTOCOL:]IP[:PORT]`.
 
-   `PROTOCOL` can be `udp` or `tcp`. `tcp` is the default and only supported by the current backends.
+     `PROTOCOL` can be `udp` or `tcp`. `tcp` is the default and only supported by the current backends.
 
-   `IP` can be `XX.XX.XX.XX` (IPv4), or `[XX:XX...XX:XX]` (IPv6).
-   For IPv6 you can to enclose the IP in `[]` to separate it from the port.
+     `IP` can be `XX.XX.XX.XX` (IPv4), or `[XX:XX...XX:XX]` (IPv6).
+     For IPv6 you can to enclose the IP in `[]` to separate it from the port.
 
-   `PORT` can be a number of a service name. If omitted, the default port for the backend will be used
-   (graphite = 2003, opentsdb = 4242).
+     `PORT` can be a number or a service name. If omitted, the default port for the backend will be used
+     (graphite = 2003, opentsdb = 4242).
 
-   Example IPv4:
+     Example IPv4:
 
 ```
    destination = 10.11.14.2:4242 10.11.14.3:4242 10.11.14.4:4242
@@ -139,71 +139,71 @@ of `netdata.conf` from your Netdata):
    The MongoDB backend doesn't use the `destination` option for its configuration. It uses the `mongodb.conf`
    [configuration file](mongodb/README.md) instead.
 
-- `data source = as collected`, or `data source = average`, or `data source = sum`, selects the kind of
-   data that will be sent to the backend.
+-   `data source = as collected`, or `data source = average`, or `data source = sum`, selects the kind of
+     data that will be sent to the backend.
 
-- `hostname = my-name`, is the hostname to be used for sending data to the backend server. By default
-   this is `[global].hostname`.
+-   `hostname = my-name`, is the hostname to be used for sending data to the backend server. By default
+     this is `[global].hostname`.
 
-- `prefix = Netdata`, is the prefix to add to all metrics.
+-   `prefix = Netdata`, is the prefix to add to all metrics.
 
-- `update every = 10`, is the number of seconds between sending data to the backend. Netdata will add
-   some randomness to this number, to prevent stressing the backend server when many Netdata servers send
-   data to the same backend. This randomness does not affect the quality of the data, only the time they
-   are sent.
+-   `update every = 10`, is the number of seconds between sending data to the backend. Netdata will add
+     some randomness to this number, to prevent stressing the backend server when many Netdata servers send
+     data to the same backend. This randomness does not affect the quality of the data, only the time they
+     are sent.
 
-- `buffer on failures = 10`, is the number of iterations (each iteration is `[backend].update every` seconds)
-   to buffer data, when the backend is not available. If the backend fails to receive the data after that
-   many failures, data loss on the backend is expected (Netdata will also log it).
+-   `buffer on failures = 10`, is the number of iterations (each iteration is `[backend].update every` seconds)
+     to buffer data, when the backend is not available. If the backend fails to receive the data after that
+     many failures, data loss on the backend is expected (Netdata will also log it).
 
-- `timeout ms = 20000`, is the timeout in milliseconds to wait for the backend server to process the data.
-   By default this is `2 * update_every * 1000`.
+-   `timeout ms = 20000`, is the timeout in milliseconds to wait for the backend server to process the data.
+     By default this is `2 * update_every * 1000`.
 
-- `send hosts matching = localhost *` includes one or more space separated patterns, using ` * ` as wildcard
-   (any number of times within each pattern). The patterns are checked against the hostname (the localhost
-   is always checked as `localhost`), allowing us to filter which hosts will be sent to the backend when
-   this Netdata is a central Netdata aggregating multiple hosts. A pattern starting with ` ! ` gives a
-   negative match. So to match all hosts named `*db*` except hosts containing `*slave*`, use
-   `!*slave* *db*` (so, the order is important: the first pattern matching the hostname will be used - positive
-   or negative).
+-   `send hosts matching = localhost *` includes one or more space separated patterns, using `*` as wildcard
+     (any number of times within each pattern). The patterns are checked against the hostname (the localhost
+     is always checked as `localhost`), allowing us to filter which hosts will be sent to the backend when
+     this Netdata is a central Netdata aggregating multiple hosts. A pattern starting with `!` gives a
+     negative match. So to match all hosts named `*db*` except hosts containing `*slave*`, use
+     `!*slave* *db*` (so, the order is important: the first pattern matching the hostname will be used - positive
+     or negative).
 
-- `send charts matching = *` includes one or more space separated patterns, using ` * ` as wildcard (any
-   number of times within each pattern). The patterns are checked against both chart id and chart name.
-   A pattern starting with ` ! ` gives a negative match. So to match all charts named `apps.*`
-   except charts ending in `*reads`, use `!*reads apps.*` (so, the order is important: the first pattern
-   matching the chart id or the chart name will be used - positive or negative).
+-   `send charts matching = *` includes one or more space separated patterns, using `*` as wildcard (any
+     number of times within each pattern). The patterns are checked against both chart id and chart name.
+     A pattern starting with `!` gives a negative match. So to match all charts named `apps.*`
+     except charts ending in `*reads`, use `!*reads apps.*` (so, the order is important: the first pattern
+     matching the chart id or the chart name will be used - positive or negative).
 
-- `send names instead of ids = yes | no` controls the metric names Netdata should send to backend.
-   Netdata supports names and IDs for charts and dimensions. Usually IDs are unique identifiers as read
-   by the system and names are human friendly labels (also unique). Most charts and metrics have the same
-   ID and name, but in several cases they are different: disks with device-mapper, interrupts, QoS classes,
-   statsd synthetic charts, etc.
+-   `send names instead of ids = yes | no` controls the metric names Netdata should send to the backend.
+     Netdata supports names and IDs for charts and dimensions. Usually IDs are unique identifiers as read
+     by the system and names are human friendly labels (also unique). Most charts and metrics have the same
+     ID and name, but in several cases they are different: disks with device-mapper, interrupts, QoS classes,
+     statsd synthetic charts, etc.
 
-- `host tags = list of TAG=VALUE` defines tags that should be appended on all metrics for the given host.
-   These are currently only sent to opentsdb and prometheus. Please use the appropriate format for each
-   time-series db. For example opentsdb likes them like `TAG1=VALUE1 TAG2=VALUE2`, but prometheus like
-   `tag1="value1",tag2="value2"`. Host tags are mirrored with database replication (streaming of metrics
-   between Netdata servers).
+-   `host tags = list of TAG=VALUE` defines tags that should be appended on all metrics for the given host.
+     These are currently only sent to opentsdb and prometheus. Please use the appropriate format for each
+     time-series db. For example, opentsdb likes them like `TAG1=VALUE1 TAG2=VALUE2`, but prometheus likes
+     `tag1="value1",tag2="value2"`. Host tags are mirrored with database replication (streaming of metrics
+     between Netdata servers).
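
The first-match-wins semantics of `send hosts matching` and `send charts matching` described above can be sketched in Python (a simplified illustration of the pattern logic, not Netdata's actual implementation):

```python
from fnmatch import fnmatchcase

def simple_pattern_matches(patterns, name):
    """First matching pattern wins; a leading '!' makes the match negative."""
    for pattern in patterns.split():
        negative = pattern.startswith("!")
        if negative:
            pattern = pattern[1:]
        if fnmatchcase(name, pattern):
            return not negative
    return False  # no pattern matched at all

# Match all hosts named *db* except hosts containing *slave*:
print(simple_pattern_matches("!*slave* *db*", "proddb1"))      # True
print(simple_pattern_matches("!*slave* *db*", "db-slave-02"))  # False
```

Note that swapping the order to `*db* !*slave*` would match `db-slave-02` too, because the positive `*db*` pattern is checked first.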
 
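Putting the options above together, a minimal `[backend]` section of `netdata.conf` might look like this (the `type` and `destination` values, and all tag names, are illustrative examples only):

```
[backend]
    enabled = yes
    type = opentsdb
    destination = localhost:4242
    data source = average
    prefix = netdata
    hostname = my-name
    update every = 10
    buffer on failures = 10
    timeout ms = 20000
    send hosts matching = localhost *
    send charts matching = *
    send names instead of ids = yes
    host tags = env=production team=ops
```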
 ## monitoring operation
 
 Netdata provides 5 charts:
 
-1. **Buffered metrics**, the number of metrics Netdata added to the buffer for dispatching them to the
-   backend server.
+1.  **Buffered metrics**, the number of metrics Netdata added to the buffer for dispatching them to the
+    backend server.
 
-2. **Buffered data size**, the amount of data (in KB) Netdata added the buffer.
+2.  **Buffered data size**, the amount of data (in KB) Netdata added to the buffer.
 
-3. ~~**Backend latency**, the time the backend server needed to process the data Netdata sent.
-   If there was a re-connection involved, this includes the connection time.~~
-   (this chart has been removed, because it only measures the time Netdata needs to give the data
-   to the O/S - since the backend servers do not ack the reception, Netdata does not have any means
-   to measure this properly).
+3.  ~~**Backend latency**, the time the backend server needed to process the data Netdata sent.
+    If there was a re-connection involved, this includes the connection time.~~
+    (this chart has been removed, because it only measures the time Netdata needs to give the data
+    to the O/S - since the backend servers do not ack the reception, Netdata does not have any means
+    to measure this properly).
 
-4. **Backend operations**, the number of operations performed by Netdata.
+4.  **Backend operations**, the number of operations performed by Netdata.
 
-5. **Backend thread CPU usage**, the CPU resources consumed by the Netdata thread, that is responsible
-   for sending the metrics to the backend server.
+5.  **Backend thread CPU usage**, the CPU resources consumed by the Netdata thread that is responsible
+    for sending the metrics to the backend server.
 
 ![image](https://cloud.githubusercontent.com/assets/2662304/20463536/eb196084-af3d-11e6-8ee5-ddbd3b4d8449.png)
 
@@ -213,12 +213,11 @@ The latest version of the alarms configuration for monitoring the backend is [he
 
 Netdata adds 4 alarms:
 
-1. `backend_last_buffering`, number of seconds since the last successful buffering of backend data
-2. `backend_metrics_sent`, percentage of metrics sent to the backend server
-3. `backend_metrics_lost`, number of metrics lost due to repeating failures to contact the backend server
-4. ~~`backend_slow`, the percentage of time between iterations needed by the backend time to process the data sent by Netdata~~ (this was misleading and has been removed).
+1.  `backend_last_buffering`, number of seconds since the last successful buffering of backend data
+2.  `backend_metrics_sent`, percentage of metrics sent to the backend server
+3.  `backend_metrics_lost`, number of metrics lost due to repeating failures to contact the backend server
+4.  ~~`backend_slow`, the percentage of time between iterations needed by the backend to process the data sent by Netdata~~ (this was misleading and has been removed).
 
 ![image](https://cloud.githubusercontent.com/assets/2662304/20463779/a46ed1c2-af43-11e6-91a5-07ca4533cac3.png)
 
-
-[![analytics](https://www.google-analytics.com/collect?v=1&aip=1&t=pageview&_s=1&ds=github&dr=https%3A%2F%2Fgithub.com%2Fnetdata%2Fnetdata&dl=https%3A%2F%2Fmy-netdata.io%2Fgithub%2Fbackends%2FREADME&_u=MAC~&cid=5792dfd7-8dc4-476b-af31-da2fdb9f93d2&tid=UA-64295674-3)]()
+[![analytics](https://www.google-analytics.com/collect?v=1&aip=1&t=pageview&_s=1&ds=github&dr=https%3A%2F%2Fgithub.com%2Fnetdata%2Fnetdata&dl=https%3A%2F%2Fmy-netdata.io%2Fgithub%2Fbackends%2FREADME&_u=MAC~&cid=5792dfd7-8dc4-476b-af31-da2fdb9f93d2&tid=UA-64295674-3)](<>)

+ 13 - 5
backends/WALKTHROUGH.md

@@ -1,6 +1,7 @@
 # Netdata, Prometheus, Grafana stack
 
 ## Intro
+
 In this article I will walk you through the basics of getting Netdata,
 Prometheus and Grafana all working together and monitoring your application
 servers. This article will be using docker on your local workstation. We will be
@@ -11,6 +12,7 @@ without cloud accounts or access to VMs can try this out and for it’s speed of
 deployment.
 
 ## Why Netdata, Prometheus, and Grafana
+
 Some time ago I was introduced to Netdata by a coworker. We were attempting to
 troubleshoot python code which seemed to be bottlenecked. I was instantly
 impressed by the amount of metrics Netdata exposes to you. I quickly added
@@ -40,6 +42,7 @@ together to create a modern monitoring stack. This stack will offer you
 visibility into your application and systems performance.
 
 ## Getting Started - Netdata
+
 To begin let’s create our container which we will install Netdata on. We need
 to run a container, forward the necessary port that Netdata listens on, and
 attach a tty so we can interact with the bash shell on the container. But
@@ -101,6 +104,7 @@ observing is “system”. You can begin to draw links between the charts in Net
 to the prometheus metrics format in this manner.
 
 ## Prometheus
+
 We will be installing prometheus in a container for purpose of demonstration.
 While prometheus does have an official container I would like to walk through
 the install process and setup on a fresh container. This will allow anyone
@@ -189,9 +193,11 @@ scrape_configs:
 ```
 
 Let’s start prometheus once again by running `/opt/prometheus/prometheus`. If we
-now navigate to prometheus at ‘<http://localhost:9090/targets>’ we should see our
+now navigate to prometheus at ‘<http://localhost:9090/targets>’, we should see our
 target being successfully scraped. If we now go back to the Prometheus’s
-homepage and begin to type ‘netdata_’  Prometheus should auto complete metrics
+homepage and begin to type ‘netdata\_’, Prometheus should auto-complete the metrics
 it is now scraping.
 
 ![](https://github.com/ldelossa/NetdataTutorial/raw/master/Screen%20Shot%202017-07-28%20at%205.13.43%20PM.png)
@@ -247,7 +253,7 @@ this point to read [this page](../backends/prometheus/#using-netdata-with-promet
 The key point here is that NetData can export metrics from its internal DB or
 can send metrics “as-collected” by specifying the ‘source=as-collected’ url
 parameter like so.
-http://localhost:19999/api/v1/allmetrics?format=prometheus&help=yes&types=yes&source=as-collected
+<http://localhost:19999/api/v1/allmetrics?format=prometheus&help=yes&types=yes&source=as-collected>
 If you choose to use this method you will need to use Prometheus's set of
 functions here: <https://prometheus.io/docs/querying/functions/> to obtain useful
 metrics as you are now dealing with raw counters from the system. For example
@@ -258,6 +264,7 @@ that. If you find limitations then consider re-writing your queries using the
 raw data and using Prometheus functions to get the desired chart.
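
To see why those functions are needed with as-collected data, here is a small Python sketch of what Prometheus’s `rate()` conceptually computes from raw, ever-growing counter samples (the sample values are made up):

```python
# "As collected" counters only ever increase; the useful signal is their
# per-second slope, which is what rate()/irate() derive for you.
samples = [(0, 1000), (10, 1250), (20, 1600)]  # (timestamp in seconds, counter)

def per_second_rate(samples):
    """Average increase per second across the window, like Prometheus rate()."""
    (t0, v0), (tn, vn) = samples[0], samples[-1]
    return (vn - v0) / (tn - t0)

print(per_second_rate(samples))  # 30.0
```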
 
 ## Grafana
+
 Finally we make it to grafana. This is the easiest part in my opinion. This time
 we will actually run the official grafana docker container as all configuration
 we need to do is done via the GUI. Let’s run the following command:
@@ -266,7 +273,8 @@ we need to do is done via the GUI. Let’s run the following command:
 docker run -i -p 3000:3000 --network=netdata-tutorial grafana/grafana
 ```
 
-This will get grafana running at ‘<http://localhost:3000/>’ Let’s go there and
+This will get grafana running at ‘<http://localhost:3000/>’. Let’s go there and
 login using the credentials Admin:Admin.
 
 The first thing we want to do is click ‘Add data source’. Let’s make it look
@@ -291,4 +299,4 @@ about the monitoring system until Prometheus cannot keep up with your scale.
 Once this happens there are options presented in the Prometheus documentation
 for solving this. Hope this was helpful, happy monitoring.
 
-[![analytics](https://www.google-analytics.com/collect?v=1&aip=1&t=pageview&_s=1&ds=github&dr=https%3A%2F%2Fgithub.com%2Fnetdata%2Fnetdata&dl=https%3A%2F%2Fmy-netdata.io%2Fgithub%2Fbackends%2FWALKTHROUGH&_u=MAC~&cid=5792dfd7-8dc4-476b-af31-da2fdb9f93d2&tid=UA-64295674-3)]()
+[![analytics](https://www.google-analytics.com/collect?v=1&aip=1&t=pageview&_s=1&ds=github&dr=https%3A%2F%2Fgithub.com%2Fnetdata%2Fnetdata&dl=https%3A%2F%2Fmy-netdata.io%2Fgithub%2Fbackends%2FWALKTHROUGH&_u=MAC~&cid=5792dfd7-8dc4-476b-af31-da2fdb9f93d2&tid=UA-64295674-3)](<>)

Some files were not shown because too many files changed in this diff