
Add initial tooling for generating integrations.js file. (#15406)

* Fix link tags in deploy.

* Add initial tooling for generating integrations.js file.

* Skip integrations directory for eslint.

* Add README to explain how to generate integrations.js locally.

* Fix ID/name for top-level categories.

* Deduplicate categories entries.

* Properly render related resources information.

* Warn on and skip bad references for related resources.

* Add CI workflow to rebuild integrations as-needed.

* Add integrations.js to build artifacts.

* Fix actionlint complaints.

* Assorted template fixes.

* Add script to check collector metadata.

* Add default categories for collectors when they have no categories.

* Fix template formatting issues.

* Link related resources properly.

* Skip more sections in rendered output if they are not present in source data.

* Temporarily skip config syntax section.

It needs further work and is not critical at the moment.

* Fix metrics table rendering.

* Hide most overview content if method_description is empty.

* Fix metrics table rendering (again).

* Add detailed description to setup options section.

* Fix detailed description handling for config options.

* Fix config example folding logic.

* Fix multi-instance selection.

* Properly fix multi-instance selection.

* Add titles for labels and metrics charts.

* Include monitored instance name in integration ID.

This is required to disambiguate some ‘virtual’ integrations.
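The disambiguation described above can be sketched in a few lines of Python; the function name and the ID format are hypothetical, not the generator's actual API:

```python
# Hypothetical sketch of why the monitored instance name is part of the
# integration ID: two "virtual" integrations provided by the same plugin
# and module would otherwise collide.
def make_integration_id(plugin_name: str, module_name: str, instance_name: str) -> str:
    # Lowercase and hyphenate so the ID is URL- and anchor-friendly.
    return '-'.join((plugin_name, module_name, instance_name)).lower().replace(' ', '-')

id_a = make_integration_id('cgroups.plugin', 'cgroups', 'Containers')
id_b = make_integration_id('cgroups.plugin', 'cgroups', 'LXC Containers')
assert id_a != id_b  # distinct IDs despite an identical plugin/module pair
```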

* Indicate if there are no alerts defined for an integration.

* Fix multi-instance in template.

* Improve warning handling in script and fix category handling.

* Hide debug messages by default.

* Fix invalid category name in cgroups plugin.

* Completely fix invalid categories in cgroups plugin.

* Warn about and ignore duplicate integration ids.

* Flag integration type in integrations list.

* Add configuration syntax samples.

* Fix issues in gen_integrations.py

* Validate categories.yaml on load.

* Add support for handling deployment information.

* Fix bugs in gen_integrations.py

* Add code to handle exporters.

* Add link to integrations pointing to their source files.

* Fix table justification.

* Add notification handling to script.

Also tidy up a few other things.

* Fix numerous bugs in gen_integrations.py

* Remove trailing space from deploy.yaml command.

* Make availability one column.

* Switch back to multiple columns for availability.

Also switch from +/- to a dot for a positive cell and an empty cell for a
negative one.
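The cell convention described above can be sketched as a tiny helper; the function name is hypothetical:

```python
# Render one availability-table cell: a dot when the platform is
# supported, an empty cell otherwise (replacing the earlier +/- notation).
def availability_cell(supported: bool) -> str:
    return '•' if supported else ''

# e.g. a Linux/FreeBSD/macOS row for a Linux-only collector:
row = '| ' + ' | '.join(availability_cell(s) for s in (True, False, False)) + ' |'
```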

* Render setup description.

* Fix platform info rendering in deploy integrations.

* Fix sourcing of cloud-notifications metadata.

* Fix rendering of empty metrics.

* Fix alerts template.

* Add per-instance templating for templated keys.

* Fix go plugin links.

* Fix overview template.

* Fix handling of exporters.

* Fix loading of cloud notification integrations.

* Always show full collector overview.

* Add static troubleshooting content when appropriate.

* Assorted deploy integration updates.

* Add initial copy of integrations.js.

---------

Co-authored-by: Fotis Voutsas <fotis@netdata.cloud>
Austin S. Hemmelgarn 1 year ago
commit 183bb1db19

+ 1 - 0
.eslintignore

@@ -1,3 +1,4 @@
 **/*{.,-}min.js
+integrations/*
 web/gui/v1/*
 web/gui/v2/*

+ 3 - 2
.github/workflows/build.yml

@@ -519,6 +519,7 @@ jobs:
           mv ../static-archive/* . || exit 1
           ln -s ${{ needs.build-dist.outputs.distfile }} netdata-latest.tar.gz || exit 1
           cp ../packaging/version ./latest-version.txt || exit 1
+          cp ../integrations/integrations.js ./integrations.js || exit 1
           sha256sum -b ./* > sha256sums.txt || exit 1
           cat sha256sums.txt
       - name: Store Artifacts
@@ -753,7 +754,7 @@ jobs:
         with:
           allowUpdates: false
           artifactErrorsFailBuild: true
-          artifacts: 'final-artifacts/sha256sums.txt,final-artifacts/netdata-*.tar.gz,final-artifacts/netdata-*.gz.run'
+          artifacts: 'final-artifacts/sha256sums.txt,final-artifacts/netdata-*.tar.gz,final-artifacts/netdata-*.gz.run,final-artifacts/integrations.js'
           owner: netdata
           repo: netdata-nightlies
           body: Netdata nightly build for ${{ steps.version.outputs.date }}.
@@ -823,7 +824,7 @@ jobs:
         with:
           allowUpdates: false
           artifactErrorsFailBuild: true
-          artifacts: 'final-artifacts/sha256sums.txt,final-artifacts/netdata-*.tar.gz,final-artifacts/netdata-*.gz.run'
+          artifacts: 'final-artifacts/sha256sums.txt,final-artifacts/netdata-*.tar.gz,final-artifacts/netdata-*.gz.run,final-artifacts/integrations.js'
           draft: true
           tag: ${{ needs.normalize-tag.outputs.tag }}
           token: ${{ secrets.NETDATABOT_GITHUB_TOKEN }}

+ 88 - 0
.github/workflows/generate-integrations.yml

@@ -0,0 +1,88 @@
+---
+# CI workflow used to regenerate `integrations/integrations.js` when
+# relevant source files are changed.
+name: Generate Integrations
+on:
+  push:
+    branches:
+      - master
+    paths: # If any of these files change, we need to regenerate integrations.js.
+      - 'collectors/**/metadata.yaml'
+      - 'collectors/**/multi_metadata.yaml'
+      - 'integrations/templates/**'
+      - 'integrations/categories.yaml'
+      - 'integrations/gen_integrations.py'
+      - 'packaging/go.d.version'
+  workflow_dispatch: null
+concurrency: # This keeps multiple instances of the job from running concurrently for the same ref.
+  group: integrations-${{ github.ref }}
+  cancel-in-progress: true
+jobs:
+  generate-integrations:
+    name: Generate Integrations
+    runs-on: ubuntu-latest
+    steps:
+      - name: Checkout Agent
+        id: checkout-agent
+        uses: actions/checkout@v3
+        with:
+          fetch-depth: 1
+          submodules: recursive
+      - name: Get Go Ref
+        id: get-go-ref
+        run: echo "go_ref=$(cat packaging/go.d.version)" >> "${GITHUB_ENV}"
+      - name: Checkout Go
+        id: checkout-go
+        uses: actions/checkout@v3
+        with:
+          fetch-depth: 1
+          path: go.d.plugin
+          repository: netdata/go.d.plugin
+          ref: ${{ env.go_ref }}
+      - name: Prepare Dependencies
+        id: prep-deps
+        run: sudo apt-get install python3-jsonschema python3-referencing python3-jinja2 python3-ruamel.yaml
+      - name: Generate Integrations
+        id: generate
+        run: integrations/gen_integrations.py
+      - name: Clean Up Go Repo
+        id: clean-go
+        run: rm -rf go.d.plugin
+      - name: Create PR
+        id: create-pr
+        uses: peter-evans/create-pull-request@v5
+        with:
+          token: ${{ secrets.NETDATABOT_GITHUB_TOKEN }}
+          commit-message: Regenerate integrations.js
+          branch: integrations-regen
+          title: Regenerate integrations.js
+          body: |
+            Regenerate `integrations/integrations.js` based on the
+            latest code.
+
+            This PR was auto-generated by
+            `.github/workflows/generate-integrations.yml`.
+      - name: Failure Notification
+        uses: rtCamp/action-slack-notify@v2
+        env:
+          SLACK_COLOR: 'danger'
+          SLACK_FOOTER: ''
+          SLACK_ICON_EMOJI: ':github-actions:'
+          SLACK_TITLE: 'Integrations regeneration failed:'
+          SLACK_USERNAME: 'GitHub Actions'
+          SLACK_MESSAGE: |-
+              ${{ github.repository }}: Failed to create PR rebuilding integrations.js
+              Checkout Agent: ${{ steps.checkout-agent.outcome }}
+              Get Go Ref: ${{ steps.get-go-ref.outcome }}
+              Checkout Go: ${{ steps.checkout-go.outcome }}
+              Prepare Dependencies: ${{ steps.prep-deps.outcome }}
+              Generate Integrations: ${{ steps.generate.outcome }}
+              Clean Up Go Repository: ${{ steps.clean-go.outcome }}
+              Create PR: ${{ steps.create-pr.outcome }}
+          SLACK_WEBHOOK: ${{ secrets.SLACK_WEBHOOK_URL }}
+        if: >-
+          ${{
+            failure()
+            && startsWith(github.ref, 'refs/heads/master')
+            && github.repository == 'netdata/netdata'
+          }}

+ 1 - 1
.github/workflows/review.yml

@@ -54,7 +54,7 @@ jobs:
         run: |
           if [ "${{ contains(github.event.pull_request.labels.*.name, 'run-ci/eslint') }}" = "true" ]; then
             echo "run=true" >> "${GITHUB_OUTPUT}"
-          elif git diff --name-only origin/${{ github.base_ref }} HEAD | grep -v "web/gui/v1" | grep -v "web/gui/v2" | grep -Eq '.*\.js|node\.d\.plugin\.in' ; then
+          elif git diff --name-only origin/${{ github.base_ref }} HEAD | grep -v "web/gui/v1" | grep -v "web/gui/v2" | grep -v "integrations/" | grep -Eq '.*\.js' ; then
             echo "run=true" >> "${GITHUB_OUTPUT}"
             echo 'JS files have changed, need to run ESLint.'
           else

+ 6 - 6
collectors/cgroups.plugin/metadata.yaml

@@ -406,7 +406,7 @@ modules:
         link: https://kubernetes.io/
         icon_filename: kubernetes.svg
         categories:
-          - data-collection.containers-vms
+          - data-collection.containers-and-vms
           - data-collection.kubernetes
       keywords:
         - k8s
@@ -977,7 +977,7 @@ modules:
         link: ""
         icon_filename: container.svg
         categories:
-          - data-collection.containers-vms
+          - data-collection.containers-and-vms
       keywords:
         - vms
         - virtualization
@@ -995,7 +995,7 @@ modules:
         link: ""
         icon_filename: lxc.png
         categories:
-          - data-collection.containers-vms
+          - data-collection.containers-and-vms
       keywords:
         - lxc
         - lxd
@@ -1013,7 +1013,7 @@ modules:
         link: ""
         icon_filename: libvirt.png
         categories:
-          - data-collection.containers-vms
+          - data-collection.containers-and-vms
       keywords:
         - libvirt
         - container
@@ -1030,7 +1030,7 @@ modules:
         link: ""
         icon_filename: ovirt.svg
         categories:
-          - data-collection.containers-vms
+          - data-collection.containers-and-vms
       keywords:
         - ovirt
         - container
@@ -1047,7 +1047,7 @@ modules:
         link: ""
         icon_filename: proxmox.png
         categories:
-          - data-collection.containers-vms
+          - data-collection.containers-and-vms
       keywords:
         - proxmox
         - container

+ 26 - 0
integrations/README.md

@@ -0,0 +1,26 @@
+To generate a copy of `integrations.js` locally, you will need:
+
+- Python 3.6 or newer (only tested on Python 3.10 currently, but it should
+  work on any version of Python newer than 3.6).
+- The following third-party Python modules:
+    - `jsonschema`
+    - `referencing`
+    - `jinja2`
+    - `ruamel.yaml`
+- A local checkout of https://github.com/netdata/netdata
+- A local checkout of https://github.com/netdata/go.d.plugin. The script
+  expects this to be checked out in a directory called `go.d.plugin`
+  in the root directory of the agent repo, though a symlink with that
+  name pointing at the actual location of the repo will work as well.
+
+The first two requirements are easily covered in a Linux environment, such
+as a VM or Docker container:
+
+- On Debian or Ubuntu: `apt-get install python3-jsonschema python3-referencing python3-jinja2 python3-ruamel.yaml`
+- On Alpine: `apk add py3-jsonschema py3-referencing py3-jinja2 py3-ruamel.yaml`
+- On Fedora or RHEL (EPEL is required on RHEL systems): `dnf install python3-jsonschema python3-referencing python3-jinja2 python3-ruamel-yaml`
+
+Once the environment is set up, simply run
+`integrations/gen_integrations.py` from the agent repo. Note that the
+script must be run _from this specific location_, as it uses its own
+path to locate the files it needs.
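The environment check the README describes can be sketched in Python before invoking the generator; the module names come from the list above, while the helper name is hypothetical:

```python
# Verify the third-party modules listed above are importable before
# running integrations/gen_integrations.py from the agent repo root.
import importlib

REQUIRED = ('jsonschema', 'referencing', 'jinja2', 'ruamel.yaml')

def missing_modules(names=REQUIRED):
    """Return the subset of `names` that cannot be imported."""
    missing = []
    for name in names:
        try:
            importlib.import_module(name)
        except ImportError:
            missing.append(name)
    return missing

# Report anything that still needs installing before running the generator:
for name in missing_modules():
    print(f'missing dependency: {name}')
```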

+ 3 - 2
integrations/categories.yaml

@@ -1,5 +1,5 @@
 - id: deploy
-  name: deploy
+  name: Deploy
   description: ""
   most_popular: true
   priority: 1
@@ -24,7 +24,7 @@
       priority: -1
       children: []
 - id: data-collection
-  name: data-collection
+  name: Data Collection
   description: ""
   most_popular: true
   priority: 2
@@ -34,6 +34,7 @@
       description: ""
       most_popular: false
       priority: -1
+      collector_default: true
       children: []
     - id: data-collection.ebpf
       name: eBPF

+ 89 - 0
integrations/check_collector_metadata.py

@@ -0,0 +1,89 @@
+#!/usr/bin/env python3
+
+import sys
+
+from pathlib import Path
+
+from jsonschema import ValidationError
+
+from gen_integrations import (CATEGORIES_FILE, SINGLE_PATTERN, MULTI_PATTERN, SINGLE_VALIDATOR, MULTI_VALIDATOR,
+                              load_yaml, get_category_sets)
+
+
+def main():
+    if len(sys.argv) != 2:
+        print(':error:This script takes exactly one argument.')
+        return 2
+
+    check_path = Path(sys.argv[1])
+
+    if not check_path.is_file():
+        print(f':error file={ check_path }:{ check_path } does not appear to be a regular file.')
+        return 1
+
+    if check_path.match(SINGLE_PATTERN):
+        variant = 'single'
+        print(f':debug:{ check_path } appears to be single-module metadata.')
+    elif check_path.match(MULTI_PATTERN):
+        variant = 'multi'
+        print(f':debug:{ check_path } appears to be multi-module metadata.')
+    else:
+        print(f':error file={ check_path }:{ check_path } does not match required file name format.')
+        return 1
+
+    categories = load_yaml(CATEGORIES_FILE)
+
+    if not categories:
+        print(':error:Failed to load categories file.')
+        return 2
+
+    _, valid_categories = get_category_sets(categories)
+
+    data = load_yaml(check_path)
+
+    if not data:
+        print(f':error file={ check_path }:Failed to load data from { check_path }.')
+        return 1
+
+    check_modules = []
+
+    if variant == 'single':
+        try:
+            SINGLE_VALIDATOR.validate(data)
+        except ValidationError as e:
+            print(f':error file={ check_path }:Failed to validate { check_path } against the schema.')
+            raise e
+        else:
+            check_modules.append(data)
+    elif variant == 'multi':
+        try:
+            MULTI_VALIDATOR.validate(data)
+        except ValidationError as e:
+            print(f':error file={ check_path }:Failed to validate { check_path } against the schema.')
+            raise e
+        else:
+            for item in data['modules']:
+                item['meta']['plugin_name'] = data['plugin_name']
+                check_modules.append(item)
+    else:
+        print(':error:Internal error encountered.')
+        return 2
+
+    failed = False
+
+    for idx, module in enumerate(check_modules):
+        invalid_cats = set(module['meta']['monitored_instance']['categories']) - valid_categories
+
+        if invalid_cats:
+            print(f':error file={ check_path }:Invalid categories found in module { idx } in { check_path }: { ", ".join(invalid_cats) }.')
+            failed = True
+
+    if failed:
+        return 1
+    else:
+        print(f'{ check_path } is a valid collector metadata file.')
+        return 0
+
+
+if __name__ == '__main__':
+    sys.exit(main())

+ 34 - 35
integrations/cloud-notifications/metadata.yaml

@@ -1,6 +1,6 @@
 # yamllint disable rule:line-length
 ---
-- id: 'notify-discord'
+- id: 'notify-cloud-discord'
   meta:
     name: 'Discord'
     link: 'https://discord.com/'
@@ -40,9 +40,9 @@
         * **Integration configuration** are the specific notification integration required settings, which vary by notification method. For Discord:
           - Define the type channel you want to send notifications to: **Text channel** or **Forum channel**
           - Webhook URL - URL provided on Discord for the channel you want to receive your notifications.
-          - Thread name - if the Discord channel is a **Forum channel** you will need to provide the thread name as well      
+          - Thread name - if the Discord channel is a **Forum channel** you will need to provide the thread name as well
 
-- id: 'notify-pagerduty'
+- id: 'notify-cloud-pagerduty'
   meta:
     name: 'PagerDuty'
     link: 'https://www.pagerduty.com/'
@@ -62,7 +62,7 @@
       - The Netdata Space needs to be on **Business** plan or higher
       - You need to have a PagerDuty service to receive events using webhooks.
 
-      
+
       ### PagerDuty Server Configuration
       Steps to configure your PagerDuty to receive notifications from Netdata:
 
@@ -84,7 +84,7 @@
         * **Integration configuration** are the specific notification integration required settings, which vary by notification method. For PagerDuty:
           - Integration Key - is a 32 character key provided by PagerDuty to receive events on your service.
 
-- id: 'notify-slack'
+- id: 'notify-cloud-slack'
   meta:
     name: 'Slack'
     link: 'https://slack.com/'
@@ -99,14 +99,14 @@
   setup:
     description: |
       ### Prerequisites
-      
+
       - A Netdata Cloud account
       - Access to the Netdata Space as an **administrator**
       - The Netdata Space needs to be on **Business** plan or higher
       - You need to have a Slack app on your workspace to receive the Webhooks.
-      
+
       ### Slack Server Configuration
-      
+
       Steps to configure your Slack to receive notifications from Netdata:
 
       1. Create an app to receive webhook integrations. Check [Create an app](https://api.slack.com/apps?new_app=1) from Slack documentation for further details
@@ -116,7 +116,7 @@
         - At the bottom of **Webhook URLs for Your Workspace** section you have **Add New Webhook to Workspace**
         - After pressing that specify the channel where you want your notifications to be delivered
         - Once completed copy the Webhook URL that you will need to add to your notification configuration on Netdata UI
-      
+
       For more details please check Slacks's article [Incoming webhooks for Slack](https://slack.com/help/articles/115005265063-Incoming-webhooks-for-Slack).
 
       ### Netdata Configuration Steps
@@ -132,8 +132,8 @@
           - Notification - you specify which notifications you want to be notified using this configuration: All Alerts and unreachable, All Alerts, Critical only
         * **Integration configuration** are the specific notification integration required settings, which vary by notification method. For Slack:
           - Webhook URL - URL provided on Slack for the channel you want to receive your notifications.
-          
-- id: 'notify-opsgenie'
+
+- id: 'notify-cloud-opsgenie'
   meta:
     name: 'Opsgenie'
     link: 'https://www.atlassian.com/software/opsgenie'
@@ -149,14 +149,14 @@
   setup:
     description: |
       ### Prerequisites
-      
+
       - A Netdata Cloud account
       - Access to the Netdata Space as an **administrator**
       - The Netdata Space needs to be on **Business** plan or higher
       - You need to have permissions on Opsgenie to add new integrations.
-      
+
       ### Opsgenie Server Configuration
-      
+
       Steps to configure your Opsgenie to receive notifications from Netdata:
 
       1. Go to integrations tab of your team, click **Add integration**
@@ -177,7 +177,7 @@
         * **Integration configuration** are the specific notification integration required settings, which vary by notification method. For Opsgenie:
           - API Key - a key provided on Opsgenie for the channel you want to receive your notifications.
 
-- id: 'notify-mattermost'
+- id: 'notify-cloud-mattermost'
   meta:
     name: 'Mattermost'
     link: 'https://mattermost.com/'
@@ -192,15 +192,15 @@
   setup:
     description: |
       ### Prerequisites
-      
+
       - A Netdata Cloud account
       - Access to the Netdata Space as an **administrator**
       - The Netdata Space needs to be on **Business** plan or higher
       - You need to have permissions on Mattermost to add new integrations.
       - You need to have a Mattermost app on your workspace to receive the webhooks.
-      
+
       ### Mattermost Server Configuration
-      
+
       Steps to configure your Mattermost to receive notifications from Netdata:
 
       1. In Mattermost, go to Product menu > Integrations > Incoming Webhook
@@ -211,7 +211,7 @@
         `https://your-mattermost-server.com/hooks/xxx-generatedkey-xxx`
 
         - Treat this endpoint as a secret. Anyone who has it will be able to post messages to your Mattermost instance.
-            
+
       For more details please check Mattermost's article [Incoming webhooks for Mattermost](https://developers.mattermost.com/integrate/webhooks/incoming/).
 
       ### Netdata Configuration Steps
@@ -227,8 +227,8 @@
           - Notification - you specify which notifications you want to be notified using this configuration: All Alerts and unreachable, All Alerts, Critical only
         * **Integration configuration** are the specific notification integration required settings, which vary by notification method. For Mattermost:
           - Webhook URL - URL provided on Mattermost for the channel you want to receive your notifications
-          
-- id: 'notify-rocketchat'
+
+- id: 'notify-cloud-rocketchat'
   meta:
     name: 'RocketChat'
     link: 'https://www.rocket.chat/'
@@ -243,15 +243,15 @@
   setup:
     description: |
       ### Prerequisites
-      
+
       - A Netdata Cloud account
       - Access to the Netdata Space as an **administrator**
       - The Netdata Space needs to be on **Business** plan or higher
       - You need to have permissions on Mattermost to add new integrations.
       - You need to have a RocketChat app on your workspace to receive the webhooks.
-      
+
       ### Mattermost Server Configuration
-      
+
       Steps to configure your RocketChat to receive notifications from Netdata:
 
       1. In RocketChat, Navigate to Administration > Workspace > Integrations.
@@ -262,7 +262,7 @@
         `https://your-server.rocket.chat/hooks/YYYYYYYYYYYYYYYYYYYYYYYY/XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX`
         - Treat this endpoint as a secret. Anyone who has it will be able to post messages to your RocketChat instance.
 
-            
+
       For more details please check RocketChat's article Incoming webhooks for [RocketChat](https://docs.rocket.chat/use-rocket.chat/workspace-administration/integrations/).
 
       ### Netdata Configuration Steps
@@ -278,8 +278,8 @@
           - Notification - you specify which notifications you want to be notified using this configuration: All Alerts and unreachable, All Alerts, Critical only
         * **Integration configuration** are the specific notification integration required settings, which vary by notification method. For RocketChat:
           - Webhook URL - URL provided on RocketChat for the channel you want to receive your notifications.
-          
-- id: 'notify-webhook'
+
+- id: 'notify-cloud-webhook'
   meta:
     name: 'Webhook'
     link: 'https://en.wikipedia.org/wiki/Webhook'
@@ -295,7 +295,7 @@
   setup:
     description: |
       ### Prerequisites
-      
+
       - A Netdata Cloud account
       - Access to the Netdata Space as an **administrator**
       - The Netdata Space needs to be on **Pro** plan or higher
@@ -319,7 +319,7 @@
             * Mutual TLS (recommended) - default authentication mechanism used if no other method is selected.
            * Basic - the client sends a request with an Authorization header that includes a base64-encoded string in the format **username:password**. These settings will be required inputs.
             * Bearer - the client sends a request with an Authorization header that includes a **bearer token**. This setting will be a required input.
-          
+
 
         ### Webhook service
 
@@ -356,7 +356,7 @@
         When setting up a webhook integration, the user can specify a set of headers to be included in the HTTP requests sent to the webhook URL.
 
         By default, the following headers will be sent in the HTTP request
-        
+
         |            **Header**            | **Value**                 |
         |:-------------------------------:|-----------------------------|
         |     Content-Type             | application/json        |
@@ -372,11 +372,11 @@
         This is the default authentication mechanism used if no other method is selected.
 
         To take advantage of mutual TLS, you can configure your server to verify Netdata's client certificate. In order to achieve this, the Netdata client sending the notification supports mutual TLS (mTLS) to identify itself with a client certificate that your server can validate.
-        
+
         The steps to perform this validation are as follows:
-        
+
         - Store Netdata CA certificate on a file in your disk. The content of this file should be:
-        
+
         <details>
           <summary>Netdata CA certificate</summary>
 
@@ -513,8 +513,7 @@
           response = {
             'response_token': 'sha256=' + base64.b64encode(sha256_hash_digest).decode('ascii')
           }
-          
+
           # returns properly formatted json response
           return json.dumps(response)
         ```
-

+ 40 - 51
integrations/deploy.yaml

@@ -11,28 +11,30 @@
   most_popular: true
   install_description: 'Run the following command on your node to install and claim Netdata:'
   methods:
-    - method: wget
+    - &ks_wget
+      method: wget
       commands:
         - channel: nightly
           command: >
             wget -O /tmp/netdata-kickstart.sh https://my-netdata.io/kickstart.sh && sh /tmp/netdata-kickstart.sh
-            --nightly-channel --claim-token {% claim_token %} --claim-rooms {% $claim_rooms %} --claim-url {% claim_url %}
+            --nightly-channel{% if $showClaimingOptions %} --claim-token {% claim_token %} --claim-rooms {% $claim_rooms %} --claim-url {% claim_url %}{% /if %}
         - channel: stable
           command: >
             wget -O /tmp/netdata-kickstart.sh https://my-netdata.io/kickstart.sh && sh /tmp/netdata-kickstart.sh
-            --stable-channel --claim-token {% claim_token %} --claim-rooms {% $claim_rooms %} --claim-url {% claim_url %}
-    - method: curl
+            --stable-channel{% if $showClaimingOptions %} --claim-token {% claim_token %} --claim-rooms {% $claim_rooms %} --claim-url {% claim_url %}{% /if %}
+    - &ks_curl
+      method: curl
       commands:
         - channel: nightly
           command: >
             curl https://my-netdata.io/kickstart.sh > /tmp/netdata-kickstart.sh && sh /tmp/netdata-kickstart.sh
-            --nightly-channel --claim-token {% claim_token %} --claim-rooms {% $claim_rooms %} --claim-url {% claim_url %}
+            --nightly-channel{% if $showClaimingOptions %} --claim-token {% claim_token %} --claim-rooms {% $claim_rooms %} --claim-url {% claim_url %}{% /if %}
         - channel: stable
           command: >
             curl https://my-netdata.io/kickstart.sh > /tmp/netdata-kickstart.sh && sh /tmp/netdata-kickstart.sh
-            --stable-channel --claim-token {% claim_token %} --claim-rooms {% $claim_rooms %} --claim-url {% claim_url %}
+            --stable-channel{% if $showClaimingOptions %} --claim-token {% claim_token %} --claim-rooms {% $claim_rooms %} --claim-url {% claim_url %}{% /if %}
   additional_info: &ref_containers >
-    Did you know you can also deploy Netdata on your OS using {% goToCategory categoryId="deploy.docker-kubernetes" %}Kubernetes{% /goToCategory %} or {% goToCategory categoryId="deploy.docker-kubernetes" %}Docker{% /goToCategory %}?
+    Did you know you can also deploy Netdata on your OS using {% goToCategory navigateToSettings=$navigateToSettings categoryId="deploy.docker-kubernetes" %}Kubernetes{% /goToCategory %} or {% goToCategory categoryId="deploy.docker-kubernetes" %}Docker{% /goToCategory %}?
   related_resources: {}
   platform_info:
     group: ''
@@ -196,16 +198,7 @@
     - apple
   install_description: 'Run the following command on your Intel based OSX, macOS servers to install and claim Netdata:'
   methods:
-    - method: curl
-      commands:
-        - channel: nightly
-          command: >
-            curl https://my-netdata.io/kickstart.sh > /tmp/netdata-kickstart.sh && sh /tmp/netdata-kickstart.sh
-            --nightly-channel --claim-token {% claim_token %} --claim-rooms {% $claim_rooms %} --claim-url {% claim_url %}
-        - channel: stable
-          command: >
-            curl https://my-netdata.io/kickstart.sh > /tmp/netdata-kickstart.sh && sh /tmp/netdata-kickstart.sh
-            --stable-channel --claim-token {% claim_token %} --claim-rooms {% $claim_rooms %} --claim-url {% claim_url %}
+    - *ks_curl
   additional_info: *ref_containers
   related_resources: {}
   platform_info:
@@ -230,7 +223,6 @@
 
     > Netdata container requires different privileges and mounts to provide functionality similar to that provided by Netdata installed on the host. More info [here](https://learn.netdata.cloud/docs/installing/docker?_gl=1*f2xcnf*_ga*MTI1MTUwMzU0OS4xNjg2NjM1MDA1*_ga_J69Z2JCTFB*MTY5MDMxMDIyMS40MS4xLjE2OTAzMTAzNjkuNTguMC4w#create-a-new-netdata-agent-container)
     > Netdata will use the hostname from the container in which it is run instead of that of the host system. To change the default hostname check [here](https://learn.netdata.cloud/docs/agent/packaging/docker?_gl=1*i5weve*_ga*MTI1MTUwMzU0OS4xNjg2NjM1MDA1*_ga_J69Z2JCTFB*MTY5MDMxMjM4Ny40Mi4xLjE2OTAzMTIzOTAuNTcuMC4w#change-the-default-hostname)
-   
   methods:
     - method: Docker CLI
       commands:
@@ -252,11 +244,12 @@
             --cap-add SYS_PTRACE \
             --cap-add SYS_ADMIN \
             --security-opt apparmor=unconfined \
-            -e NETDATA_CLAIM_TOKEN= {% claim_token %} \
+            {% if $showClaimingOptions %}
+            -e NETDATA_CLAIM_TOKEN={% claim_token %} \
             -e NETDATA_CLAIM_URL={% claim_url %} \
             -e NETDATA_CLAIM_ROOMS={% $claim_rooms %} \
+            {% /if %}
             netdata/netdata:edge
-
         - channel: stable
           command: |
             docker run -d --name=netdata \
@@ -275,9 +268,11 @@
             --cap-add SYS_PTRACE \
             --cap-add SYS_ADMIN \
             --security-opt apparmor=unconfined \
-            -e NETDATA_CLAIM_TOKEN= {% claim_token %} \
+            {% if $showClaimingOptions %}
+            -e NETDATA_CLAIM_TOKEN={% claim_token %} \
             -e NETDATA_CLAIM_URL={% claim_url %} \
             -e NETDATA_CLAIM_ROOMS={% $claim_rooms %} \
+            {% /if %}
             netdata/netdata:stable
     - method: Docker Compose
       commands:
@@ -306,10 +301,12 @@
                   - /sys:/host/sys:ro
                   - /etc/os-release:/host/etc/os-release:ro
                   - /var/run/docker.sock:/var/run/docker.sock:ro
+                {% if $showClaimingOptions %}
                 environment:
                   - NETDATA_CLAIM_TOKEN={% claim_token %}
                   - NETDATA_CLAIM_URL={% claim_url %}
                   - NETDATA_CLAIM_ROOMS={% $claim_rooms %}
+                {% /if %}
             volumes:
               netdataconfig:
               netdatalib:
@@ -339,10 +336,12 @@
                   - /sys:/host/sys:ro
                   - /etc/os-release:/host/etc/os-release:ro
                   - /var/run/docker.sock:/var/run/docker.sock:ro
+                {% if $showClaimingOptions %}
                 environment:
                   - NETDATA_CLAIM_TOKEN={% claim_token %}
                   - NETDATA_CLAIM_URL={% claim_url %}
                   - NETDATA_CLAIM_ROOMS={% $claim_rooms %}
+                {% /if %}
             volumes:
               netdataconfig:
               netdatalib:
@@ -444,23 +443,23 @@
         - channel: nightly
           command: |
             helm install netdata netdata/netdata \
-            --set image.tag=latest \
+            --set image.tag=latest{% if $showClaimingOptions %} \
             --set parent.claiming.enabled="true" \
             --set parent.claiming.token={% claim_token %} \
             --set parent.claiming.rooms={% $claim_rooms %} \
             --set child.claiming.enabled="true" \
             --set child.claiming.token={% claim_token %} \
-            --set child.claiming.rooms={% $claim_rooms %}
+            --set child.claiming.rooms={% $claim_rooms %}{% /if %}
         - channel: stable
           command: |
             helm install netdata netdata/netdata \
-            --set image.tag=stable \
+            --set image.tag=stable{% if $showClaimingOptions %} \
             --set parent.claiming.enabled="true" \
             --set parent.claiming.token={% claim_token %} \
             --set parent.claiming.rooms={% $claim_rooms %} \
             --set child.claiming.enabled="true" \
             --set child.claiming.token={% claim_token %} \
-            --set child.claiming.rooms={% $claim_rooms %}
+            --set child.claiming.rooms={% $claim_rooms %}{% /if %}
     - method: Existing Cluster
       commands:
         - channel: nightly
@@ -470,6 +469,7 @@
 
             restarter:
               enabled: true
+            {% if $showClaimingOptions %}
 
             parent:
               claiming:
@@ -482,11 +482,16 @@
                 enabled: true
                 token: {% claim_token %}
                 rooms: {% $claim_rooms %}
+            {% /if %}
         - channel: stable
           command: |
             image:
               tag: stable
 
+            restarter:
+              enabled: true
+            {% if $showClaimingOptions %}
+
             parent:
               claiming:
                 enabled: true
@@ -498,6 +503,7 @@
                 enabled: true
                 token: {% claim_token %}
                 rooms: {% $claim_rooms %}
+            {% /if %}
   additional_info: ''
   related_resources: {}
   most_popular: true
@@ -520,26 +526,8 @@
    3. Configure Netdata to collect data remotely from your Windows hosts by adding one job per host to the windows.conf file. See the [configuration section](https://learn.netdata.cloud/docs/data-collection/monitor-anything/System%20Metrics/Windows-machines#configuration) for details.
    4. Enable the [virtual nodes](https://learn.netdata.cloud/docs/data-collection/windows-systems#virtual-nodes) configuration so the Windows hosts are displayed as separate nodes.
   methods:
-    - method: wget
-      commands:
-        - channel: nightly
-          command: >
-            wget -O /tmp/netdata-kickstart.sh https://my-netdata.io/kickstart.sh && sh /tmp/netdata-kickstart.sh
-            --nightly-channel --claim-token {% claim_token %} --claim-rooms {% $claim_rooms %} --claim-url {% claim_url %}
-        - channel: stable
-          command: >
-            wget -O /tmp/netdata-kickstart.sh https://my-netdata.io/kickstart.sh && sh /tmp/netdata-kickstart.sh
-            --stable-channel --claim-token {% claim_token %} --claim-rooms {% $claim_rooms %} --claim-url {% claim_url %}
-    - method: curl
-      commands:
-        - channel: nightly
-          command: >
-            curl https://my-netdata.io/kickstart.sh > /tmp/netdata-kickstart.sh && sh /tmp/netdata-kickstart.sh
-            --nightly-channel --claim-token {% claim_token %} --claim-rooms {% $claim_rooms %} --claim-url {% claim_url %}
-        - channel: stable
-          command: >
-            curl https://my-netdata.io/kickstart.sh > /tmp/netdata-kickstart.sh && sh /tmp/netdata-kickstart.sh
-            --stable-channel --claim-token {% claim_token %} --claim-rooms {% $claim_rooms %} --claim-url {% claim_url %}
+    - *ks_wget
+    - *ks_curl
   additional_info: ''
   related_resources: {}
   most_popular: true
@@ -563,19 +551,20 @@
 
     ```pkg install bash e2fsprogs-libuuid git curl autoconf automake pkgconf pidof liblz4 libuv json-c cmake gmake```
    This step needs root privileges. Answer yes to any prompts during the installation process.
-    
+
     Run the following command on your node to install and claim Netdata:
   methods:
-    - method: wget
+    - *ks_curl
+    - method: fetch
       commands:
         - channel: nightly
           command: >
-            wget -O /tmp/netdata-kickstart.sh https://my-netdata.io/kickstart.sh && sh /tmp/netdata-kickstart.sh
-            --nightly-channel --claim-token {% claim_token %} --claim-rooms {% $claim_rooms %} --claim-url {% claim_url %}
+            fetch -o /tmp/netdata-kickstart.sh https://my-netdata.io/kickstart.sh && sh /tmp/netdata-kickstart.sh
+            --nightly-channel{% if $showClaimingOptions %} --claim-token {% claim_token %} --claim-rooms {% $claim_rooms %} --claim-url {% claim_url %}{% /if %}
         - channel: stable
           command: >
-            wget -O /tmp/netdata-kickstart.sh https://my-netdata.io/kickstart.sh && sh /tmp/netdata-kickstart.sh
-            --stable-channel --claim-token {% claim_token %} --claim-rooms {% $claim_rooms %} --claim-url {% claim_url %}
+            fetch -o /tmp/netdata-kickstart.sh https://my-netdata.io/kickstart.sh && sh /tmp/netdata-kickstart.sh
+            --stable-channel{% if $showClaimingOptions %} --claim-token {% claim_token %} --claim-rooms {% $claim_rooms %} --claim-url {% claim_url %}{% /if %}
   additional_info: |
     Netdata can also be installed via [FreeBSD ports](https://www.freshports.org/net-mgmt/netdata).
   related_resources: {}
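The `{% if $showClaimingOptions %}` … `{% /if %}` markers added throughout this diff wrap the claiming flags in a conditional so they can be shown or hidden as a unit. A minimal Python sketch of how such a conditional could be resolved at render time (the helper name and regex are assumptions for illustration; the actual rendering is done by the consuming frontend, not by this repository's tooling):

```python
import re

# Matches a {% if $showClaimingOptions %} ... {% /if %} block,
# capturing the body between the two markers.
_CLAIM_BLOCK = re.compile(
    r"[ \t]*\{% if \$showClaimingOptions %\}\n(.*?)[ \t]*\{% /if %\}\n",
    re.DOTALL,
)


def render_claiming(text: str, show: bool) -> str:
    """Keep the conditional body when show is True, drop it otherwise."""
    if show:
        # Replace the whole block with just its captured body.
        return _CLAIM_BLOCK.sub(lambda m: m.group(1), text)
    # Remove the block entirely, including the marker lines.
    return _CLAIM_BLOCK.sub("", text)
```

For example, running `render_claiming` over a command containing a claiming block either inlines the `-e NETDATA_CLAIM_*` lines or removes them along with the markers, which mirrors what a reader of the generated `integrations.js` would see with claiming options toggled on or off.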
