
Remove obsolete python modules (#5659)

##### Summary

Fixes: #5647

___

Remove obsolete python modules:
 - cpuidle (moved to proc plugin #4635)
 - cpufreq (moved to proc plugin #4562)
 - mdstat (moved to proc plugin #4768)
 - linux_power_supply (moved to proc plugin #4960)
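
The raw values these modules collected remain available from sysfs, which is what the proc plugin now reads. As a minimal, illustrative sketch (not netdata code), the old `cpufreq` module's fallback path amounted to reading each CPU's `scaling_cur_freq` file, which reports kHz:

```python
# Illustrative sketch of the removed cpufreq module's fallback path:
# read scaling_cur_freq (kHz) for every CPU and report MHz.
from glob import glob


def parse_cur_freq_khz(text):
    """Parse the content of a scaling_cur_freq file (kHz) into MHz."""
    return int(text.strip()) / 1000.0


def read_all_cur_freqs(sys_dir="/sys/devices"):
    """Return {cpu_name: MHz} for every CPU exposing scaling_cur_freq."""
    freqs = {}
    for path in glob(sys_dir + "/system/cpu/cpu*/cpufreq/scaling_cur_freq"):
        cpu = path.split("/")[-3]  # e.g. 'cpu0'
        with open(path) as f:
            freqs[cpu] = parse_cur_freq_khz(f.read())
    return freqs
```

On systems without `CONFIG_CPU_FREQ` the glob simply matches nothing and an empty dict comes back.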

##### Component Name

[/collectors/python.d.plugin/](https://github.com/netdata/netdata/tree/master/collectors/python.d.plugin)

##### Additional Information
Ilya Mashchenko · 6 years ago · commit 3777b91736

+ 5 - 5
README.md

@@ -186,7 +186,7 @@ The key improvements are:
 
 - Improved internal database to support values above 64bit.
 - New data collection plugins: [`openldap`](collectors/python.d.plugin/openldap/), [`tor`](collectors/python.d.plugin/tor/), [`nvidia_smi`](collectors/python.d.plugin/nvidia_smi/).
-- Improved data collection plugins: netdata now supports monitoring network interface aliases, [`smartd_log`](collectors/python.d.plugin/smartd_log/), [`cpufreq`](collectors/python.d.plugin/cpufreq/), [`sensors`](collectors/python.d.plugin/sensors/).
+- Improved data collection plugins: netdata now supports monitoring network interface aliases, [`smartd_log`](collectors/python.d.plugin/smartd_log/), [`cpufreq`](collectors/proc.plugin/README.md#cpu-frequency), [`sensors`](collectors/python.d.plugin/sensors/).
 - Health monitoring improvements: network interface congestion alarm restored, [`alerta.io`](health/notifications/alerta/), `conntrack_max`.
 - `my-netdata`menu has been refactored. 
 - Packaging: `openrc` service definition got a few improvements.
@@ -317,8 +317,8 @@ Its [Plugin API](collectors/plugins.d/) supports all programing languages (anyth
 - **[SoftIRQs](collectors/proc.plugin/)** - total and per core SoftIRQs.
 - **[SoftNet](collectors/proc.plugin/)** - total and per core SoftIRQs related to network activity.
 - **[CPU Throttling](collectors/proc.plugin/)** - collects per core CPU throttling.
-- **[CPU Frequency](collectors/python.d.plugin/couchdb/)** - collects the current CPU frequency.
-- **[CPU Idle](collectors/python.d.plugin/cpuidle/)** - collects the time spent per processor state.
+- **[CPU Frequency](collectors/proc.plugin/)** - collects the current CPU frequency.
+- **[CPU Idle](collectors/proc.plugin/)** - collects the time spent per processor state.
 - **[IdleJitter](collectors/idlejitter.plugin/)** - measures CPU latency.
 - **[Entropy](collectors/proc.plugin/)** - random numbers pool, using in cryptography.
 - **[Interprocess Communication - IPC](collectors/proc.plugin/)** - such as semaphores and semaphores arrays.
@@ -339,7 +339,7 @@ Its [Plugin API](collectors/plugins.d/) supports all programing languages (anyth
 - **[block devices](collectors/proc.plugin/)** - per disk: I/O, operations, backlog, utilization, space, etc.  
 - **[BCACHE](collectors/proc.plugin/)** - detailed performance of SSD caching devices.
 - **[DiskSpace](collectors/proc.plugin/)** - monitors disk space usage.
-- **[mdstat](collectors/python.d.plugin/mdstat/)** - software RAID.
+- **[mdstat](collectors/proc.plugin/)** - software RAID.
 - **[hddtemp](collectors/python.d.plugin/hddtemp/)** - disk temperatures.
 - **[smartd](collectors/python.d.plugin/smartd_log/)** - disk S.M.A.R.T. values.
 - **[device mapper](collectors/proc.plugin/)** - naming disks.
@@ -448,7 +448,7 @@ Its [Plugin API](collectors/plugins.d/) supports all programing languages (anyth
 #### UPSes
 - **[apcupsd](collectors/charts.d.plugin/apcupsd/)** - load, charge, battery voltage, temperature, utility metrics, output metrics
 - **[NUT](collectors/charts.d.plugin/nut/)** - load, charge, battery voltage, temperature, utility metrics, output metrics
-- **[Linux Power Supply](collectors/python.d.plugin/linux_power_supply/)** - collects metrics reported by power supply drivers on Linux.
+- **[Linux Power Supply](collectors/proc.plugin/)** - collects metrics reported by power supply drivers on Linux.
 
 #### Social Sharing Servers
 - **[RetroShare](collectors/python.d.plugin/retroshare/)** - connects to multiple retroshare servers (local or remote) to collect real-time performance metrics.

+ 0 - 4
collectors/python.d.plugin/Makefile.am

@@ -44,8 +44,6 @@ include boinc/Makefile.inc
 include ceph/Makefile.inc
 include chrony/Makefile.inc
 include couchdb/Makefile.inc
-include cpufreq/Makefile.inc
-include cpuidle/Makefile.inc
 include dnsdist/Makefile.inc
 include dns_query_time/Makefile.inc
 include dockerd/Makefile.inc
@@ -62,10 +60,8 @@ include httpcheck/Makefile.inc
 include icecast/Makefile.inc
 include ipfs/Makefile.inc
 include isc_dhcpd/Makefile.inc
-include linux_power_supply/Makefile.inc
 include litespeed/Makefile.inc
 include logind/Makefile.inc
-include mdstat/Makefile.inc
 include megacli/Makefile.inc
 include memcached/Makefile.inc
 include mongodb/Makefile.inc

+ 0 - 13
collectors/python.d.plugin/cpufreq/Makefile.inc

@@ -1,13 +0,0 @@
-# SPDX-License-Identifier: GPL-3.0-or-later
-
-# THIS IS NOT A COMPLETE Makefile
-# IT IS INCLUDED BY ITS PARENT'S Makefile.am
-# IT IS REQUIRED TO REFERENCE ALL FILES RELATIVE TO THE PARENT
-
-# install these files
-dist_python_DATA       += cpufreq/cpufreq.chart.py
-dist_pythonconfig_DATA += cpufreq/cpufreq.conf
-
-# do not install these files, but include them in the distribution
-dist_noinst_DATA       += cpufreq/README.md cpufreq/Makefile.inc
-

+ 0 - 37
collectors/python.d.plugin/cpufreq/README.md

@@ -1,37 +0,0 @@
-# cpufreq
-
-> THIS MODULE IS OBSOLETE.
-> USE THE [PROC PLUGIN](../../proc.plugin) - IT IS MORE EFFICIENT
-
----
-
-This module shows the current CPU frequency as set by the cpufreq kernel
-module.
-
-**Requirement:**
-You need to have `CONFIG_CPU_FREQ` and (optionally) `CONFIG_CPU_FREQ_STAT`
-enabled in your kernel.
-
-This module tries to read from one of two possible locations. On
-initialization, it tries to read the `time_in_state` files provided by
-cpufreq\_stats. If this file does not exist, or doesn't contain valid data, it
-falls back to using the more inaccurate `scaling_cur_freq` file (which only
-represents the **current** CPU frequency, and doesn't account for any state
-changes which happen between updates).
-
-It produces one chart with multiple lines (one line per core).
-
-### configuration
-
-Sample:
-
-```yaml
-sys_dir: "/sys/devices"
-```
-
-If no configuration is given, module will search for cpufreq files in `/sys/devices` directory.
-Directory is also prefixed with `NETDATA_HOST_PREFIX` if specified.
-
----
-
-[![analytics](https://www.google-analytics.com/collect?v=1&aip=1&t=pageview&_s=1&ds=github&dr=https%3A%2F%2Fgithub.com%2Fnetdata%2Fnetdata&dl=https%3A%2F%2Fmy-netdata.io%2Fgithub%2Fcollectors%2Fpython.d.plugin%2Fcpufreq%2FREADME&_u=MAC~&cid=5792dfd7-8dc4-476b-af31-da2fdb9f93d2&tid=UA-64295674-3)]()
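
The `NETDATA_HOST_PREFIX` handling described in the removed README above reduces to a small join; a sketch of how the module built its search path (helper name is illustrative, not a netdata API):

```python
import os


def prefixed_sys_dir(default="/sys/devices"):
    """Prepend NETDATA_HOST_PREFIX (if set) to the default sysfs path,
    trimming a trailing slash so the result has no doubled '/'."""
    prefix = os.getenv('NETDATA_HOST_PREFIX', '')
    if prefix.endswith('/'):
        prefix = prefix[:-1]
    return prefix + default
```

This matters when netdata runs in a container with the host's `/sys` bind-mounted under a prefix such as `/host`.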

+ 0 - 115
collectors/python.d.plugin/cpufreq/cpufreq.chart.py

@@ -1,115 +0,0 @@
-# -*- coding: utf-8 -*-
-# Description: cpufreq netdata python.d module
-# Author: Pawel Krupa (paulfantom)
-# Author: Steven Noonan (tycho)
-# SPDX-License-Identifier: GPL-3.0-or-later
-
-import glob
-import os
-
-from bases.FrameworkServices.SimpleService import SimpleService
-
-# default module values (can be overridden per job in `config`)
-# update_every = 2
-
-ORDER = ['cpufreq']
-
-CHARTS = {
-    'cpufreq': {
-        'options': [None, 'CPU Clock', 'MHz', 'cpufreq', 'cpufreq.cpufreq', 'line'],
-        'lines': [
-            # lines are created dynamically in `check()` method
-        ]
-    }
-}
-
-
-class Service(SimpleService):
-    def __init__(self, configuration=None, name=None):
-        prefix = os.getenv('NETDATA_HOST_PREFIX', "")
-        if prefix.endswith('/'):
-            prefix = prefix[:-1]
-        self.sys_dir = prefix + "/sys/devices"
-        SimpleService.__init__(self, configuration=configuration, name=name)
-        self.order = ORDER
-        self.definitions = CHARTS
-        self.fake_name = 'cpu'
-        self.assignment = {}
-        self.accurate_exists = True
-        self.accurate_last = {}
-
-    def _get_data(self):
-        data = {}
-
-        if self.accurate_exists:
-            accurate_ok = True
-
-            for name, paths in self.assignment.items():
-                last = self.accurate_last[name]
-
-                current = {}
-                deltas = {}
-                ticks_since_last = 0
-
-                for line in open(paths['accurate'], 'r'):
-                    line = list(map(int, line.split()))
-                    current[line[0]] = line[1]
-                    ticks = line[1] - last.get(line[0], 0)
-                    ticks_since_last += ticks
-                    deltas[line[0]] = line[1] - last.get(line[0], 0)
-
-                avg_freq = 0
-                if ticks_since_last != 0:
-                    for frequency, ticks in deltas.items():
-                        avg_freq += frequency * ticks
-                    avg_freq /= ticks_since_last
-
-                data[name] = avg_freq
-                self.accurate_last[name] = current
-                if avg_freq == 0 or ticks_since_last == 0:
-                    # Delta is either too large or nonexistent, fall back to
-                    # less accurate reading. This can happen if we switch
-                    # to/from the 'schedutil' governor, which doesn't report
-                    # stats.
-                    accurate_ok = False
-
-            if accurate_ok:
-                return data
-
-        for name, paths in self.assignment.items():
-            data[name] = open(paths['inaccurate'], 'r').read()
-
-        return data
-
-    def check(self):
-        try:
-            self.sys_dir = str(self.configuration['sys_dir'])
-        except (KeyError, TypeError):
-            self.error("No path specified. Using: '" + self.sys_dir + "'")
-
-        for path in glob.glob(self.sys_dir + '/system/cpu/cpu*/cpufreq/stats/time_in_state'):
-            path_elem = path.split('/')
-            cpu = path_elem[-4]
-            if cpu not in self.assignment:
-                self.assignment[cpu] = {}
-            self.assignment[cpu]['accurate'] = path
-            self.accurate_last[cpu] = {}
-
-        if not self.assignment:
-            self.accurate_exists = False
-
-        for path in glob.glob(self.sys_dir + '/system/cpu/cpu*/cpufreq/scaling_cur_freq'):
-            path_elem = path.split('/')
-            cpu = path_elem[-3]
-            if cpu not in self.assignment:
-                self.assignment[cpu] = {}
-            self.assignment[cpu]['inaccurate'] = path
-
-        if not self.assignment:
-            self.error("couldn't find a method to read cpufreq statistics")
-            return False
-
-        for name in sorted(self.assignment, key=lambda v: int(v[3:])):
-            self.definitions[ORDER[0]]['lines'].append([name, name, 'absolute', 1, 1000])
-
-        return True
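
The core of the removed module's accurate path is a time-weighted average frequency computed from `time_in_state` deltas. Isolated as a pure function (names here are illustrative):

```python
def average_freq(last, current):
    """Time-weighted average frequency between two {frequency: ticks}
    snapshots of a cpufreq time_in_state file.

    Returns 0 when no ticks elapsed, mirroring the removed module's
    condition for falling back to scaling_cur_freq (this can happen
    under the 'schedutil' governor, which doesn't report stats).
    """
    ticks_since_last = 0
    weighted = 0
    for freq, ticks in current.items():
        delta = ticks - last.get(freq, 0)
        ticks_since_last += delta
        weighted += freq * delta
    if ticks_since_last == 0:
        return 0
    return weighted / ticks_since_last
```

Weighting by ticks spent in each state is what made this path more accurate than sampling the instantaneous frequency between updates.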

+ 0 - 41
collectors/python.d.plugin/cpufreq/cpufreq.conf

@@ -1,41 +0,0 @@
-# netdata python.d.plugin configuration for cpufreq
-#
-# This file is in YaML format. Generally the format is:
-#
-# name: value
-#
-# There are 2 sections:
-#  - global variables
-#  - one or more JOBS
-#
-# JOBS allow you to collect values from multiple sources.
-# Each source will have its own set of charts.
-#
-# JOB parameters have to be indented (using spaces only, example below).
-
-# ----------------------------------------------------------------------
-# Global Variables
-# These variables set the defaults for all JOBs, however each JOB
-# may define its own, overriding the defaults.
-
-# update_every sets the default data collection frequency.
-# If unset, the python.d.plugin default is used.
-# update_every: 1
-
-# priority controls the order of charts at the netdata dashboard.
-# Lower numbers move the charts towards the top of the page.
-# If unset, the default for python.d.plugin is used.
-# priority: 60000
-
-# penalty indicates whether to apply penalty to update_every in case of failures.
-# Penalty will increase every 5 failed updates in a row. Maximum penalty is 10 minutes.
-# penalty: yes
-
-# autodetection_retry sets the job re-check interval in seconds.
-# The job is not deleted if check fails.
-# Attempts to start the job are made once every autodetection_retry.
-# This feature is disabled by default.
-# autodetection_retry: 0
-
-# The directory to search for the file scaling_cur_freq
-sys_dir: "/sys/devices"

+ 0 - 13
collectors/python.d.plugin/cpuidle/Makefile.inc

@@ -1,13 +0,0 @@
-# SPDX-License-Identifier: GPL-3.0-or-later
-
-# THIS IS NOT A COMPLETE Makefile
-# IT IS INCLUDED BY ITS PARENT'S Makefile.am
-# IT IS REQUIRED TO REFERENCE ALL FILES RELATIVE TO THE PARENT
-
-# install these files
-dist_python_DATA       += cpuidle/cpuidle.chart.py
-dist_pythonconfig_DATA += cpuidle/cpuidle.conf
-
-# do not install these files, but include them in the distribution
-dist_noinst_DATA       += cpuidle/README.md cpuidle/Makefile.inc
-

+ 0 - 13
collectors/python.d.plugin/cpuidle/README.md

@@ -1,13 +0,0 @@
-# cpuidle
-
-This module monitors the usage of CPU idle states.
-
-**Requirement:**
-Your kernel needs to have `CONFIG_CPU_IDLE` enabled.
-
-It produces one stacked chart per CPU, showing the percentage of time spent in
-each state.
-
----
-
-[![analytics](https://www.google-analytics.com/collect?v=1&aip=1&t=pageview&_s=1&ds=github&dr=https%3A%2F%2Fgithub.com%2Fnetdata%2Fnetdata&dl=https%3A%2F%2Fmy-netdata.io%2Fgithub%2Fcollectors%2Fpython.d.plugin%2Fcpuidle%2FREADME&_u=MAC~&cid=5792dfd7-8dc4-476b-af31-da2fdb9f93d2&tid=UA-64295674-3)]()
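
Each chart line the cpuidle module built maps to one `/sys/.../cpuidle/stateN/name` entry; extracting the CPU and state from such a path is a fixed-offset split, as the module's `check()` does below (illustrative helper, not a netdata API):

```python
def cpu_and_state(path):
    """Return ('cpuN', 'stateM') from a .../cpuN/cpuidle/stateM/name
    sysfs path by position, e.g.
    /sys/devices/system/cpu/cpu0/cpuidle/state3/name -> ('cpu0', 'state3')."""
    parts = path.split('/')
    return parts[-4], parts[-2]
```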

+ 0 - 148
collectors/python.d.plugin/cpuidle/cpuidle.chart.py

@@ -1,148 +0,0 @@
-# -*- coding: utf-8 -*-
-# Description: cpuidle netdata python.d module
-# Author: Steven Noonan (tycho)
-# SPDX-License-Identifier: GPL-3.0-or-later
-
-import ctypes
-import glob
-import os
-import platform
-
-from bases.FrameworkServices.SimpleService import SimpleService
-
-syscall = ctypes.CDLL('libc.so.6').syscall
-
-# default module values (can be overridden per job in `config`)
-# update_every = 2
-
-
-class Service(SimpleService):
-    def __init__(self, configuration=None, name=None):
-        prefix = os.getenv('NETDATA_HOST_PREFIX', "")
-        if prefix.endswith('/'):
-            prefix = prefix[:-1]
-        self.sys_dir = prefix + "/sys/devices/system/cpu"
-        self.schedstat_path = prefix + "/proc/schedstat"
-        SimpleService.__init__(self, configuration=configuration, name=name)
-        self.order = []
-        self.definitions = {}
-        self.fake_name = 'cpu'
-        self.assignment = {}
-        self.last_schedstat = None
-
-    @staticmethod
-    def __gettid():
-        # This is horrendous. We need the *thread id* (not the *process id*),
-        # but there's no Python standard library way of doing that. If you need
-        # to enable this module on a non-x86 machine type, you'll have to find
-        # the Linux syscall number for gettid() and add it to the dictionary
-        # below.
-        syscalls = {
-            'i386':    224,
-            'x86_64':  186,
-        }
-        if platform.machine() not in syscalls:
-            return None
-        tid = syscall(syscalls[platform.machine()])
-        return tid
-
-    def __wake_cpus(self, cpus):
-        # Requires Python 3.3+. This will "tickle" each CPU to force it to
-        # update its idle counters.
-        if hasattr(os, 'sched_setaffinity'):
-            pid = self.__gettid()
-            save_affinity = os.sched_getaffinity(pid)
-            for idx in cpus:
-                os.sched_setaffinity(pid, [idx])
-                os.sched_getaffinity(pid)
-            os.sched_setaffinity(pid, save_affinity)
-
-    def __read_schedstat(self):
-        cpus = {}
-        for line in open(self.schedstat_path, 'r'):
-            if not line.startswith('cpu'):
-                continue
-            line = line.rstrip().split()
-            cpu = line[0]
-            active_time = line[7]
-            cpus[cpu] = int(active_time) // 1000
-        return cpus
-
-    def _get_data(self):
-        results = {}
-
-        # Use the kernel scheduler stats to determine how much time was spent
-        # in C0 (active).
-        schedstat = self.__read_schedstat()
-
-        # Determine if any of the CPUs are idle. If they are, then we need to
-        # tickle them in order to update their C-state residency statistics.
-        if self.last_schedstat is None:
-            needs_tickle = list(self.assignment.keys())
-        else:
-            needs_tickle = []
-            for cpu, active_time in self.last_schedstat.items():
-                delta = schedstat[cpu] - active_time
-                if delta < 1:
-                    needs_tickle.append(cpu)
-
-        if needs_tickle:
-            # This line is critical for the stats to update. If we don't "tickle"
-            # idle CPUs, then the counters for those CPUs stop counting.
-            self.__wake_cpus([int(cpu[3:]) for cpu in needs_tickle])
-
-            # Re-read schedstat now that we've tickled any idlers.
-            schedstat = self.__read_schedstat()
-
-        self.last_schedstat = schedstat
-
-        for cpu, metrics in self.assignment.items():
-            update_time = schedstat[cpu]
-            results[cpu + '_active_time'] = update_time
-
-            for metric, path in metrics.items():
-                residency = int(open(path, 'r').read())
-                results[metric] = residency
-
-        return results
-
-    def check(self):
-        if self.__gettid() is None:
-            self.error('Cannot get thread ID. Stats would be completely broken.')
-            return False
-
-        for path in sorted(glob.glob(self.sys_dir + '/cpu*/cpuidle/state*/name')):
-            # ['', 'sys', 'devices', 'system', 'cpu', 'cpu0', 'cpuidle', 'state3', 'name']
-            path_elem = path.split('/')
-            cpu = path_elem[-4]
-            state = path_elem[-2]
-            statename = open(path, 'rt').read().rstrip()
-
-            orderid = '%s_cpuidle' % (cpu,)
-            if orderid not in self.definitions:
-                self.order.append(orderid)
-                active_name = '%s_active_time' % (cpu,)
-                self.definitions[orderid] = {
-                    'options': [None, 'C-state residency', 'time%', 'cpuidle', 'cpuidle.cpuidle', 'stacked'],
-                    'lines': [
-                        [active_name, 'C0 (active)', 'percentage-of-incremental-row', 1, 1],
-                    ],
-                }
-                self.assignment[cpu] = {}
-
-            defid = '%s_%s_time' % (orderid, state)
-
-            self.definitions[orderid]['lines'].append(
-                [defid, statename, 'percentage-of-incremental-row', 1, 1]
-            )
-
-            self.assignment[cpu][defid] = '/'.join(path_elem[:-1] + ['time'])
-
-        # Sort order by kernel-specified CPU index
-        self.order.sort(key=lambda x: int(x.split('_')[0][3:]))
-
-        if not self.definitions:
-            self.error("couldn't find cstate stats")
-            return False
-
-        return True
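
The schedstat reading in the removed module reduces to a small pure function. This sketch assumes the same field layout the module relied on: on each `cpuN` line of `/proc/schedstat`, the eighth whitespace-separated token is the total time tasks spent running on that CPU, in nanoseconds, stored as microseconds:

```python
def parse_schedstat(text):
    """Return {cpu_name: active_time_us} from /proc/schedstat content,
    keeping only 'cpuN' lines and converting the running-time field
    from nanoseconds to microseconds, as the removed module did."""
    cpus = {}
    for line in text.splitlines():
        if not line.startswith('cpu'):
            continue
        fields = line.split()
        cpus[fields[0]] = int(fields[7]) // 1000
    return cpus
```

Comparing successive snapshots of this value is how the module detected idle CPUs that needed a "tickle" before their C-state residency counters would update.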

+ 0 - 38
collectors/python.d.plugin/cpuidle/cpuidle.conf

@@ -1,38 +0,0 @@
-# netdata python.d.plugin configuration for cpuidle
-#
-# This file is in YaML format. Generally the format is:
-#
-# name: value
-#
-# There are 2 sections:
-#  - global variables
-#  - one or more JOBS
-#
-# JOBS allow you to collect values from multiple sources.
-# Each source will have its own set of charts.
-#
-# JOB parameters have to be indented (using spaces only, example below).
-
-# ----------------------------------------------------------------------
-# Global Variables
-# These variables set the defaults for all JOBs, however each JOB
-# may define its own, overriding the defaults.
-
-# update_every sets the default data collection frequency.
-# If unset, the python.d.plugin default is used.
-# update_every: 1
-
-# priority controls the order of charts at the netdata dashboard.
-# Lower numbers move the charts towards the top of the page.
-# If unset, the default for python.d.plugin is used.
-# priority: 60000
-
-# penalty indicates whether to apply penalty to update_every in case of failures.
-# Penalty will increase every 5 failed updates in a row. Maximum penalty is 10 minutes.
-# penalty: yes
-
-# autodetection_retry sets the job re-check interval in seconds.
-# The job is not deleted if check fails.
-# Attempts to start the job are made once every autodetection_retry.
-# This feature is disabled by default.
-# autodetection_retry: 0

Some files were not shown because too many files changed in this diff