# AdaptecRAID
Plugin: python.d.plugin
Module: adaptec_raid
## Overview
This collector monitors Adaptec RAID hardware storage controller metrics about both physical and logical drives.
It uses the `arcconf` command line utility (from Adaptec) to monitor your RAID controller.
Executed commands:
- `sudo -n arcconf GETCONFIG 1 LD`
- `sudo -n arcconf GETCONFIG 1 PD`
This collector is supported on all platforms.
This collector only supports collecting metrics from a single instance of this integration.
The module uses arcconf, which can only be executed by root. It uses sudo and assumes that it is configured such that the netdata user can execute arcconf as root without a password.
### Default Behavior
#### Auto-Detection
After all the permissions are satisfied, Netdata should be able to execute commands via the `arcconf` command line utility.
#### Limits
The default configuration for this integration does not impose any limits on data collection.
#### Performance Impact
The default configuration for this integration is not expected to impose a significant performance impact on the system.
## Metrics
Metrics grouped by *scope*.
The scope defines the instance that the metric belongs to. An instance is uniquely identified by a set of labels.
### Per AdaptecRAID instance
These metrics refer to the entire monitored application.
This scope has no labels.
Metrics:
| Metric | Dimensions | Unit |
|:------|:----------|:----|
| adaptec_raid.ld_status | a dimension per logical device | bool |
| adaptec_raid.pd_state | a dimension per physical device | bool |
| adaptec_raid.smart_warnings | a dimension per physical device | count |
| adaptec_raid.temperature | a dimension per physical device | celsius |
## Alerts
The following alerts are available:
| Alert name | On metric | Description |
|:------------|:----------|:------------|
| [ adaptec_raid_ld_status ](https://github.com/netdata/netdata/blob/master/health/health.d/adaptec_raid.conf) | adaptec_raid.ld_status | logical device status is failed or degraded |
| [ adaptec_raid_pd_state ](https://github.com/netdata/netdata/blob/master/health/health.d/adaptec_raid.conf) | adaptec_raid.pd_state | physical device state is not online |
## Setup
### Prerequisites
#### Grant permissions for netdata to run arcconf as sudoer
The module uses arcconf, which can only be executed by root. It uses sudo and assumes that it is configured such that the netdata user can execute arcconf as root without a password.
Add the following line to your `/etc/sudoers` file (`which arcconf` shows the full path to the binary):
```bash
netdata ALL=(root) NOPASSWD: /path/to/arcconf
```
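To verify the sudoers entry, you can switch to the `netdata` user and run one of the collector's commands by hand; this is only a quick sanity check, and the path must match the one you placed in `/etc/sudoers`:
```bash
# become the netdata user
sudo -u netdata -s
# should print the logical device configuration without prompting for a password
sudo -n /path/to/arcconf GETCONFIG 1 LD
```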
#### Reset Netdata's systemd unit CapabilityBoundingSet (Linux distributions with systemd)
The default CapabilityBoundingSet doesn't allow using `sudo`, and is quite strict in general. Resetting it is not optimal, but it is the next-best solution, since without it the collector cannot execute `arcconf` via `sudo`.
As the root user, do the following:
```bash
mkdir /etc/systemd/system/netdata.service.d
echo -e '[Service]\nCapabilityBoundingSet=~' | tee /etc/systemd/system/netdata.service.d/unset-capability-bounding-set.conf
systemctl daemon-reload
systemctl restart netdata.service
```
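If you want to confirm that the override is active, one way is to query the unit's effective setting (the exact output format depends on your systemd version):
```bash
# after the override, the bounding set should no longer be the restricted default
systemctl show netdata.service -p CapabilityBoundingSet
```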
### Configuration
#### File
The configuration file name for this integration is `python.d/adaptec_raid.conf`.
You can edit the configuration file using the `edit-config` script from the
Netdata [config directory](https://github.com/netdata/netdata/blob/master/docs/configure/nodes.md#the-netdata-config-directory).
```bash
cd /etc/netdata 2>/dev/null || cd /opt/netdata/etc/netdata
sudo ./edit-config python.d/adaptec_raid.conf
```
#### Options
There are 2 sections:
* Global variables
* One or more JOBS that can define multiple different instances to monitor.
The following options can be defined globally: priority, penalty, autodetection_retry, update_every. They can also be defined per JOB to override the global values (see the sketch after the table below).
Additionally, the following table contains all the options that can be configured inside a JOB definition.
Every configuration JOB starts with a `job_name` value, which will appear on the dashboard unless a `name` parameter is specified.
Config options
| Name | Description | Default | Required |
|:----|:-----------|:-------|:--------:|
| update_every | Sets the default data collection frequency. | 5 | no |
| priority | Controls the order of charts at the netdata dashboard. | 60000 | no |
| autodetection_retry | Sets the job re-check interval in seconds. | 0 | no |
| penalty | Indicates whether to apply penalty to update_every in case of failures. | yes | no |
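As a minimal sketch of how the two sections interact, global options sit at the top level of `python.d/adaptec_raid.conf`, and a job can override them. The job name `my_controller` is purely illustrative:
```yaml
# global options, applied to every job unless overridden
update_every: 5
priority: 60000

# a job definition; the name is arbitrary
my_controller:
  name: my_controller
  update_every: 10  # overrides the global value for this job only
```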
#### Examples
##### Basic
A basic example configuration per job.
```yaml
job_name:
  name: my_job_name
  update_every: 1 # the JOB's data collection frequency
  priority: 60000 # the JOB's order on the dashboard
  penalty: yes # the JOB's penalty
  autodetection_retry: 0 # the JOB's re-check interval in seconds
```
## Troubleshooting
### Debug Mode
To troubleshoot issues with the `adaptec_raid` collector, run the `python.d.plugin` with the debug option enabled. The output
should give you clues as to why the collector isn't working.
- Navigate to the `plugins.d` directory, usually at `/usr/libexec/netdata/plugins.d/`. If that's not the case on
your system, open `netdata.conf` and look for the `plugins` setting under `[directories]`.
```bash
cd /usr/libexec/netdata/plugins.d/
```
- Switch to the `netdata` user.
```bash
sudo -u netdata -s
```
- Run the `python.d.plugin` to debug the collector:
```bash
./python.d.plugin adaptec_raid debug trace
```