Directory: /var/lib/munin/
Current File: /var/lib/munin/datafile
version 2.0.73
beatq.tech;nc-ph-2432.beatq.tech:vmstat.graph_title VMstat
beatq.tech;nc-ph-2432.beatq.tech:vmstat.graph_args --base 1000 -l 0
beatq.tech;nc-ph-2432.beatq.tech:vmstat.graph_vlabel process states
beatq.tech;nc-ph-2432.beatq.tech:vmstat.graph_category processes
beatq.tech;nc-ph-2432.beatq.tech:vmstat.graph_order wait sleep
beatq.tech;nc-ph-2432.beatq.tech:vmstat.sleep.graph_data_size normal
beatq.tech;nc-ph-2432.beatq.tech:vmstat.sleep.max 500000
beatq.tech;nc-ph-2432.beatq.tech:vmstat.sleep.update_rate 300
beatq.tech;nc-ph-2432.beatq.tech:vmstat.sleep.label I/O sleep
beatq.tech;nc-ph-2432.beatq.tech:vmstat.sleep.type GAUGE
beatq.tech;nc-ph-2432.beatq.tech:vmstat.wait.graph_data_size normal
beatq.tech;nc-ph-2432.beatq.tech:vmstat.wait.update_rate 300
beatq.tech;nc-ph-2432.beatq.tech:vmstat.wait.max 500000
beatq.tech;nc-ph-2432.beatq.tech:vmstat.wait.label running
beatq.tech;nc-ph-2432.beatq.tech:vmstat.wait.type GAUGE
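The two vmstat fields above are plain GAUGEs. The values correspond to the "procs" columns of vmstat output (r = runnable, b = blocked in uninterruptible/I/O sleep); the snippet below is an illustrative way to read those columns, not the plugin's actual code:

import subprocess

# Take two vmstat samples one second apart and use the last (current) line; the first two
# columns of the "procs" section are r (runnable) and b (blocked in uninterruptible sleep).
out = subprocess.run(["vmstat", "1", "2"], capture_output=True, text=True).stdout
r, b = out.strip().splitlines()[-1].split()[:2]
print("wait.value", r)    # labelled "running" in the graph above
print("sleep.value", b)   # labelled "I/O sleep" above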
beatq.tech;nc-ph-2432.beatq.tech:if_eth1.graph_order down up
beatq.tech;nc-ph-2432.beatq.tech:if_eth1.graph_title eth1 traffic
beatq.tech;nc-ph-2432.beatq.tech:if_eth1.graph_args --base 1000
beatq.tech;nc-ph-2432.beatq.tech:if_eth1.graph_vlabel bits in (-) / out (+) per ${graph_period}
beatq.tech;nc-ph-2432.beatq.tech:if_eth1.graph_category network
beatq.tech;nc-ph-2432.beatq.tech:if_eth1.graph_info This graph shows the traffic of the eth1 network interface. Please note that the traffic is shown in bits per second, not bytes. IMPORTANT: On 32-bit systems the data source for this plugin uses 32-bit counters, which makes the plugin unreliable and unsuitable for most 100-Mb/s (or faster) interfaces, where traffic is expected to exceed 50 Mb/s over a 5 minute period. This means that this plugin is unsuitable for most 32-bit production environments. To avoid this problem, use the ip_ plugin instead. There should be no problems on 64-bit systems running 64-bit kernels.
beatq.tech;nc-ph-2432.beatq.tech:if_eth1.up.type DERIVE
beatq.tech;nc-ph-2432.beatq.tech:if_eth1.up.info Traffic of the eth1 interface. Maximum speed is 1000 Mb/s.
beatq.tech;nc-ph-2432.beatq.tech:if_eth1.up.label bps
beatq.tech;nc-ph-2432.beatq.tech:if_eth1.up.max 1000000000
beatq.tech;nc-ph-2432.beatq.tech:if_eth1.up.min 0
beatq.tech;nc-ph-2432.beatq.tech:if_eth1.up.negative down
beatq.tech;nc-ph-2432.beatq.tech:if_eth1.up.update_rate 300
beatq.tech;nc-ph-2432.beatq.tech:if_eth1.up.cdef up,8,*
beatq.tech;nc-ph-2432.beatq.tech:if_eth1.up.graph_data_size normal
beatq.tech;nc-ph-2432.beatq.tech:if_eth1.down.type DERIVE
beatq.tech;nc-ph-2432.beatq.tech:if_eth1.down.label received
beatq.tech;nc-ph-2432.beatq.tech:if_eth1.down.update_rate 300
beatq.tech;nc-ph-2432.beatq.tech:if_eth1.down.max 1000000000
beatq.tech;nc-ph-2432.beatq.tech:if_eth1.down.min 0
beatq.tech;nc-ph-2432.beatq.tech:if_eth1.down.cdef down,8,*
beatq.tech;nc-ph-2432.beatq.tech:if_eth1.down.graph_data_size normal
beatq.tech;nc-ph-2432.beatq.tech:if_eth1.down.graph no
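up and down are DERIVE fields fed from the interface byte counters; the cdef "up,8,*" / "down,8,*" is RRDtool RPN that multiplies the per-second byte rate by 8 so the graph shows bits, and down.negative mirrors received traffic below the axis. A minimal worked sketch of that conversion (sample numbers invented):

# Two hypothetical byte-counter readings taken 300 s apart (one munin update interval).
t1, bytes1 = 0, 1_200_000_000
t2, bytes2 = 300, 1_575_000_000

byte_rate = (bytes2 - bytes1) / (t2 - t1)   # what DERIVE stores: a per-second rate
bit_rate = byte_rate * 8                    # what the cdef "up,8,*" applies on top
print(f"{bit_rate:,.0f} bit/s")             # 10,000,000 bit/s, i.e. 10M on the graph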
beatq.tech;nc-ph-2432.beatq.tech:diskstats_utilization.loop0.graph_title Disk utilization for /dev/loop0
beatq.tech;nc-ph-2432.beatq.tech:diskstats_utilization.loop0.graph_args --base 1000 --lower-limit 0 --upper-limit 100 --rigid
beatq.tech;nc-ph-2432.beatq.tech:diskstats_utilization.loop0.graph_vlabel % busy
beatq.tech;nc-ph-2432.beatq.tech:diskstats_utilization.loop0.graph_category disk
beatq.tech;nc-ph-2432.beatq.tech:diskstats_utilization.loop0.graph_scale no
beatq.tech;nc-ph-2432.beatq.tech:diskstats_utilization.loop0.graph_order util
beatq.tech;nc-ph-2432.beatq.tech:diskstats_utilization.loop0.util.min 0
beatq.tech;nc-ph-2432.beatq.tech:diskstats_utilization.loop0.util.update_rate 300
beatq.tech;nc-ph-2432.beatq.tech:diskstats_utilization.loop0.util.graph_data_size normal
beatq.tech;nc-ph-2432.beatq.tech:diskstats_utilization.loop0.util.type GAUGE
beatq.tech;nc-ph-2432.beatq.tech:diskstats_utilization.loop0.util.info Utilization of the device in percent. If the time spent for I/O is close to 1000msec for a given second, the device is nearly 100% saturated.
beatq.tech;nc-ph-2432.beatq.tech:diskstats_utilization.loop0.util.draw LINE1
beatq.tech;nc-ph-2432.beatq.tech:diskstats_utilization.loop0.util.label Utilization
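util is a 0-100 GAUGE derived from how long the device was busy with I/O during the interval (the io_ticks counter in /proc/diskstats, in milliseconds): close to 1000 ms of I/O per elapsed second means saturation. A rough worked example with invented counter values:

# Hypothetical io_ticks (ms spent doing I/O) from two samples 300 s apart.
ticks_prev, ticks_now = 4_180_000, 4_255_000
interval_ms = 300 * 1000

util_pct = 100.0 * (ticks_now - ticks_prev) / interval_ms
print(f"{util_pct:.1f} % busy")   # 25.0, plotted on the rigid 0-100 scale above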
beatq.tech;nc-ph-2432.beatq.tech:threads.graph_title Number of threads
beatq.tech;nc-ph-2432.beatq.tech:threads.graph_vlabel number of threads
beatq.tech;nc-ph-2432.beatq.tech:threads.graph_category processes
beatq.tech;nc-ph-2432.beatq.tech:threads.graph_info This graph shows the number of threads.
beatq.tech;nc-ph-2432.beatq.tech:threads.graph_order threads
beatq.tech;nc-ph-2432.beatq.tech:threads.threads.update_rate 300
beatq.tech;nc-ph-2432.beatq.tech:threads.threads.info The current number of threads.
beatq.tech;nc-ph-2432.beatq.tech:threads.threads.graph_data_size normal
beatq.tech;nc-ph-2432.beatq.tech:threads.threads.label threads
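The threads value is simply the current system-wide thread count. One way to reproduce the number on Linux (not necessarily how the plugin does it) is to sum the Threads: line of every /proc/<pid>/status:

from pathlib import Path

total = 0
for status in Path("/proc").glob("[0-9]*/status"):
    try:
        for line in status.read_text().splitlines():
            if line.startswith("Threads:"):
                total += int(line.split()[1])
                break
    except OSError:
        pass  # process exited between listing and reading
print("threads.value", total)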
beatq.tech;nc-ph-2432.beatq.tech:memory.graph_args --base 1024 -l 0 --upper-limit 66875002880
beatq.tech;nc-ph-2432.beatq.tech:memory.graph_vlabel Bytes
beatq.tech;nc-ph-2432.beatq.tech:memory.graph_title Memory usage
beatq.tech;nc-ph-2432.beatq.tech:memory.graph_category system
beatq.tech;nc-ph-2432.beatq.tech:memory.graph_info This graph shows what the machine uses memory for.
beatq.tech;nc-ph-2432.beatq.tech:memory.graph_order apps page_tables per_cpu swap_cache slab shmem cached buffers free swap vmalloc_used committed mapped active inactive
beatq.tech;nc-ph-2432.beatq.tech:memory.buffers.update_rate 300
beatq.tech;nc-ph-2432.beatq.tech:memory.buffers.colour COLOUR5
beatq.tech;nc-ph-2432.beatq.tech:memory.buffers.graph_data_size normal
beatq.tech;nc-ph-2432.beatq.tech:memory.buffers.label buffers
beatq.tech;nc-ph-2432.beatq.tech:memory.buffers.info Block device (e.g. harddisk) cache. Also where "dirty" blocks are stored until written.
beatq.tech;nc-ph-2432.beatq.tech:memory.buffers.draw STACK
beatq.tech;nc-ph-2432.beatq.tech:memory.swap.update_rate 300
beatq.tech;nc-ph-2432.beatq.tech:memory.swap.graph_data_size normal
beatq.tech;nc-ph-2432.beatq.tech:memory.swap.colour COLOUR7
beatq.tech;nc-ph-2432.beatq.tech:memory.swap.info Swap space used.
beatq.tech;nc-ph-2432.beatq.tech:memory.swap.draw STACK
beatq.tech;nc-ph-2432.beatq.tech:memory.swap.label swap
beatq.tech;nc-ph-2432.beatq.tech:memory.apps.colour COLOUR0
beatq.tech;nc-ph-2432.beatq.tech:memory.apps.graph_data_size normal
beatq.tech;nc-ph-2432.beatq.tech:memory.apps.update_rate 300
beatq.tech;nc-ph-2432.beatq.tech:memory.apps.label apps
beatq.tech;nc-ph-2432.beatq.tech:memory.apps.draw AREA
beatq.tech;nc-ph-2432.beatq.tech:memory.apps.info Memory used by user-space applications.
beatq.tech;nc-ph-2432.beatq.tech:memory.free.colour COLOUR6
beatq.tech;nc-ph-2432.beatq.tech:memory.free.graph_data_size normal
beatq.tech;nc-ph-2432.beatq.tech:memory.free.update_rate 300
beatq.tech;nc-ph-2432.beatq.tech:memory.free.label unused
beatq.tech;nc-ph-2432.beatq.tech:memory.free.draw STACK
beatq.tech;nc-ph-2432.beatq.tech:memory.free.info Wasted memory. Memory that is not used for anything at all.
beatq.tech;nc-ph-2432.beatq.tech:memory.cached.update_rate 300
beatq.tech;nc-ph-2432.beatq.tech:memory.cached.colour COLOUR4
beatq.tech;nc-ph-2432.beatq.tech:memory.cached.graph_data_size normal
beatq.tech;nc-ph-2432.beatq.tech:memory.cached.label cache
beatq.tech;nc-ph-2432.beatq.tech:memory.cached.info Parked file data (file content) cache.
beatq.tech;nc-ph-2432.beatq.tech:memory.cached.draw STACK
beatq.tech;nc-ph-2432.beatq.tech:memory.active.info Memory recently used. Not reclaimed unless absolutely necessary.
beatq.tech;nc-ph-2432.beatq.tech:memory.active.draw LINE2
beatq.tech;nc-ph-2432.beatq.tech:memory.active.label active
beatq.tech;nc-ph-2432.beatq.tech:memory.active.update_rate 300
beatq.tech;nc-ph-2432.beatq.tech:memory.active.graph_data_size normal
beatq.tech;nc-ph-2432.beatq.tech:memory.active.colour COLOUR12
beatq.tech;nc-ph-2432.beatq.tech:memory.inactive.update_rate 300
beatq.tech;nc-ph-2432.beatq.tech:memory.inactive.graph_data_size normal
beatq.tech;nc-ph-2432.beatq.tech:memory.inactive.colour COLOUR15
beatq.tech;nc-ph-2432.beatq.tech:memory.inactive.draw LINE2
beatq.tech;nc-ph-2432.beatq.tech:memory.inactive.info Memory not currently used.
beatq.tech;nc-ph-2432.beatq.tech:memory.inactive.label inactive
beatq.tech;nc-ph-2432.beatq.tech:memory.committed.update_rate 300
beatq.tech;nc-ph-2432.beatq.tech:memory.committed.colour COLOUR10
beatq.tech;nc-ph-2432.beatq.tech:memory.committed.graph_data_size normal
beatq.tech;nc-ph-2432.beatq.tech:memory.committed.label committed
beatq.tech;nc-ph-2432.beatq.tech:memory.committed.draw LINE2
beatq.tech;nc-ph-2432.beatq.tech:memory.committed.info The amount of memory allocated to programs. Overcommitting is normal, but may indicate memory leaks.
beatq.tech;nc-ph-2432.beatq.tech:memory.per_cpu.label per_cpu
beatq.tech;nc-ph-2432.beatq.tech:memory.per_cpu.info Per CPU allocations
beatq.tech;nc-ph-2432.beatq.tech:memory.per_cpu.draw STACK
beatq.tech;nc-ph-2432.beatq.tech:memory.per_cpu.update_rate 300
beatq.tech;nc-ph-2432.beatq.tech:memory.per_cpu.colour COLOUR20
beatq.tech;nc-ph-2432.beatq.tech:memory.per_cpu.graph_data_size normal
beatq.tech;nc-ph-2432.beatq.tech:memory.shmem.info Shared Memory (SYSV SHM segments, tmpfs).
beatq.tech;nc-ph-2432.beatq.tech:memory.shmem.draw STACK
beatq.tech;nc-ph-2432.beatq.tech:memory.shmem.label shmem
beatq.tech;nc-ph-2432.beatq.tech:memory.shmem.update_rate 300
beatq.tech;nc-ph-2432.beatq.tech:memory.shmem.graph_data_size normal
beatq.tech;nc-ph-2432.beatq.tech:memory.shmem.colour COLOUR9
beatq.tech;nc-ph-2432.beatq.tech:memory.swap_cache.graph_data_size normal
beatq.tech;nc-ph-2432.beatq.tech:memory.swap_cache.colour COLOUR2
beatq.tech;nc-ph-2432.beatq.tech:memory.swap_cache.update_rate 300
beatq.tech;nc-ph-2432.beatq.tech:memory.swap_cache.draw STACK
beatq.tech;nc-ph-2432.beatq.tech:memory.swap_cache.info A piece of memory that keeps track of pages that have been fetched from swap but not yet been modified.
beatq.tech;nc-ph-2432.beatq.tech:memory.swap_cache.label swap_cache
beatq.tech;nc-ph-2432.beatq.tech:memory.page_tables.update_rate 300
beatq.tech;nc-ph-2432.beatq.tech:memory.page_tables.colour COLOUR1
beatq.tech;nc-ph-2432.beatq.tech:memory.page_tables.graph_data_size normal
beatq.tech;nc-ph-2432.beatq.tech:memory.page_tables.label page_tables
beatq.tech;nc-ph-2432.beatq.tech:memory.page_tables.info Memory used to map between virtual and physical memory addresses.
beatq.tech;nc-ph-2432.beatq.tech:memory.page_tables.draw STACK
beatq.tech;nc-ph-2432.beatq.tech:memory.mapped.label mapped
beatq.tech;nc-ph-2432.beatq.tech:memory.mapped.info All mmap()ed pages.
beatq.tech;nc-ph-2432.beatq.tech:memory.mapped.draw LINE2
beatq.tech;nc-ph-2432.beatq.tech:memory.mapped.update_rate 300
beatq.tech;nc-ph-2432.beatq.tech:memory.mapped.colour COLOUR11
beatq.tech;nc-ph-2432.beatq.tech:memory.mapped.graph_data_size normal
beatq.tech;nc-ph-2432.beatq.tech:memory.slab.update_rate 300
beatq.tech;nc-ph-2432.beatq.tech:memory.slab.colour COLOUR3
beatq.tech;nc-ph-2432.beatq.tech:memory.slab.graph_data_size normal
beatq.tech;nc-ph-2432.beatq.tech:memory.slab.label slab_cache
beatq.tech;nc-ph-2432.beatq.tech:memory.slab.draw STACK
beatq.tech;nc-ph-2432.beatq.tech:memory.slab.info Memory used by the kernel (major users are caches like inode, dentry, etc).
beatq.tech;nc-ph-2432.beatq.tech:memory.vmalloc_used.label vmalloc_used
beatq.tech;nc-ph-2432.beatq.tech:memory.vmalloc_used.info 'VMalloc' (kernel) memory used
beatq.tech;nc-ph-2432.beatq.tech:memory.vmalloc_used.draw LINE2
beatq.tech;nc-ph-2432.beatq.tech:memory.vmalloc_used.colour COLOUR8
beatq.tech;nc-ph-2432.beatq.tech:memory.vmalloc_used.graph_data_size normal
beatq.tech;nc-ph-2432.beatq.tech:memory.vmalloc_used.update_rate 300
beatq.tech;nc-ph-2432.beatq.tech:load.graph_title Load average
beatq.tech;nc-ph-2432.beatq.tech:load.graph_args --base 1000 -l 0
beatq.tech;nc-ph-2432.beatq.tech:load.graph_vlabel load
beatq.tech;nc-ph-2432.beatq.tech:load.graph_scale no
beatq.tech;nc-ph-2432.beatq.tech:load.graph_category system
beatq.tech;nc-ph-2432.beatq.tech:load.graph_info The load average of the machine describes how many processes are in the run-queue (scheduled to run "immediately").
beatq.tech;nc-ph-2432.beatq.tech:load.graph_order load
beatq.tech;nc-ph-2432.beatq.tech:load.load.update_rate 300
beatq.tech;nc-ph-2432.beatq.tech:load.load.label load
beatq.tech;nc-ph-2432.beatq.tech:load.load.info 5 minute load average
beatq.tech;nc-ph-2432.beatq.tech:load.load.graph_data_size normal
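As the field info says, this is the 5-minute load average, which on Linux is the second field of /proc/loadavg. A minimal sketch, not the plugin's actual code:

# /proc/loadavg starts with the 1-, 5- and 15-minute load averages.
with open("/proc/loadavg") as f:
    one, five, fifteen = f.read().split()[:3]
print("load.value", five)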
beatq.tech;nc-ph-2432.beatq.tech:netstat_established.graph_title Netstat, established only
beatq.tech;nc-ph-2432.beatq.tech:netstat_established.graph_args --lower-limit 0
beatq.tech;nc-ph-2432.beatq.tech:netstat_established.graph_vlabel TCP connections
beatq.tech;nc-ph-2432.beatq.tech:netstat_established.graph_category network
beatq.tech;nc-ph-2432.beatq.tech:netstat_established.graph_period second
beatq.tech;nc-ph-2432.beatq.tech:netstat_established.graph_info This graph shows the TCP activity of all the network interfaces combined.
beatq.tech;nc-ph-2432.beatq.tech:netstat_established.graph_order established
beatq.tech;nc-ph-2432.beatq.tech:netstat_established.established.update_rate 300
beatq.tech;nc-ph-2432.beatq.tech:netstat_established.established.graph_data_size normal
beatq.tech;nc-ph-2432.beatq.tech:netstat_established.established.type GAUGE
beatq.tech;nc-ph-2432.beatq.tech:netstat_established.established.info The number of currently open connections.
beatq.tech;nc-ph-2432.beatq.tech:netstat_established.established.label established
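The established field is a GAUGE: a point-in-time count of TCP connections in the ESTABLISHED state. One way to obtain the same number on Linux (not necessarily what the plugin runs) is via ss:

import subprocess

# -H: no header, -t: TCP, -a: all sockets, -n: numeric; filter to ESTABLISHED only.
out = subprocess.run(["ss", "-Htan", "state", "established"],
                     capture_output=True, text=True).stdout
print("established.value", len(out.splitlines()))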
beatq.tech;nc-ph-2432.beatq.tech:diskstats_iops.md0.graph_title IOs for /dev/md0
beatq.tech;nc-ph-2432.beatq.tech:diskstats_iops.md0.graph_args --base 1000
beatq.tech;nc-ph-2432.beatq.tech:diskstats_iops.md0.graph_vlabel Units read (-) / write (+)
beatq.tech;nc-ph-2432.beatq.tech:diskstats_iops.md0.graph_category disk
beatq.tech;nc-ph-2432.beatq.tech:diskstats_iops.md0.graph_info This graph shows the number of IO operations per second and the average size of these requests. Lots of small requests should result in lower throughput (separate graph) and higher service time (separate graph). Please note that starting with munin-node 2.0 the divisor for K is 1000 instead of 1024, which it was prior to 2.0 beta 3. This is because the base for this graph is 1000, not 1024.
beatq.tech;nc-ph-2432.beatq.tech:diskstats_iops.md0.graph_order rdio wrio avgrdrqsz avgwrrqsz
beatq.tech;nc-ph-2432.beatq.tech:diskstats_iops.md0.wrio.draw LINE1
beatq.tech;nc-ph-2432.beatq.tech:diskstats_iops.md0.wrio.label IO/sec
beatq.tech;nc-ph-2432.beatq.tech:diskstats_iops.md0.wrio.type GAUGE
beatq.tech;nc-ph-2432.beatq.tech:diskstats_iops.md0.wrio.graph_data_size normal
beatq.tech;nc-ph-2432.beatq.tech:diskstats_iops.md0.wrio.update_rate 300
beatq.tech;nc-ph-2432.beatq.tech:diskstats_iops.md0.wrio.negative rdio
beatq.tech;nc-ph-2432.beatq.tech:diskstats_iops.md0.wrio.min 0
beatq.tech;nc-ph-2432.beatq.tech:diskstats_iops.md0.avgrdrqsz.graph_data_size normal
beatq.tech;nc-ph-2432.beatq.tech:diskstats_iops.md0.avgrdrqsz.graph no
beatq.tech;nc-ph-2432.beatq.tech:diskstats_iops.md0.avgrdrqsz.update_rate 300
beatq.tech;nc-ph-2432.beatq.tech:diskstats_iops.md0.avgrdrqsz.min 0
beatq.tech;nc-ph-2432.beatq.tech:diskstats_iops.md0.avgrdrqsz.label dummy
beatq.tech;nc-ph-2432.beatq.tech:diskstats_iops.md0.avgrdrqsz.draw LINE1
beatq.tech;nc-ph-2432.beatq.tech:diskstats_iops.md0.avgrdrqsz.type GAUGE
beatq.tech;nc-ph-2432.beatq.tech:diskstats_iops.md0.avgwrrqsz.type GAUGE
beatq.tech;nc-ph-2432.beatq.tech:diskstats_iops.md0.avgwrrqsz.label Req Size (KB)
beatq.tech;nc-ph-2432.beatq.tech:diskstats_iops.md0.avgwrrqsz.info Average Request Size in kilobytes (1000 based)
beatq.tech;nc-ph-2432.beatq.tech:diskstats_iops.md0.avgwrrqsz.draw LINE1
beatq.tech;nc-ph-2432.beatq.tech:diskstats_iops.md0.avgwrrqsz.min 0
beatq.tech;nc-ph-2432.beatq.tech:diskstats_iops.md0.avgwrrqsz.negative avgrdrqsz
beatq.tech;nc-ph-2432.beatq.tech:diskstats_iops.md0.avgwrrqsz.update_rate 300
beatq.tech;nc-ph-2432.beatq.tech:diskstats_iops.md0.avgwrrqsz.graph_data_size normal
beatq.tech;nc-ph-2432.beatq.tech:diskstats_iops.md0.rdio.graph no
beatq.tech;nc-ph-2432.beatq.tech:diskstats_iops.md0.rdio.graph_data_size normal
beatq.tech;nc-ph-2432.beatq.tech:diskstats_iops.md0.rdio.min 0
beatq.tech;nc-ph-2432.beatq.tech:diskstats_iops.md0.rdio.update_rate 300
beatq.tech;nc-ph-2432.beatq.tech:diskstats_iops.md0.rdio.draw LINE1
beatq.tech;nc-ph-2432.beatq.tech:diskstats_iops.md0.rdio.label dummy
beatq.tech;nc-ph-2432.beatq.tech:diskstats_iops.md0.rdio.type GAUGE
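rdio/wrio are read/write completions per second, while avgrdrqsz/avgwrrqsz are the average request sizes, in 1000-based kilobytes as the info line notes. A worked sketch of the write side with invented /proc/diskstats deltas (sectors are 512 bytes):

# Deltas over one 300 s interval.
writes_completed = 60_000
sectors_written = 3_600_000          # 512-byte sectors

wrio_per_s = writes_completed / 300
avg_wr_kb = sectors_written * 512 / writes_completed / 1000   # KB, 1000-based as noted above
print(f"{wrio_per_s:.0f} IO/s, {avg_wr_kb:.1f} kB per request")   # 200 IO/s, 30.7 kB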
beatq.tech;nc-ph-2432.beatq.tech:acpi.graph_title ACPI Thermal zone temperatures
beatq.tech;nc-ph-2432.beatq.tech:acpi.graph_vlabel Celsius
beatq.tech;nc-ph-2432.beatq.tech:acpi.graph_category sensors
beatq.tech;nc-ph-2432.beatq.tech:acpi.graph_info This graph shows the temperature in different ACPI Thermal zones. If there is only one it will usually be the case temperature.
beatq.tech;nc-ph-2432.beatq.tech:acpi.graph_order thermal_zone0 thermal_zone1 thermal_zone2
beatq.tech;nc-ph-2432.beatq.tech:acpi.thermal_zone0.update_rate 300
beatq.tech;nc-ph-2432.beatq.tech:acpi.thermal_zone0.graph_data_size normal
beatq.tech;nc-ph-2432.beatq.tech:acpi.thermal_zone0.label acpitz
beatq.tech;nc-ph-2432.beatq.tech:acpi.thermal_zone1.update_rate 300
beatq.tech;nc-ph-2432.beatq.tech:acpi.thermal_zone1.graph_data_size normal
beatq.tech;nc-ph-2432.beatq.tech:acpi.thermal_zone1.label pch_cannonlake
beatq.tech;nc-ph-2432.beatq.tech:acpi.thermal_zone2.update_rate 300
beatq.tech;nc-ph-2432.beatq.tech:acpi.thermal_zone2.graph_data_size normal
beatq.tech;nc-ph-2432.beatq.tech:acpi.thermal_zone2.label x86_pkg_temp
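The three thermal zone fields are temperatures in degrees Celsius. On current kernels these are exposed through sysfs in millidegrees; a rough equivalent of what gets reported (the plugin itself may read a different path):

from pathlib import Path

for zone in sorted(Path("/sys/class/thermal").glob("thermal_zone*")):
    temp_c = int((zone / "temp").read_text()) / 1000.0   # sysfs reports millidegrees
    print(f"{zone.name}.value {temp_c:.1f}")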
beatq.tech;nc-ph-2432.beatq.tech:diskstats_utilization.md1.graph_title Disk utilization for /dev/md1
beatq.tech;nc-ph-2432.beatq.tech:diskstats_utilization.md1.graph_args --base 1000 --lower-limit 0 --upper-limit 100 --rigid
beatq.tech;nc-ph-2432.beatq.tech:diskstats_utilization.md1.graph_vlabel % busy
beatq.tech;nc-ph-2432.beatq.tech:diskstats_utilization.md1.graph_category disk
beatq.tech;nc-ph-2432.beatq.tech:diskstats_utilization.md1.graph_scale no
beatq.tech;nc-ph-2432.beatq.tech:diskstats_utilization.md1.graph_order util
beatq.tech;nc-ph-2432.beatq.tech:diskstats_utilization.md1.util.graph_data_size normal
beatq.tech;nc-ph-2432.beatq.tech:diskstats_utilization.md1.util.update_rate 300
beatq.tech;nc-ph-2432.beatq.tech:diskstats_utilization.md1.util.min 0
beatq.tech;nc-ph-2432.beatq.tech:diskstats_utilization.md1.util.label Utilization
beatq.tech;nc-ph-2432.beatq.tech:diskstats_utilization.md1.util.draw LINE1
beatq.tech;nc-ph-2432.beatq.tech:diskstats_utilization.md1.util.info Utilization of the device in percent. If the time spent for I/O is close to 1000msec for a given second, the device is nearly 100% saturated.
beatq.tech;nc-ph-2432.beatq.tech:diskstats_utilization.md1.util.type GAUGE
beatq.tech;nc-ph-2432.beatq.tech:interrupts.graph_title Interrupts and context switches
beatq.tech;nc-ph-2432.beatq.tech:interrupts.graph_args --base 1000 -l 0
beatq.tech;nc-ph-2432.beatq.tech:interrupts.graph_vlabel interrupts & ctx switches / ${graph_period}
beatq.tech;nc-ph-2432.beatq.tech:interrupts.graph_category system
beatq.tech;nc-ph-2432.beatq.tech:interrupts.graph_info This graph shows the number of interrupts and context switches on the system. These are typically high on a busy system.
beatq.tech;nc-ph-2432.beatq.tech:interrupts.graph_order intr ctx
beatq.tech;nc-ph-2432.beatq.tech:interrupts.intr.graph_data_size normal
beatq.tech;nc-ph-2432.beatq.tech:interrupts.intr.update_rate 300
beatq.tech;nc-ph-2432.beatq.tech:interrupts.intr.min 0
beatq.tech;nc-ph-2432.beatq.tech:interrupts.intr.info Interrupts are events that alter the sequence of instructions executed by a processor. They can come from either hardware (exceptions, NMI, IRQ) or software.
beatq.tech;nc-ph-2432.beatq.tech:interrupts.intr.label interrupts
beatq.tech;nc-ph-2432.beatq.tech:interrupts.intr.type DERIVE
beatq.tech;nc-ph-2432.beatq.tech:interrupts.ctx.min 0
beatq.tech;nc-ph-2432.beatq.tech:interrupts.ctx.update_rate 300
beatq.tech;nc-ph-2432.beatq.tech:interrupts.ctx.graph_data_size normal
beatq.tech;nc-ph-2432.beatq.tech:interrupts.ctx.type DERIVE
beatq.tech;nc-ph-2432.beatq.tech:interrupts.ctx.info A context switch occurs when a multitasking operating system suspends the currently running process, and starts executing another.
beatq.tech;nc-ph-2432.beatq.tech:interrupts.ctx.label context switches
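Both totals come from /proc/stat on Linux: the first number of the "intr" line and the "ctxt" line; DERIVE with min 0 turns the ever-growing counters into per-second rates. A minimal sketch of reading them:

with open("/proc/stat") as f:
    for line in f:
        if line.startswith("intr "):
            print("intr.value", line.split()[1])   # total interrupts since boot
        elif line.startswith("ctxt "):
            print("ctx.value", line.split()[1])    # total context switches since boot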
beatq.tech;nc-ph-2432.beatq.tech:mysql_queries.graph_title MySQL queries
beatq.tech;nc-ph-2432.beatq.tech:mysql_queries.graph_args --base 1000
beatq.tech;nc-ph-2432.beatq.tech:mysql_queries.graph_vlabel queries / ${graph_period}
beatq.tech;nc-ph-2432.beatq.tech:mysql_queries.graph_category mysql
beatq.tech;nc-ph-2432.beatq.tech:mysql_queries.graph_info Note that this is an old plugin which is no longer installed by default. It is retained for compatibility with old installations.
beatq.tech;nc-ph-2432.beatq.tech:mysql_queries.graph_total total
beatq.tech;nc-ph-2432.beatq.tech:mysql_queries.graph_order select delete insert update replace cache_hits
beatq.tech;nc-ph-2432.beatq.tech:mysql_queries.delete.graph_data_size normal
beatq.tech;nc-ph-2432.beatq.tech:mysql_queries.delete.min 0
beatq.tech;nc-ph-2432.beatq.tech:mysql_queries.delete.max 500000
beatq.tech;nc-ph-2432.beatq.tech:mysql_queries.delete.update_rate 300
beatq.tech;nc-ph-2432.beatq.tech:mysql_queries.delete.draw STACK
beatq.tech;nc-ph-2432.beatq.tech:mysql_queries.delete.label delete
beatq.tech;nc-ph-2432.beatq.tech:mysql_queries.delete.type DERIVE
beatq.tech;nc-ph-2432.beatq.tech:mysql_queries.replace.type DERIVE
beatq.tech;nc-ph-2432.beatq.tech:mysql_queries.replace.label replace
beatq.tech;nc-ph-2432.beatq.tech:mysql_queries.replace.draw STACK
beatq.tech;nc-ph-2432.beatq.tech:mysql_queries.replace.max 500000
beatq.tech;nc-ph-2432.beatq.tech:mysql_queries.replace.min 0
beatq.tech;nc-ph-2432.beatq.tech:mysql_queries.replace.update_rate 300
beatq.tech;nc-ph-2432.beatq.tech:mysql_queries.replace.graph_data_size normal
beatq.tech;nc-ph-2432.beatq.tech:mysql_queries.insert.type DERIVE
beatq.tech;nc-ph-2432.beatq.tech:mysql_queries.insert.draw STACK
beatq.tech;nc-ph-2432.beatq.tech:mysql_queries.insert.label insert
beatq.tech;nc-ph-2432.beatq.tech:mysql_queries.insert.update_rate 300
beatq.tech;nc-ph-2432.beatq.tech:mysql_queries.insert.max 500000
beatq.tech;nc-ph-2432.beatq.tech:mysql_queries.insert.min 0
beatq.tech;nc-ph-2432.beatq.tech:mysql_queries.insert.graph_data_size normal
beatq.tech;nc-ph-2432.beatq.tech:mysql_queries.select.draw AREA
beatq.tech;nc-ph-2432.beatq.tech:mysql_queries.select.label select
beatq.tech;nc-ph-2432.beatq.tech:mysql_queries.select.type DERIVE
beatq.tech;nc-ph-2432.beatq.tech:mysql_queries.select.graph_data_size normal
beatq.tech;nc-ph-2432.beatq.tech:mysql_queries.select.update_rate 300
beatq.tech;nc-ph-2432.beatq.tech:mysql_queries.select.min 0
beatq.tech;nc-ph-2432.beatq.tech:mysql_queries.select.max 500000
beatq.tech;nc-ph-2432.beatq.tech:mysql_queries.update.graph_data_size normal
beatq.tech;nc-ph-2432.beatq.tech:mysql_queries.update.update_rate 300
beatq.tech;nc-ph-2432.beatq.tech:mysql_queries.update.max 500000
beatq.tech;nc-ph-2432.beatq.tech:mysql_queries.update.min 0
beatq.tech;nc-ph-2432.beatq.tech:mysql_queries.update.draw STACK
beatq.tech;nc-ph-2432.beatq.tech:mysql_queries.update.label update
beatq.tech;nc-ph-2432.beatq.tech:mysql_queries.update.type DERIVE
beatq.tech;nc-ph-2432.beatq.tech:mysql_queries.cache_hits.graph_data_size normal
beatq.tech;nc-ph-2432.beatq.tech:mysql_queries.cache_hits.update_rate 300
beatq.tech;nc-ph-2432.beatq.tech:mysql_queries.cache_hits.max 500000
beatq.tech;nc-ph-2432.beatq.tech:mysql_queries.cache_hits.min 0
beatq.tech;nc-ph-2432.beatq.tech:mysql_queries.cache_hits.draw STACK
beatq.tech;nc-ph-2432.beatq.tech:mysql_queries.cache_hits.label cache_hits
beatq.tech;nc-ph-2432.beatq.tech:mysql_queries.cache_hits.type DERIVE
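Every query field is a DERIVE over a monotonically increasing MySQL status counter (Com_select, Com_insert, and so on); min 0 discards the negative rate produced when the server restarts and the counters reset, and max 500000 discards implausible spikes. A small sketch of that guard with invented counter samples:

# (timestamp, Com_select) samples 300 s apart; the last sample follows a server restart.
samples = [(0, 9_750_000), (300, 9_900_000), (600, 42_000)]

for (t1, c1), (t2, c2) in zip(samples, samples[1:]):
    rate = (c2 - c1) / (t2 - t1)
    if rate < 0 or rate > 500_000:   # the field's min/max bounds
        rate = None                  # recorded as unknown instead of a bogus value
    print(t2, rate)                  # 300 -> 500.0 queries/s, 600 -> None (counter reset)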
beatq.tech;nc-ph-2432.beatq.tech:http_loadtime.graph_title HTTP loadtime of a page
beatq.tech;nc-ph-2432.beatq.tech:http_loadtime.graph_args --base 1000 -l 0
beatq.tech;nc-ph-2432.beatq.tech:http_loadtime.graph_vlabel Load time in seconds
beatq.tech;nc-ph-2432.beatq.tech:http_loadtime.graph_category network
beatq.tech;nc-ph-2432.beatq.tech:http_loadtime.graph_info This graph shows the load time in seconds
beatq.tech;nc-ph-2432.beatq.tech:http_loadtime.graph_order http___localhost_
beatq.tech;nc-ph-2432.beatq.tech:http_loadtime.http___localhost_.label http://localhost/
beatq.tech;nc-ph-2432.beatq.tech:http_loadtime.http___localhost_.graph_data_size normal
beatq.tech;nc-ph-2432.beatq.tech:http_loadtime.http___localhost_.info page load time
beatq.tech;nc-ph-2432.beatq.tech:http_loadtime.http___localhost_.update_rate 300
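The single field records how long a fetch of http://localhost/ takes, in seconds. A rough stand-in for that measurement (not the plugin's implementation):

import time, urllib.request

start = time.time()
urllib.request.urlopen("http://localhost/").read()   # fetch the page body
print("http___localhost_.value", round(time.time() - start, 3))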
beatq.tech;nc-ph-2432.beatq.tech:entropy.graph_title Available entropy
beatq.tech;nc-ph-2432.beatq.tech:entropy.graph_args --base 1000 -l 0
beatq.tech;nc-ph-2432.beatq.tech:entropy.graph_vlabel entropy (bytes)
beatq.tech;nc-ph-2432.beatq.tech:entropy.graph_scale no
beatq.tech;nc-ph-2432.beatq.tech:entropy.graph_category system
beatq.tech;nc-ph-2432.beatq.tech:entropy.graph_info This graph shows the amount of entropy available in the system.
beatq.tech;nc-ph-2432.beatq.tech:entropy.graph_order entropy
beatq.tech;nc-ph-2432.beatq.tech:entropy.entropy.label entropy
beatq.tech;nc-ph-2432.beatq.tech:entropy.entropy.graph_data_size normal
beatq.tech;nc-ph-2432.beatq.tech:entropy.entropy.info The number of random bytes available. This is typically used by cryptographic applications.
beatq.tech;nc-ph-2432.beatq.tech:entropy.entropy.update_rate 300
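The entropy field is the kernel's estimate of available entropy, read straight from procfs. A minimal sketch:

with open("/proc/sys/kernel/random/entropy_avail") as f:
    print("entropy.value", f.read().strip())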
beatq.tech;nc-ph-2432.beatq.tech:diskstats_utilization.md0.graph_title Disk utilization for /dev/md0
beatq.tech;nc-ph-2432.beatq.tech:diskstats_utilization.md0.graph_args --base 1000 --lower-limit 0 --upper-limit 100 --rigid
beatq.tech;nc-ph-2432.beatq.tech:diskstats_utilization.md0.graph_vlabel % busy
beatq.tech;nc-ph-2432.beatq.tech:diskstats_utilization.md0.graph_category disk
beatq.tech;nc-ph-2432.beatq.tech:diskstats_utilization.md0.graph_scale no
beatq.tech;nc-ph-2432.beatq.tech:diskstats_utilization.md0.graph_order util
beatq.tech;nc-ph-2432.beatq.tech:diskstats_utilization.md0.util.label Utilization
beatq.tech;nc-ph-2432.beatq.tech:diskstats_utilization.md0.util.info Utilization of the device in percent. If the time spent for I/O is close to 1000msec for a given second, the device is nearly 100% saturated.
beatq.tech;nc-ph-2432.beatq.tech:diskstats_utilization.md0.util.draw LINE1
beatq.tech;nc-ph-2432.beatq.tech:diskstats_utilization.md0.util.type GAUGE
beatq.tech;nc-ph-2432.beatq.tech:diskstats_utilization.md0.util.graph_data_size normal
beatq.tech;nc-ph-2432.beatq.tech:diskstats_utilization.md0.util.min 0
beatq.tech;nc-ph-2432.beatq.tech:diskstats_utilization.md0.util.update_rate 300
beatq.tech;nc-ph-2432.beatq.tech:open_inodes.graph_title Inode table usage
beatq.tech;nc-ph-2432.beatq.tech:open_inodes.graph_args --base 1000 -l 0
beatq.tech;nc-ph-2432.beatq.tech:open_inodes.graph_vlabel number of open inodes
beatq.tech;nc-ph-2432.beatq.tech:open_inodes.graph_category system
beatq.tech;nc-ph-2432.beatq.tech:open_inodes.graph_info This graph monitors the Linux open inode table.
beatq.tech;nc-ph-2432.beatq.tech:open_inodes.graph_order used max
beatq.tech;nc-ph-2432.beatq.tech:open_inodes.max.update_rate 300
beatq.tech;nc-ph-2432.beatq.tech:open_inodes.max.label inode table size
beatq.tech;nc-ph-2432.beatq.tech:open_inodes.max.graph_data_size normal
beatq.tech;nc-ph-2432.beatq.tech:open_inodes.max.info The size of the system inode table. This is dynamically adjusted by the kernel.
beatq.tech;nc-ph-2432.beatq.tech:open_inodes.used.info The number of currently open inodes.
beatq.tech;nc-ph-2432.beatq.tech:open_inodes.used.graph_data_size normal
beatq.tech;nc-ph-2432.beatq.tech:open_inodes.used.label open inodes
beatq.tech;nc-ph-2432.beatq.tech:open_inodes.used.update_rate 300
beatq.tech;nc-ph-2432.beatq.tech:mysql_slowqueries.graph_title MySQL slow queries
beatq.tech;nc-ph-2432.beatq.tech:mysql_slowqueries.graph_args --base 1000 -l 0
beatq.tech;nc-ph-2432.beatq.tech:mysql_slowqueries.graph_vlabel slow queries / ${graph_period}
beatq.tech;nc-ph-2432.beatq.tech:mysql_slowqueries.graph_category mysql
beatq.tech;nc-ph-2432.beatq.tech:mysql_slowqueries.graph_info Note that this is an old plugin which is no longer installed by default. It is retained for compatibility with old installations.
beatq.tech;nc-ph-2432.beatq.tech:mysql_slowqueries.graph_order queries
beatq.tech;nc-ph-2432.beatq.tech:mysql_slowqueries.queries.label slow queries
beatq.tech;nc-ph-2432.beatq.tech:mysql_slowqueries.queries.type DERIVE
beatq.tech;nc-ph-2432.beatq.tech:mysql_slowqueries.queries.graph_data_size normal
beatq.tech;nc-ph-2432.beatq.tech:mysql_slowqueries.queries.max 500000
beatq.tech;nc-ph-2432.beatq.tech:mysql_slowqueries.queries.min 0
beatq.tech;nc-ph-2432.beatq.tech:mysql_slowqueries.queries.update_rate 300
beatq.tech;nc-ph-2432.beatq.tech:irqstats.graph_title Individual interrupts
beatq.tech;nc-ph-2432.beatq.tech:irqstats.graph_args --base 1000 --logarithmic
beatq.tech;nc-ph-2432.beatq.tech:irqstats.graph_vlabel interrupts / ${graph_period}
beatq.tech;nc-ph-2432.beatq.tech:irqstats.graph_category system
beatq.tech;nc-ph-2432.beatq.tech:irqstats.graph_info Shows the number of different IRQs received by the kernel. High disk or network traffic can cause a high number of interrupts (with good hardware and drivers this will be less so). Sudden high interrupt activity with no associated higher system activity is not normal.
beatq.tech;nc-ph-2432.beatq.tech:irqstats.graph_order i8 i9 i14 i16 i17 i19 i20 i120 i121 i122 i123 i124 i125 i126 i136 i137 i138 i139 i140 i141 i142 i143 i144 iNMI iLOC iSPU iPMI iIWI iRTR iRES iCAL iTLB iTRM iTHR iDFR iMCE iMCP iHYP iHRE iHVS iERR iMIS iPIN iNPI iPIW
beatq.tech;nc-ph-2432.beatq.tech:irqstats.i143.graph_data_size normal
beatq.tech;nc-ph-2432.beatq.tech:irqstats.i143.min 0
beatq.tech;nc-ph-2432.beatq.tech:irqstats.i143.update_rate 300
beatq.tech;nc-ph-2432.beatq.tech:irqstats.i143.info Interrupt 143, for device(s): 526343-edge eth1-TxRx-6
beatq.tech;nc-ph-2432.beatq.tech:irqstats.i143.label 526343-edge eth1-TxRx-6
beatq.tech;nc-ph-2432.beatq.tech:irqstats.i143.type DERIVE
beatq.tech;nc-ph-2432.beatq.tech:irqstats.i141.graph_data_size normal
beatq.tech;nc-ph-2432.beatq.tech:irqstats.i141.update_rate 300
beatq.tech;nc-ph-2432.beatq.tech:irqstats.i141.min 0
beatq.tech;nc-ph-2432.beatq.tech:irqstats.i141.label 526341-edge eth1-TxRx-4
beatq.tech;nc-ph-2432.beatq.tech:irqstats.i141.info Interrupt 141, for device(s): 526341-edge eth1-TxRx-4
beatq.tech;nc-ph-2432.beatq.tech:irqstats.i141.type DERIVE
beatq.tech;nc-ph-2432.beatq.tech:irqstats.i140.update_rate 300
beatq.tech;nc-ph-2432.beatq.tech:irqstats.i140.min 0
beatq.tech;nc-ph-2432.beatq.tech:irqstats.i140.graph_data_size normal
beatq.tech;nc-ph-2432.beatq.tech:irqstats.i140.type DERIVE
beatq.tech;nc-ph-2432.beatq.tech:irqstats.i140.label 526340-edge eth1-TxRx-3
beatq.tech;nc-ph-2432.beatq.tech:irqstats.i140.info Interrupt 140, for device(s): 526340-edge eth1-TxRx-3
beatq.tech;nc-ph-2432.beatq.tech:irqstats.i124.type DERIVE
beatq.tech;nc-ph-2432.beatq.tech:irqstats.i124.info Interrupt 124, for device(s): 327680-edge xhci_hcd
beatq.tech;nc-ph-2432.beatq.tech:irqstats.i124.label 327680-edge xhci_hcd
beatq.tech;nc-ph-2432.beatq.tech:irqstats.i124.update_rate 300
beatq.tech;nc-ph-2432.beatq.tech:irqstats.i124.min 0
beatq.tech;nc-ph-2432.beatq.tech:irqstats.i124.graph_data_size normal
beatq.tech;nc-ph-2432.beatq.tech:irqstats.iSPU.graph_data_size normal
beatq.tech;nc-ph-2432.beatq.tech:irqstats.iSPU.update_rate 300
beatq.tech;nc-ph-2432.beatq.tech:irqstats.iSPU.min 0
beatq.tech;nc-ph-2432.beatq.tech:irqstats.iSPU.label Spurious interrupts
beatq.tech;nc-ph-2432.beatq.tech:irqstats.iSPU.info Interrupt SPU, for device(s): Spurious interrupts
beatq.tech;nc-ph-2432.beatq.tech:irqstats.iSPU.type DERIVE
beatq.tech;nc-ph-2432.beatq.tech:irqstats.iMCE.type DERIVE
beatq.tech;nc-ph-2432.beatq.tech:irqstats.iMCE.info Interrupt MCE, for device(s): Machine check exceptions
beatq.tech;nc-ph-2432.beatq.tech:irqstats.iMCE.label Machine check exceptions
beatq.tech;nc-ph-2432.beatq.tech:irqstats.iMCE.update_rate 300
beatq.tech;nc-ph-2432.beatq.tech:irqstats.iMCE.min 0
beatq.tech;nc-ph-2432.beatq.tech:irqstats.iMCE.graph_data_size normal
beatq.tech;nc-ph-2432.beatq.tech:irqstats.i20.graph_data_size normal
beatq.tech;nc-ph-2432.beatq.tech:irqstats.i20.update_rate 300
beatq.tech;nc-ph-2432.beatq.tech:irqstats.i20.min 0
beatq.tech;nc-ph-2432.beatq.tech:irqstats.i20.info Interrupt 20, for device(s): 20-fasteoi idma64.2
beatq.tech;nc-ph-2432.beatq.tech:irqstats.i20.label 20-fasteoi idma64.2
beatq.tech;nc-ph-2432.beatq.tech:irqstats.i20.type DERIVE
beatq.tech;nc-ph-2432.beatq.tech:irqstats.i120.type DERIVE
beatq.tech;nc-ph-2432.beatq.tech:irqstats.i120.label 0-edge dmar0
beatq.tech;nc-ph-2432.beatq.tech:irqstats.i120.info Interrupt 120, for device(s): 0-edge dmar0
beatq.tech;nc-ph-2432.beatq.tech:irqstats.i120.update_rate 300
beatq.tech;nc-ph-2432.beatq.tech:irqstats.i120.min 0
beatq.tech;nc-ph-2432.beatq.tech:irqstats.i120.graph_data_size normal
beatq.tech;nc-ph-2432.beatq.tech:irqstats.iHVS.type DERIVE
beatq.tech;nc-ph-2432.beatq.tech:irqstats.iHVS.info Interrupt HVS, for device(s): Hyper-V stimer0 interrupts
beatq.tech;nc-ph-2432.beatq.tech:irqstats.iHVS.label Hyper-V stimer0 interrupts
beatq.tech;nc-ph-2432.beatq.tech:irqstats.iHVS.min 0
beatq.tech;nc-ph-2432.beatq.tech:irqstats.iHVS.update_rate 300
beatq.tech;nc-ph-2432.beatq.tech:irqstats.iHVS.graph_data_size normal
beatq.tech;nc-ph-2432.beatq.tech:irqstats.i121.info Interrupt 121, for device(s): 16384-edge PCIe PME
beatq.tech;nc-ph-2432.beatq.tech:irqstats.i121.label 16384-edge PCIe PME
beatq.tech;nc-ph-2432.beatq.tech:irqstats.i121.type DERIVE
beatq.tech;nc-ph-2432.beatq.tech:irqstats.i121.graph_data_size normal
beatq.tech;nc-ph-2432.beatq.tech:irqstats.i121.update_rate 300
beatq.tech;nc-ph-2432.beatq.tech:irqstats.i121.min 0
beatq.tech;nc-ph-2432.beatq.tech:irqstats.iMCP.graph_data_size normal
beatq.tech;nc-ph-2432.beatq.tech:irqstats.iMCP.update_rate 300
beatq.tech;nc-ph-2432.beatq.tech:irqstats.iMCP.min 0
beatq.tech;nc-ph-2432.beatq.tech:irqstats.iMCP.label Machine check polls
beatq.tech;nc-ph-2432.beatq.tech:irqstats.iMCP.info Interrupt MCP, for device(s): Machine check polls
beatq.tech;nc-ph-2432.beatq.tech:irqstats.iMCP.type DERIVE
beatq.tech;nc-ph-2432.beatq.tech:irqstats.i125.min 0
beatq.tech;nc-ph-2432.beatq.tech:irqstats.i125.update_rate 300
beatq.tech;nc-ph-2432.beatq.tech:irqstats.i125.graph_data_size normal
beatq.tech;nc-ph-2432.beatq.tech:irqstats.i125.type DERIVE
beatq.tech;nc-ph-2432.beatq.tech:irqstats.i125.info Interrupt 125, for device(s): 376832-edge ahci[0000:00:17.0]
beatq.tech;nc-ph-2432.beatq.tech:irqstats.i125.label 376832-edge ahci[0000:00:17.0]
beatq.tech;nc-ph-2432.beatq.tech:irqstats.iLOC.label Local timer interrupts
beatq.tech;nc-ph-2432.beatq.tech:irqstats.iLOC.info Interrupt LOC, for device(s): Local timer interrupts
beatq.tech;nc-ph-2432.beatq.tech:irqstats.iLOC.type DERIVE
beatq.tech;nc-ph-2432.beatq.tech:irqstats.iLOC.graph_data_size normal
beatq.tech;nc-ph-2432.beatq.tech:irqstats.iLOC.min 0
beatq.tech;nc-ph-2432.beatq.tech:irqstats.iLOC.update_rate 300
beatq.tech;nc-ph-2432.beatq.tech:irqstats.i9.graph_data_size normal
beatq.tech;nc-ph-2432.beatq.tech:irqstats.i9.min 0
beatq.tech;nc-ph-2432.beatq.tech:irqstats.i9.update_rate 300
beatq.tech;nc-ph-2432.beatq.tech:irqstats.i9.label 9-fasteoi acpi
beatq.tech;nc-ph-2432.beatq.tech:irqstats.i9.info Interrupt 9, for device(s): 9-fasteoi acpi
beatq.tech;nc-ph-2432.beatq.tech:irqstats.i9.type DERIVE
beatq.tech;nc-ph-2432.beatq.tech:irqstats.i14.min 0
beatq.tech;nc-ph-2432.beatq.tech:irqstats.i14.update_rate 300
beatq.tech;nc-ph-2432.beatq.tech:irqstats.i14.graph_data_size normal
beatq.tech;nc-ph-2432.beatq.tech:irqstats.i14.type DERIVE
beatq.tech;nc-ph-2432.beatq.tech:irqstats.i14.label 14-fasteoi INT3450:00
beatq.tech;nc-ph-2432.beatq.tech:irqstats.i14.info Interrupt 14, for device(s): 14-fasteoi INT3450:00
beatq.tech;nc-ph-2432.beatq.tech:irqstats.i123.type DERIVE
beatq.tech;nc-ph-2432.beatq.tech:irqstats.i123.label 450560-edge PCIe PME
beatq.tech;nc-ph-2432.beatq.tech:irqstats.i123.info Interrupt 123, for device(s): 450560-edge PCIe PME
beatq.tech;nc-ph-2432.beatq.tech:irqstats.i123.update_rate 300
beatq.tech;nc-ph-2432.beatq.tech:irqstats.i123.min 0
beatq.tech;nc-ph-2432.beatq.tech:irqstats.i123.graph_data_size normal
beatq.tech;nc-ph-2432.beatq.tech:irqstats.iTHR.min 0
beatq.tech;nc-ph-2432.beatq.tech:irqstats.iTHR.update_rate 300
beatq.tech;nc-ph-2432.beatq.tech:irqstats.iTHR.graph_data_size normal
beatq.tech;nc-ph-2432.beatq.tech:irqstats.iTHR.type DERIVE
beatq.tech;nc-ph-2432.beatq.tech:irqstats.iTHR.info Interrupt THR, for device(s): Threshold APIC interrupts
beatq.tech;nc-ph-2432.beatq.tech:irqstats.iTHR.label Threshold APIC interrupts
beatq.tech;nc-ph-2432.beatq.tech:irqstats.iPMI.info Interrupt PMI, for device(s): Performance monitoring interrupts
beatq.tech;nc-ph-2432.beatq.tech:irqstats.iPMI.label Performance monitoring interrupts
beatq.tech;nc-ph-2432.beatq.tech:irqstats.iPMI.type DERIVE
beatq.tech;nc-ph-2432.beatq.tech:irqstats.iPMI.graph_data_size normal
beatq.tech;nc-ph-2432.beatq.tech:irqstats.iPMI.min 0
beatq.tech;nc-ph-2432.beatq.tech:irqstats.iPMI.update_rate 300
beatq.tech;nc-ph-2432.beatq.tech:irqstats.i144.label 526344-edge eth1-TxRx-7
beatq.tech;nc-ph-2432.beatq.tech:irqstats.i144.info Interrupt 144, for device(s): 526344-edge eth1-TxRx-7
beatq.tech;nc-ph-2432.beatq.tech:irqstats.i144.type DERIVE
beatq.tech;nc-ph-2432.beatq.tech:irqstats.i144.graph_data_size normal
beatq.tech;nc-ph-2432.beatq.tech:irqstats.i144.min 0
beatq.tech;nc-ph-2432.beatq.tech:irqstats.i144.update_rate 300
beatq.tech;nc-ph-2432.beatq.tech:irqstats.iRES.type DERIVE
beatq.tech;nc-ph-2432.beatq.tech:irqstats.iRES.label Rescheduling interrupts
beatq.tech;nc-ph-2432.beatq.tech:irqstats.iRES.info Interrupt RES, for device(s): Rescheduling interrupts
beatq.tech;nc-ph-2432.beatq.tech:irqstats.iRES.min 0
beatq.tech;nc-ph-2432.beatq.tech:irqstats.iRES.update_rate 300
beatq.tech;nc-ph-2432.beatq.tech:irqstats.iRES.graph_data_size normal
beatq.tech;nc-ph-2432.beatq.tech:irqstats.iTLB.graph_data_size normal
beatq.tech;nc-ph-2432.beatq.tech:irqstats.iTLB.min 0
beatq.tech;nc-ph-2432.beatq.tech:irqstats.iTLB.update_rate 300
beatq.tech;nc-ph-2432.beatq.tech:irqstats.iTLB.label TLB shootdowns
beatq.tech;nc-ph-2432.beatq.tech:irqstats.iTLB.info Interrupt TLB, for device(s): TLB shootdowns
beatq.tech;nc-ph-2432.beatq.tech:irqstats.iTLB.type DERIVE
beatq.tech;nc-ph-2432.beatq.tech:irqstats.iNMI.type DERIVE
beatq.tech;nc-ph-2432.beatq.tech:irqstats.iNMI.label Non-maskable interrupts
beatq.tech;nc-ph-2432.beatq.tech:irqstats.iNMI.info Interrupt NMI, for device(s): Non-maskable interrupts
beatq.tech;nc-ph-2432.beatq.tech:irqstats.iNMI.min 0
beatq.tech;nc-ph-2432.beatq.tech:irqstats.iNMI.update_rate 300
beatq.tech;nc-ph-2432.beatq.tech:irqstats.iNMI.graph_data_size normal
beatq.tech;nc-ph-2432.beatq.tech:irqstats.iMIS.label MIS
beatq.tech;nc-ph-2432.beatq.tech:irqstats.iMIS.type DERIVE
beatq.tech;nc-ph-2432.beatq.tech:irqstats.iMIS.graph_data_size normal
beatq.tech;nc-ph-2432.beatq.tech:irqstats.iMIS.min 0
beatq.tech;nc-ph-2432.beatq.tech:irqstats.iMIS.update_rate 300
beatq.tech;nc-ph-2432.beatq.tech:irqstats.i17.type DERIVE
beatq.tech;nc-ph-2432.beatq.tech:irqstats.i17.label 17-fasteoi idma64.1, i2c_designware.1
beatq.tech;nc-ph-2432.beatq.tech:irqstats.i17.info Interrupt 17, for device(s): 17-fasteoi idma64.1, i2c_designware.1
beatq.tech;nc-ph-2432.beatq.tech:irqstats.i17.update_rate 300
beatq.tech;nc-ph-2432.beatq.tech:irqstats.i17.min 0
beatq.tech;nc-ph-2432.beatq.tech:irqstats.i17.graph_data_size normal
beatq.tech;nc-ph-2432.beatq.tech:irqstats.i139.graph_data_size normal
beatq.tech;nc-ph-2432.beatq.tech:irqstats.i139.min 0
beatq.tech;nc-ph-2432.beatq.tech:irqstats.i139.update_rate 300
beatq.tech;nc-ph-2432.beatq.tech:irqstats.i139.label 526339-edge eth1-TxRx-2
beatq.tech;nc-ph-2432.beatq.tech:irqstats.i139.info Interrupt 139, for device(s): 526339-edge eth1-TxRx-2
beatq.tech;nc-ph-2432.beatq.tech:irqstats.i139.type DERIVE
beatq.tech;nc-ph-2432.beatq.tech:irqstats.iHYP.label Hypervisor callback interrupts
beatq.tech;nc-ph-2432.beatq.tech:irqstats.iHYP.info Interrupt HYP, for device(s): Hypervisor callback interrupts
beatq.tech;nc-ph-2432.beatq.tech:irqstats.iHYP.type DERIVE
beatq.tech;nc-ph-2432.beatq.tech:irqstats.iHYP.graph_data_size normal
beatq.tech;nc-ph-2432.beatq.tech:irqstats.iHYP.update_rate 300
beatq.tech;nc-ph-2432.beatq.tech:irqstats.iHYP.min 0
beatq.tech;nc-ph-2432.beatq.tech:irqstats.iPIW.type DERIVE
beatq.tech;nc-ph-2432.beatq.tech:irqstats.iPIW.label Posted-interrupt wakeup event
beatq.tech;nc-ph-2432.beatq.tech:irqstats.iPIW.info Interrupt PIW, for device(s): Posted-interrupt wakeup event
beatq.tech;nc-ph-2432.beatq.tech:irqstats.iPIW.update_rate 300
beatq.tech;nc-ph-2432.beatq.tech:irqstats.iPIW.min 0
beatq.tech;nc-ph-2432.beatq.tech:irqstats.iPIW.graph_data_size normal
beatq.tech;nc-ph-2432.beatq.tech:irqstats.iDFR.graph_data_size normal
beatq.tech;nc-ph-2432.beatq.tech:irqstats.iDFR.update_rate 300
beatq.tech;nc-ph-2432.beatq.tech:irqstats.iDFR.min 0
beatq.tech;nc-ph-2432.beatq.tech:irqstats.iDFR.label Deferred Error APIC interrupts
beatq.tech;nc-ph-2432.beatq.tech:irqstats.iDFR.info Interrupt DFR, for device(s): Deferred Error APIC interrupts
beatq.tech;nc-ph-2432.beatq.tech:irqstats.iDFR.type DERIVE
beatq.tech;nc-ph-2432.beatq.tech:irqstats.i8.update_rate 300
beatq.tech;nc-ph-2432.beatq.tech:irqstats.i8.min 0
beatq.tech;nc-ph-2432.beatq.tech:irqstats.i8.graph_data_size normal
beatq.tech;nc-ph-2432.beatq.tech:irqstats.i8.type DERIVE
beatq.tech;nc-ph-2432.beatq.tech:irqstats.i8.info Interrupt 8, for device(s): 8-edge rtc0
beatq.tech;nc-ph-2432.beatq.tech:irqstats.i8.label 8-edge rtc0
beatq.tech;nc-ph-2432.beatq.tech:irqstats.i126.type DERIVE
beatq.tech;nc-ph-2432.beatq.tech:irqstats.i126.info Interrupt 126, for device(s): 0000:00:14.5 cd [166]
beatq.tech;nc-ph-2432.beatq.tech:irqstats.i126.label 0000:00:14.5 cd [166]
beatq.tech;nc-ph-2432.beatq.tech:irqstats.i126.update_rate 300
beatq.tech;nc-ph-2432.beatq.tech:irqstats.i126.min 0
beatq.tech;nc-ph-2432.beatq.tech:irqstats.i126.graph_data_size normal
beatq.tech;nc-ph-2432.beatq.tech:irqstats.i142.graph_data_size normal
beatq.tech;nc-ph-2432.beatq.tech:irqstats.i142.min 0
beatq.tech;nc-ph-2432.beatq.tech:irqstats.i142.update_rate 300
beatq.tech;nc-ph-2432.beatq.tech:irqstats.i142.info Interrupt 142, for device(s): 526342-edge eth1-TxRx-5
beatq.tech;nc-ph-2432.beatq.tech:irqstats.i142.label 526342-edge eth1-TxRx-5
beatq.tech;nc-ph-2432.beatq.tech:irqstats.i142.type DERIVE
beatq.tech;nc-ph-2432.beatq.tech:irqstats.iIWI.info Interrupt IWI, for device(s): IRQ work interrupts
beatq.tech;nc-ph-2432.beatq.tech:irqstats.iIWI.label IRQ work interrupts
beatq.tech;nc-ph-2432.beatq.tech:irqstats.iIWI.type DERIVE
beatq.tech;nc-ph-2432.beatq.tech:irqstats.iIWI.graph_data_size normal
beatq.tech;nc-ph-2432.beatq.tech:irqstats.iIWI.min 0
beatq.tech;nc-ph-2432.beatq.tech:irqstats.iIWI.update_rate 300
beatq.tech;nc-ph-2432.beatq.tech:irqstats.iERR.update_rate 300
beatq.tech;nc-ph-2432.beatq.tech:irqstats.iERR.min 0
beatq.tech;nc-ph-2432.beatq.tech:irqstats.iERR.graph_data_size normal
beatq.tech;nc-ph-2432.beatq.tech:irqstats.iERR.type DERIVE
beatq.tech;nc-ph-2432.beatq.tech:irqstats.iERR.label ERR
beatq.tech;nc-ph-2432.beatq.tech:irqstats.i138.graph_data_size normal
beatq.tech;nc-ph-2432.beatq.tech:irqstats.i138.update_rate 300
beatq.tech;nc-ph-2432.beatq.tech:irqstats.i138.min 0
beatq.tech;nc-ph-2432.beatq.tech:irqstats.i138.label 526338-edge eth1-TxRx-1
beatq.tech;nc-ph-2432.beatq.tech:irqstats.i138.info Interrupt 138, for device(s): 526338-edge eth1-TxRx-1
beatq.tech;nc-ph-2432.beatq.tech:irqstats.i138.type DERIVE
beatq.tech;nc-ph-2432.beatq.tech:irqstats.iRTR.graph_data_size normal
beatq.tech;nc-ph-2432.beatq.tech:irqstats.iRTR.min 0
beatq.tech;nc-ph-2432.beatq.tech:irqstats.iRTR.update_rate 300
beatq.tech;nc-ph-2432.beatq.tech:irqstats.iRTR.info Interrupt RTR, for device(s): APIC ICR read retries
beatq.tech;nc-ph-2432.beatq.tech:irqstats.iRTR.label APIC ICR read retries
beatq.tech;nc-ph-2432.beatq.tech:irqstats.iRTR.type DERIVE
beatq.tech;nc-ph-2432.beatq.tech:irqstats.i136.type DERIVE
beatq.tech;nc-ph-2432.beatq.tech:irqstats.i136.label 526336-edge eth1
beatq.tech;nc-ph-2432.beatq.tech:irqstats.i136.info Interrupt 136, for device(s): 526336-edge eth1
beatq.tech;nc-ph-2432.beatq.tech:irqstats.i136.min 0
beatq.tech;nc-ph-2432.beatq.tech:irqstats.i136.update_rate 300
beatq.tech;nc-ph-2432.beatq.tech:irqstats.i136.graph_data_size normal
beatq.tech;nc-ph-2432.beatq.tech:irqstats.iPIN.type DERIVE
beatq.tech;nc-ph-2432.beatq.tech:irqstats.iPIN.info Interrupt PIN, for device(s): Posted-interrupt notification event
beatq.tech;nc-ph-2432.beatq.tech:irqstats.iPIN.label Posted-interrupt notification event
beatq.tech;nc-ph-2432.beatq.tech:irqstats.iPIN.update_rate 300
beatq.tech;nc-ph-2432.beatq.tech:irqstats.iPIN.min 0
beatq.tech;nc-ph-2432.beatq.tech:irqstats.iPIN.graph_data_size normal
beatq.tech;nc-ph-2432.beatq.tech:irqstats.i16.graph_data_size normal
beatq.tech;nc-ph-2432.beatq.tech:irqstats.i16.update_rate 300
beatq.tech;nc-ph-2432.beatq.tech:irqstats.i16.min 0
beatq.tech;nc-ph-2432.beatq.tech:irqstats.i16.info Interrupt 16, for device(s): 16-fasteoi i801_smbus, idma64.0, i2c_designware.0
beatq.tech;nc-ph-2432.beatq.tech:irqstats.i16.label 16
beatq.tech;nc-ph-2432.beatq.tech:irqstats.i16.type DERIVE
beatq.tech;nc-ph-2432.beatq.tech:irqstats.i19.graph_data_size normal
beatq.tech;nc-ph-2432.beatq.tech:irqstats.i19.min 0
beatq.tech;nc-ph-2432.beatq.tech:irqstats.i19.update_rate 300
beatq.tech;nc-ph-2432.beatq.tech:irqstats.i19.label 19-fasteoi mmc0
beatq.tech;nc-ph-2432.beatq.tech:irqstats.i19.info Interrupt 19, for device(s): 19-fasteoi mmc0
beatq.tech;nc-ph-2432.beatq.tech:irqstats.i19.type DERIVE
beatq.tech;nc-ph-2432.beatq.tech:irqstats.i137.type DERIVE
beatq.tech;nc-ph-2432.beatq.tech:irqstats.i137.label 526337-edge eth1-TxRx-0
beatq.tech;nc-ph-2432.beatq.tech:irqstats.i137.info Interrupt 137, for device(s): 526337-edge eth1-TxRx-0
beatq.tech;nc-ph-2432.beatq.tech:irqstats.i137.update_rate 300
beatq.tech;nc-ph-2432.beatq.tech:irqstats.i137.min 0
beatq.tech;nc-ph-2432.beatq.tech:irqstats.i137.graph_data_size normal
beatq.tech;nc-ph-2432.beatq.tech:irqstats.iTRM.graph_data_size normal
beatq.tech;nc-ph-2432.beatq.tech:irqstats.iTRM.update_rate 300
beatq.tech;nc-ph-2432.beatq.tech:irqstats.iTRM.min 0
beatq.tech;nc-ph-2432.beatq.tech:irqstats.iTRM.info Interrupt TRM, for device(s): Thermal event interrupts
beatq.tech;nc-ph-2432.beatq.tech:irqstats.iTRM.label Thermal event interrupts
beatq.tech;nc-ph-2432.beatq.tech:irqstats.iTRM.type DERIVE
beatq.tech;nc-ph-2432.beatq.tech:irqstats.iNPI.graph_data_size normal
beatq.tech;nc-ph-2432.beatq.tech:irqstats.iNPI.update_rate 300
beatq.tech;nc-ph-2432.beatq.tech:irqstats.iNPI.min 0
beatq.tech;nc-ph-2432.beatq.tech:irqstats.iNPI.label Nested posted-interrupt event
beatq.tech;nc-ph-2432.beatq.tech:irqstats.iNPI.info Interrupt NPI, for device(s): Nested posted-interrupt event
beatq.tech;nc-ph-2432.beatq.tech:irqstats.iNPI.type DERIVE
beatq.tech;nc-ph-2432.beatq.tech:irqstats.i122.min 0
beatq.tech;nc-ph-2432.beatq.tech:irqstats.i122.update_rate 300
beatq.tech;nc-ph-2432.beatq.tech:irqstats.i122.graph_data_size normal
beatq.tech;nc-ph-2432.beatq.tech:irqstats.i122.type DERIVE
beatq.tech;nc-ph-2432.beatq.tech:irqstats.i122.info Interrupt 122, for device(s): 442368-edge PCIe PME
beatq.tech;nc-ph-2432.beatq.tech:irqstats.i122.label 442368-edge PCIe PME
beatq.tech;nc-ph-2432.beatq.tech:irqstats.iCAL.info Interrupt CAL, for device(s): Function call interrupts
beatq.tech;nc-ph-2432.beatq.tech:irqstats.iCAL.label Function call interrupts
beatq.tech;nc-ph-2432.beatq.tech:irqstats.iCAL.type DERIVE
beatq.tech;nc-ph-2432.beatq.tech:irqstats.iCAL.graph_data_size normal
beatq.tech;nc-ph-2432.beatq.tech:irqstats.iCAL.min 0
beatq.tech;nc-ph-2432.beatq.tech:irqstats.iCAL.update_rate 300
beatq.tech;nc-ph-2432.beatq.tech:irqstats.iHRE.graph_data_size normal
beatq.tech;nc-ph-2432.beatq.tech:irqstats.iHRE.update_rate 300
beatq.tech;nc-ph-2432.beatq.tech:irqstats.iHRE.min 0
beatq.tech;nc-ph-2432.beatq.tech:irqstats.iHRE.info Interrupt HRE, for device(s): Hyper-V reenlightenment interrupts
beatq.tech;nc-ph-2432.beatq.tech:irqstats.iHRE.label Hyper-V reenlightenment interrupts
beatq.tech;nc-ph-2432.beatq.tech:irqstats.iHRE.type DERIVE
beatq.tech;nc-ph-2432.beatq.tech:diskstats_latency.md1.graph_title Average latency for /dev/md1
beatq.tech;nc-ph-2432.beatq.tech:diskstats_latency.md1.graph_args --base 1000 --logarithmic
beatq.tech;nc-ph-2432.beatq.tech:diskstats_latency.md1.graph_vlabel seconds
beatq.tech;nc-ph-2432.beatq.tech:diskstats_latency.md1.graph_category disk
beatq.tech;nc-ph-2432.beatq.tech:diskstats_latency.md1.graph_info This graph shows average waiting time/latency for different categories of disk operations. The times that include the queue times indicate how busy your system is. If the waiting time hits 1 second then your I/O system is 100% busy.
beatq.tech;nc-ph-2432.beatq.tech:diskstats_latency.md1.graph_order svctm avgwait avgrdwait avgwrwait
beatq.tech;nc-ph-2432.beatq.tech:diskstats_latency.md1.avgwait.type GAUGE
beatq.tech;nc-ph-2432.beatq.tech:diskstats_latency.md1.avgwait.label IO Wait time
beatq.tech;nc-ph-2432.beatq.tech:diskstats_latency.md1.avgwait.draw LINE1
beatq.tech;nc-ph-2432.beatq.tech:diskstats_latency.md1.avgwait.info Average wait time for an I/O from request start to finish (includes queue times et al)
beatq.tech;nc-ph-2432.beatq.tech:diskstats_latency.md1.avgwait.update_rate 300
beatq.tech;nc-ph-2432.beatq.tech:diskstats_latency.md1.avgwait.min 0
beatq.tech;nc-ph-2432.beatq.tech:diskstats_latency.md1.avgwait.graph_data_size normal
beatq.tech;nc-ph-2432.beatq.tech:diskstats_latency.md1.svctm.graph_data_size normal
beatq.tech;nc-ph-2432.beatq.tech:diskstats_latency.md1.svctm.min 0
beatq.tech;nc-ph-2432.beatq.tech:diskstats_latency.md1.svctm.update_rate 300
beatq.tech;nc-ph-2432.beatq.tech:diskstats_latency.md1.svctm.info Average time an I/O takes on the block device not including any queue times, just the round trip time for the disk request.
beatq.tech;nc-ph-2432.beatq.tech:diskstats_latency.md1.svctm.draw LINE1
beatq.tech;nc-ph-2432.beatq.tech:diskstats_latency.md1.svctm.label Device IO time
beatq.tech;nc-ph-2432.beatq.tech:diskstats_latency.md1.svctm.type GAUGE
beatq.tech;nc-ph-2432.beatq.tech:diskstats_latency.md1.avgrdwait.warning 0:3
beatq.tech;nc-ph-2432.beatq.tech:diskstats_latency.md1.avgrdwait.graph_data_size normal
beatq.tech;nc-ph-2432.beatq.tech:diskstats_latency.md1.avgrdwait.update_rate 300
beatq.tech;nc-ph-2432.beatq.tech:diskstats_latency.md1.avgrdwait.min 0
beatq.tech;nc-ph-2432.beatq.tech:diskstats_latency.md1.avgrdwait.label Read IO Wait time
beatq.tech;nc-ph-2432.beatq.tech:diskstats_latency.md1.avgrdwait.draw LINE1
beatq.tech;nc-ph-2432.beatq.tech:diskstats_latency.md1.avgrdwait.info Average wait time for a read I/O from request start to finish (includes queue times et al)
beatq.tech;nc-ph-2432.beatq.tech:diskstats_latency.md1.avgrdwait.type GAUGE
beatq.tech;nc-ph-2432.beatq.tech:diskstats_latency.md1.avgwrwait.warning 0:3
beatq.tech;nc-ph-2432.beatq.tech:diskstats_latency.md1.avgwrwait.graph_data_size normal
beatq.tech;nc-ph-2432.beatq.tech:diskstats_latency.md1.avgwrwait.update_rate 300
beatq.tech;nc-ph-2432.beatq.tech:diskstats_latency.md1.avgwrwait.min 0
beatq.tech;nc-ph-2432.beatq.tech:diskstats_latency.md1.avgwrwait.draw LINE1
beatq.tech;nc-ph-2432.beatq.tech:diskstats_latency.md1.avgwrwait.info Average wait time for a write I/O from request start to finish (includes queue times et al)
beatq.tech;nc-ph-2432.beatq.tech:diskstats_latency.md1.avgwrwait.label Write IO Wait time
beatq.tech;nc-ph-2432.beatq.tech:diskstats_latency.md1.avgwrwait.type GAUGE
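The wait-time fields are GAUGEs: time spent on I/O from request issue to completion divided by the number of requests completed in the interval, reported in seconds, with the 0:3 warning range flagging averages above three seconds. A worked sketch with invented deltas:

# Deltas over one 300 s interval, /proc/diskstats style (times in milliseconds).
reads_completed, read_time_ms = 1_200, 5_400
writes_completed, write_time_ms = 800, 10_600

avg_wait_s = (read_time_ms + write_time_ms) / (reads_completed + writes_completed) / 1000.0
print(f"avgwait.value {avg_wait_s:.4f}")   # 0.0080 s, well inside the 0:3 warning band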
beatq.tech;nc-ph-2432.beatq.tech:proc_pri.graph_title Processes priority
beatq.tech;nc-ph-2432.beatq.tech:proc_pri.graph_order low high locked
beatq.tech;nc-ph-2432.beatq.tech:proc_pri.graph_category processes
beatq.tech;nc-ph-2432.beatq.tech:proc_pri.graph_info This graph shows the number of processes at each priority.
beatq.tech;nc-ph-2432.beatq.tech:proc_pri.graph_args --base 1000 -l 0
beatq.tech;nc-ph-2432.beatq.tech:proc_pri.graph_vlabel Number of processes
beatq.tech;nc-ph-2432.beatq.tech:proc_pri.low.label low priority
beatq.tech;nc-ph-2432.beatq.tech:proc_pri.low.draw AREA
beatq.tech;nc-ph-2432.beatq.tech:proc_pri.low.info The number of low-priority processes (tasks)
beatq.tech;nc-ph-2432.beatq.tech:proc_pri.low.graph_data_size normal
beatq.tech;nc-ph-2432.beatq.tech:proc_pri.low.update_rate 300
beatq.tech;nc-ph-2432.beatq.tech:proc_pri.high.graph_data_size normal
beatq.tech;nc-ph-2432.beatq.tech:proc_pri.high.update_rate 300
beatq.tech;nc-ph-2432.beatq.tech:proc_pri.high.info The number of high-priority processes (tasks)
beatq.tech;nc-ph-2432.beatq.tech:proc_pri.high.draw STACK
beatq.tech;nc-ph-2432.beatq.tech:proc_pri.high.label high priority
beatq.tech;nc-ph-2432.beatq.tech:proc_pri.locked.label locked in memory
beatq.tech;nc-ph-2432.beatq.tech:proc_pri.locked.draw STACK
beatq.tech;nc-ph-2432.beatq.tech:proc_pri.locked.info The number of processes that have pages locked into memory (for real-time and custom IO)
beatq.tech;nc-ph-2432.beatq.tech:proc_pri.locked.update_rate 300
beatq.tech;nc-ph-2432.beatq.tech:proc_pri.locked.graph_data_size normal
beatq.tech;nc-ph-2432.beatq.tech:df.graph_title Disk usage in percent
beatq.tech;nc-ph-2432.beatq.tech:df.graph_args --upper-limit 100 -l 0
beatq.tech;nc-ph-2432.beatq.tech:df.graph_vlabel %
beatq.tech;nc-ph-2432.beatq.tech:df.graph_scale no
beatq.tech;nc-ph-2432.beatq.tech:df.graph_category disk
beatq.tech;nc-ph-2432.beatq.tech:df.graph_order _dev_shm _run _sys_fs_cgroup _dev_md1 _dev_md0 _dev_loop0 _run_user_0 _run_user_1030 _run_user_1027 _run_user_978 _run_user_1043
beatq.tech;nc-ph-2432.beatq.tech:df._run_user_1043.label /run/user/1043
beatq.tech;nc-ph-2432.beatq.tech:df._run_user_1043.critical 98
beatq.tech;nc-ph-2432.beatq.tech:df._run_user_1043.graph_data_size normal
beatq.tech;nc-ph-2432.beatq.tech:df._run_user_1043.warning 92
beatq.tech;nc-ph-2432.beatq.tech:df._run_user_1043.update_rate 300
beatq.tech;nc-ph-2432.beatq.tech:df._run_user_1027.label /run/user/1027
beatq.tech;nc-ph-2432.beatq.tech:df._run_user_1027.update_rate 300
beatq.tech;nc-ph-2432.beatq.tech:df._run_user_1027.critical 98
beatq.tech;nc-ph-2432.beatq.tech:df._run_user_1027.graph_data_size normal
beatq.tech;nc-ph-2432.beatq.tech:df._run_user_1027.warning 92
beatq.tech;nc-ph-2432.beatq.tech:df._dev_md1.update_rate 300
beatq.tech;nc-ph-2432.beatq.tech:df._dev_md1.graph_data_size normal
beatq.tech;nc-ph-2432.beatq.tech:df._dev_md1.warning 92
beatq.tech;nc-ph-2432.beatq.tech:df._dev_md1.critical 98
beatq.tech;nc-ph-2432.beatq.tech:df._dev_md1.label /
beatq.tech;nc-ph-2432.beatq.tech:df._sys_fs_cgroup.critical 98
beatq.tech;nc-ph-2432.beatq.tech:df._sys_fs_cgroup.graph_data_size normal
beatq.tech;nc-ph-2432.beatq.tech:df._sys_fs_cgroup.warning 92
beatq.tech;nc-ph-2432.beatq.tech:df._sys_fs_cgroup.update_rate 300
beatq.tech;nc-ph-2432.beatq.tech:df._sys_fs_cgroup.label /sys/fs/cgroup
beatq.tech;nc-ph-2432.beatq.tech:df._run_user_1030.label /run/user/1030
beatq.tech;nc-ph-2432.beatq.tech:df._run_user_1030.update_rate 300
beatq.tech;nc-ph-2432.beatq.tech:df._run_user_1030.graph_data_size normal
beatq.tech;nc-ph-2432.beatq.tech:df._run_user_1030.warning 92
beatq.tech;nc-ph-2432.beatq.tech:df._run_user_1030.critical 98
beatq.tech;nc-ph-2432.beatq.tech:df._run_user_0.warning 92
beatq.tech;nc-ph-2432.beatq.tech:df._run_user_0.graph_data_size normal
beatq.tech;nc-ph-2432.beatq.tech:df._run_user_0.critical 98
beatq.tech;nc-ph-2432.beatq.tech:df._run_user_0.update_rate 300
beatq.tech;nc-ph-2432.beatq.tech:df._run_user_0.label /run/user/0
beatq.tech;nc-ph-2432.beatq.tech:df._run.label /run
beatq.tech;nc-ph-2432.beatq.tech:df._run.update_rate 300
beatq.tech;nc-ph-2432.beatq.tech:df._run.critical 98
beatq.tech;nc-ph-2432.beatq.tech:df._run.graph_data_size normal
beatq.tech;nc-ph-2432.beatq.tech:df._run.warning 92
beatq.tech;nc-ph-2432.beatq.tech:df._dev_loop0.label /tmp
beatq.tech;nc-ph-2432.beatq.tech:df._dev_loop0.update_rate 300
beatq.tech;nc-ph-2432.beatq.tech:df._dev_loop0.critical 98
beatq.tech;nc-ph-2432.beatq.tech:df._dev_loop0.warning 92
beatq.tech;nc-ph-2432.beatq.tech:df._dev_loop0.graph_data_size normal
beatq.tech;nc-ph-2432.beatq.tech:df._dev_shm.label /dev/shm
beatq.tech;nc-ph-2432.beatq.tech:df._dev_shm.update_rate 300
beatq.tech;nc-ph-2432.beatq.tech:df._dev_shm.graph_data_size normal
beatq.tech;nc-ph-2432.beatq.tech:df._dev_shm.warning 92
beatq.tech;nc-ph-2432.beatq.tech:df._dev_shm.critical 98
beatq.tech;nc-ph-2432.beatq.tech:df._run_user_978.warning 92
beatq.tech;nc-ph-2432.beatq.tech:df._run_user_978.graph_data_size normal
beatq.tech;nc-ph-2432.beatq.tech:df._run_user_978.critical 98
beatq.tech;nc-ph-2432.beatq.tech:df._run_user_978.update_rate 300
beatq.tech;nc-ph-2432.beatq.tech:df._run_user_978.label /run/user/978
beatq.tech;nc-ph-2432.beatq.tech:df._dev_md0.update_rate 300
beatq.tech;nc-ph-2432.beatq.tech:df._dev_md0.critical 98
beatq.tech;nc-ph-2432.beatq.tech:df._dev_md0.graph_data_size normal
beatq.tech;nc-ph-2432.beatq.tech:df._dev_md0.warning 92
beatq.tech;nc-ph-2432.beatq.tech:df._dev_md0.label /boot
beatq.tech;nc-ph-2432.beatq.tech:df_inode.graph_title Inode usage in percent
beatq.tech;nc-ph-2432.beatq.tech:df_inode.graph_args --upper-limit 100 -l 0
beatq.tech;nc-ph-2432.beatq.tech:df_inode.graph_vlabel %
beatq.tech;nc-ph-2432.beatq.tech:df_inode.graph_scale no
beatq.tech;nc-ph-2432.beatq.tech:df_inode.graph_category disk
beatq.tech;nc-ph-2432.beatq.tech:df_inode.graph_order devtmpfs _dev_shm _run _sys_fs_cgroup _dev_md1 _dev_md0 _dev_loop0 _run_user_0 _run_user_1030 _run_user_1027 _run_user_978 _run_user_1043
beatq.tech;nc-ph-2432.beatq.tech:df_inode._run_user_1043.label /run/user/1043
beatq.tech;nc-ph-2432.beatq.tech:df_inode._run_user_1043.update_rate 300
beatq.tech;nc-ph-2432.beatq.tech:df_inode._run_user_1043.critical 98
beatq.tech;nc-ph-2432.beatq.tech:df_inode._run_user_1043.warning 92
beatq.tech;nc-ph-2432.beatq.tech:df_inode._run_user_1043.graph_data_size normal
beatq.tech;nc-ph-2432.beatq.tech:df_inode.devtmpfs.label /dev
beatq.tech;nc-ph-2432.beatq.tech:df_inode.devtmpfs.update_rate 300
beatq.tech;nc-ph-2432.beatq.tech:df_inode.devtmpfs.critical 98
beatq.tech;nc-ph-2432.beatq.tech:df_inode.devtmpfs.graph_data_size normal
beatq.tech;nc-ph-2432.beatq.tech:df_inode.devtmpfs.warning 92
beatq.tech;nc-ph-2432.beatq.tech:df_inode._run_user_1030.graph_data_size normal
beatq.tech;nc-ph-2432.beatq.tech:df_inode._run_user_1030.warning 92
beatq.tech;nc-ph-2432.beatq.tech:df_inode._run_user_1030.critical 98
beatq.tech;nc-ph-2432.beatq.tech:df_inode._run_user_1030.update_rate 300
beatq.tech;nc-ph-2432.beatq.tech:df_inode._run_user_1030.label /run/user/1030
beatq.tech;nc-ph-2432.beatq.tech:df_inode._run_user_1027.update_rate 300
beatq.tech;nc-ph-2432.beatq.tech:df_inode._run_user_1027.graph_data_size normal
beatq.tech;nc-ph-2432.beatq.tech:df_inode._run_user_1027.warning 92
beatq.tech;nc-ph-2432.beatq.tech:df_inode._run_user_1027.critical 98
beatq.tech;nc-ph-2432.beatq.tech:df_inode._run_user_1027.label /run/user/1027
beatq.tech;nc-ph-2432.beatq.tech:df_inode._dev_md1.label /
beatq.tech;nc-ph-2432.beatq.tech:df_inode._dev_md1.update_rate 300
beatq.tech;nc-ph-2432.beatq.tech:df_inode._dev_md1.warning 92
beatq.tech;nc-ph-2432.beatq.tech:df_inode._dev_md1.graph_data_size normal
beatq.tech;nc-ph-2432.beatq.tech:df_inode._dev_md1.critical 98
beatq.tech;nc-ph-2432.beatq.tech:df_inode._sys_fs_cgroup.label /sys/fs/cgroup
beatq.tech;nc-ph-2432.beatq.tech:df_inode._sys_fs_cgroup.critical 98
beatq.tech;nc-ph-2432.beatq.tech:df_inode._sys_fs_cgroup.graph_data_size normal
beatq.tech;nc-ph-2432.beatq.tech:df_inode._sys_fs_cgroup.warning 92
beatq.tech;nc-ph-2432.beatq.tech:df_inode._sys_fs_cgroup.update_rate 300
beatq.tech;nc-ph-2432.beatq.tech:df_inode._run.graph_data_size normal
beatq.tech;nc-ph-2432.beatq.tech:df_inode._run.warning 92
beatq.tech;nc-ph-2432.beatq.tech:df_inode._run.critical 98
beatq.tech;nc-ph-2432.beatq.tech:df_inode._run.update_rate 300
beatq.tech;nc-ph-2432.beatq.tech:df_inode._run.label /run
beatq.tech;nc-ph-2432.beatq.tech:df_inode._run_user_0.warning 92
beatq.tech;nc-ph-2432.beatq.tech:df_inode._run_user_0.graph_data_size normal
beatq.tech;nc-ph-2432.beatq.tech:df_inode._run_user_0.critical 98
beatq.tech;nc-ph-2432.beatq.tech:df_inode._run_user_0.update_rate 300
beatq.tech;nc-ph-2432.beatq.tech:df_inode._run_user_0.label /run/user/0
beatq.tech;nc-ph-2432.beatq.tech:df_inode._run_user_978.update_rate 300
beatq.tech;nc-ph-2432.beatq.tech:df_inode._run_user_978.critical 98
beatq.tech;nc-ph-2432.beatq.tech:df_inode._run_user_978.graph_data_size normal
beatq.tech;nc-ph-2432.beatq.tech:df_inode._run_user_978.warning 92
beatq.tech;nc-ph-2432.beatq.tech:df_inode._run_user_978.label /run/user/978
beatq.tech;nc-ph-2432.beatq.tech:df_inode._dev_md0.label /boot
beatq.tech;nc-ph-2432.beatq.tech:df_inode._dev_md0.graph_data_size normal
beatq.tech;nc-ph-2432.beatq.tech:df_inode._dev_md0.warning 92
beatq.tech;nc-ph-2432.beatq.tech:df_inode._dev_md0.critical 98
beatq.tech;nc-ph-2432.beatq.tech:df_inode._dev_md0.update_rate 300
beatq.tech;nc-ph-2432.beatq.tech:df_inode._dev_shm.update_rate 300
beatq.tech;nc-ph-2432.beatq.tech:df_inode._dev_shm.graph_data_size normal
beatq.tech;nc-ph-2432.beatq.tech:df_inode._dev_shm.warning 92
beatq.tech;nc-ph-2432.beatq.tech:df_inode._dev_shm.critical 98
beatq.tech;nc-ph-2432.beatq.tech:df_inode._dev_shm.label /dev/shm
beatq.tech;nc-ph-2432.beatq.tech:df_inode._dev_loop0.label /tmp
beatq.tech;nc-ph-2432.beatq.tech:df_inode._dev_loop0.warning 92
beatq.tech;nc-ph-2432.beatq.tech:df_inode._dev_loop0.graph_data_size normal
beatq.tech;nc-ph-2432.beatq.tech:df_inode._dev_loop0.critical 98
beatq.tech;nc-ph-2432.beatq.tech:df_inode._dev_loop0.update_rate 300
beatq.tech;nc-ph-2432.beatq.tech:if_eth0.graph_order down up down up
beatq.tech;nc-ph-2432.beatq.tech:if_eth0.graph_title eth0 traffic
beatq.tech;nc-ph-2432.beatq.tech:if_eth0.graph_args --base 1000
beatq.tech;nc-ph-2432.beatq.tech:if_eth0.graph_vlabel bits in (-) / out (+) per ${graph_period}
beatq.tech;nc-ph-2432.beatq.tech:if_eth0.graph_category network
beatq.tech;nc-ph-2432.beatq.tech:if_eth0.graph_info This graph shows the traffic of the eth0 network interface. Please note that the traffic is shown in bits per second, not bytes. IMPORTANT: On 32-bit systems the data source for this plugin uses 32-bit counters, which makes the plugin unreliable and unsuitable for most 100-Mb/s (or faster) interfaces, where traffic is expected to exceed 50 Mb/s over a 5 minute period. This means that this plugin is unsuitable for most 32-bit production environments. To avoid this problem, use the ip_ plugin instead. There should be no problems on 64-bit systems running 64-bit kernels.
beatq.tech;nc-ph-2432.beatq.tech:if_eth0.down.type DERIVE
beatq.tech;nc-ph-2432.beatq.tech:if_eth0.down.label received
beatq.tech;nc-ph-2432.beatq.tech:if_eth0.down.cdef down,8,*
beatq.tech;nc-ph-2432.beatq.tech:if_eth0.down.min 0
beatq.tech;nc-ph-2432.beatq.tech:if_eth0.down.update_rate 300
beatq.tech;nc-ph-2432.beatq.tech:if_eth0.down.graph no
beatq.tech;nc-ph-2432.beatq.tech:if_eth0.down.graph_data_size normal
beatq.tech;nc-ph-2432.beatq.tech:if_eth0.up.graph_data_size normal
beatq.tech;nc-ph-2432.beatq.tech:if_eth0.up.cdef up,8,*
beatq.tech;nc-ph-2432.beatq.tech:if_eth0.up.min 0
beatq.tech;nc-ph-2432.beatq.tech:if_eth0.up.negative down
beatq.tech;nc-ph-2432.beatq.tech:if_eth0.up.update_rate 300
beatq.tech;nc-ph-2432.beatq.tech:if_eth0.up.label bps
beatq.tech;nc-ph-2432.beatq.tech:if_eth0.up.info Traffic of the eth0 interface. Unable to determine interface speed. Please run the plugin as root.
beatq.tech;nc-ph-2432.beatq.tech:if_eth0.up.type DERIVE
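Editor's note (illustrative, not part of the datafile): the if_eth0/if_eth1 fields above hold raw byte counters. The DERIVE type with min 0 turns each 300-second sample into a non-negative per-second rate, and the cdef up,8,* (and down,8,*) multiplies that rate by 8 so the graph plots bits per second. A minimal Python sketch of the same arithmetic, using made-up counter readings:

    # Mimics DERIVE (min 0) followed by the cdef "up,8,*" from the fields above.
    def bits_per_second(prev_bytes, cur_bytes, interval_s=300):
        delta = cur_bytes - prev_bytes
        if delta < 0:                      # counter reset: DERIVE with min 0 treats this as unknown
            return None
        return delta / interval_s * 8      # bytes/s -> bits/s, as the cdef does

    # Two invented readings taken 300 s apart: ~4 Mbit/s.
    print(bits_per_second(1_000_000_000, 1_150_000_000))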
beatq.tech;nc-ph-2432.beatq.tech:diskstats_iops.md1.graph_title IOs for /dev/md1
beatq.tech;nc-ph-2432.beatq.tech:diskstats_iops.md1.graph_args --base 1000
beatq.tech;nc-ph-2432.beatq.tech:diskstats_iops.md1.graph_vlabel Units read (-) / write (+)
beatq.tech;nc-ph-2432.beatq.tech:diskstats_iops.md1.graph_category disk
beatq.tech;nc-ph-2432.beatq.tech:diskstats_iops.md1.graph_info This graph shows the number of IO operations per second and the average size of these requests. Lots of small requests should result in lower throughput (separate graph) and higher service time (separate graph). Please note that starting with munin-node 2.0 the divisor for K is 1000 instead of 1024, which it was prior to 2.0 beta 3. This is because the base for this graph is 1000, not 1024.
beatq.tech;nc-ph-2432.beatq.tech:diskstats_iops.md1.graph_order rdio wrio avgrdrqsz avgwrrqsz
beatq.tech;nc-ph-2432.beatq.tech:diskstats_iops.md1.wrio.type GAUGE
beatq.tech;nc-ph-2432.beatq.tech:diskstats_iops.md1.wrio.draw LINE1
beatq.tech;nc-ph-2432.beatq.tech:diskstats_iops.md1.wrio.label IO/sec
beatq.tech;nc-ph-2432.beatq.tech:diskstats_iops.md1.wrio.negative rdio
beatq.tech;nc-ph-2432.beatq.tech:diskstats_iops.md1.wrio.min 0
beatq.tech;nc-ph-2432.beatq.tech:diskstats_iops.md1.wrio.update_rate 300
beatq.tech;nc-ph-2432.beatq.tech:diskstats_iops.md1.wrio.graph_data_size normal
beatq.tech;nc-ph-2432.beatq.tech:diskstats_iops.md1.avgrdrqsz.graph_data_size normal
beatq.tech;nc-ph-2432.beatq.tech:diskstats_iops.md1.avgrdrqsz.graph no
beatq.tech;nc-ph-2432.beatq.tech:diskstats_iops.md1.avgrdrqsz.min 0
beatq.tech;nc-ph-2432.beatq.tech:diskstats_iops.md1.avgrdrqsz.update_rate 300
beatq.tech;nc-ph-2432.beatq.tech:diskstats_iops.md1.avgrdrqsz.draw LINE1
beatq.tech;nc-ph-2432.beatq.tech:diskstats_iops.md1.avgrdrqsz.label dummy
beatq.tech;nc-ph-2432.beatq.tech:diskstats_iops.md1.avgrdrqsz.type GAUGE
beatq.tech;nc-ph-2432.beatq.tech:diskstats_iops.md1.avgwrrqsz.label Req Size (KB)
beatq.tech;nc-ph-2432.beatq.tech:diskstats_iops.md1.avgwrrqsz.info Average Request Size in kilobytes (1000 based)
beatq.tech;nc-ph-2432.beatq.tech:diskstats_iops.md1.avgwrrqsz.draw LINE1
beatq.tech;nc-ph-2432.beatq.tech:diskstats_iops.md1.avgwrrqsz.type GAUGE
beatq.tech;nc-ph-2432.beatq.tech:diskstats_iops.md1.avgwrrqsz.graph_data_size normal
beatq.tech;nc-ph-2432.beatq.tech:diskstats_iops.md1.avgwrrqsz.negative avgrdrqsz
beatq.tech;nc-ph-2432.beatq.tech:diskstats_iops.md1.avgwrrqsz.min 0
beatq.tech;nc-ph-2432.beatq.tech:diskstats_iops.md1.avgwrrqsz.update_rate 300
beatq.tech;nc-ph-2432.beatq.tech:diskstats_iops.md1.rdio.label dummy
beatq.tech;nc-ph-2432.beatq.tech:diskstats_iops.md1.rdio.draw LINE1
beatq.tech;nc-ph-2432.beatq.tech:diskstats_iops.md1.rdio.type GAUGE
beatq.tech;nc-ph-2432.beatq.tech:diskstats_iops.md1.rdio.graph_data_size normal
beatq.tech;nc-ph-2432.beatq.tech:diskstats_iops.md1.rdio.graph no
beatq.tech;nc-ph-2432.beatq.tech:diskstats_iops.md1.rdio.min 0
beatq.tech;nc-ph-2432.beatq.tech:diskstats_iops.md1.rdio.update_rate 300
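Editor's note (illustrative, not part of the datafile): the graph_info above explains that the rdio/wrio fields are IOs per second and the avg*rqsz fields are average request sizes in K with a divisor of 1000 (matching the graph's base of 1000, not 1024). A small Python sketch of that arithmetic, with invented numbers:

    # Average request size in "K" using a 1000 divisor, as the graph_info describes.
    def iops_and_avg_request_kb(total_bytes, io_count, interval_s=300):
        iops = io_count / interval_s
        avg_kb = (total_bytes / io_count) / 1000 if io_count else 0.0
        return iops, avg_kb

    # 12,000 writes moving 600 MB in one 300 s interval -> 40 IO/s at 50 KB each.
    print(iops_and_avg_request_kb(600_000_000, 12_000))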
beatq.tech;nc-ph-2432.beatq.tech:apache_accesses.graph_title Apache accesses
beatq.tech;nc-ph-2432.beatq.tech:apache_accesses.graph_args --base 1000
beatq.tech;nc-ph-2432.beatq.tech:apache_accesses.graph_vlabel accesses / ${graph_period}
beatq.tech;nc-ph-2432.beatq.tech:apache_accesses.graph_category apache
beatq.tech;nc-ph-2432.beatq.tech:apache_accesses.graph_order accesses81
beatq.tech;nc-ph-2432.beatq.tech:apache_accesses.accesses81.type DERIVE
beatq.tech;nc-ph-2432.beatq.tech:apache_accesses.accesses81.info The number of accesses (pages and other items served) globally on the Apache server
beatq.tech;nc-ph-2432.beatq.tech:apache_accesses.accesses81.label port 81
beatq.tech;nc-ph-2432.beatq.tech:apache_accesses.accesses81.min 0
beatq.tech;nc-ph-2432.beatq.tech:apache_accesses.accesses81.max 1000000
beatq.tech;nc-ph-2432.beatq.tech:apache_accesses.accesses81.update_rate 300
beatq.tech;nc-ph-2432.beatq.tech:apache_accesses.accesses81.graph_data_size normal
beatq.tech;nc-ph-2432.beatq.tech:munin_stats.graph_title Munin processing time
beatq.tech;nc-ph-2432.beatq.tech:munin_stats.graph_info This graph shows the run time of the four different processes making up a munin-master run. Munin-master is run from cron every 5 minutes and we want each of the programmes in munin-master to complete before the next instance starts. Especially munin-update and munin-graph are time consuming and their run time bears watching. If munin-update takes too long to run, please see the munin-update graph to determine which host is slowing it down. If munin-graph is running too slowly you need to get clever (email the munin-users mailing list) unless you can buy a faster computer with better disks to run munin on.
beatq.tech;nc-ph-2432.beatq.tech:munin_stats.graph_args --base 1000 -l 0
beatq.tech;nc-ph-2432.beatq.tech:munin_stats.graph_scale yes
beatq.tech;nc-ph-2432.beatq.tech:munin_stats.graph_vlabel seconds
beatq.tech;nc-ph-2432.beatq.tech:munin_stats.graph_category munin
beatq.tech;nc-ph-2432.beatq.tech:munin_stats.graph_order update graph html limits
beatq.tech;nc-ph-2432.beatq.tech:munin_stats.html.graph_data_size normal
beatq.tech;nc-ph-2432.beatq.tech:munin_stats.html.draw AREASTACK
beatq.tech;nc-ph-2432.beatq.tech:munin_stats.html.label munin html
beatq.tech;nc-ph-2432.beatq.tech:munin_stats.html.update_rate 300
beatq.tech;nc-ph-2432.beatq.tech:munin_stats.graph.graph_data_size normal
beatq.tech;nc-ph-2432.beatq.tech:munin_stats.graph.warning 240
beatq.tech;nc-ph-2432.beatq.tech:munin_stats.graph.critical 285
beatq.tech;nc-ph-2432.beatq.tech:munin_stats.graph.update_rate 300
beatq.tech;nc-ph-2432.beatq.tech:munin_stats.graph.draw AREASTACK
beatq.tech;nc-ph-2432.beatq.tech:munin_stats.graph.label munin graph
beatq.tech;nc-ph-2432.beatq.tech:munin_stats.limits.label munin limits
beatq.tech;nc-ph-2432.beatq.tech:munin_stats.limits.graph_data_size normal
beatq.tech;nc-ph-2432.beatq.tech:munin_stats.limits.draw AREASTACK
beatq.tech;nc-ph-2432.beatq.tech:munin_stats.limits.update_rate 300
beatq.tech;nc-ph-2432.beatq.tech:munin_stats.update.draw AREASTACK
beatq.tech;nc-ph-2432.beatq.tech:munin_stats.update.label munin update
beatq.tech;nc-ph-2432.beatq.tech:munin_stats.update.update_rate 300
beatq.tech;nc-ph-2432.beatq.tech:munin_stats.update.graph_data_size normal
beatq.tech;nc-ph-2432.beatq.tech:munin_stats.update.warning 240
beatq.tech;nc-ph-2432.beatq.tech:munin_stats.update.critical 285
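Editor's note (illustrative, not part of the datafile): the warning 240 and critical 285 thresholds on munin_stats.graph and munin_stats.update sit at 80% and 95% of the 300-second cron cycle described in the graph_info above, i.e. each run should finish well before the next one starts. A hedged Python sketch of that check, with an invented run time:

    # Classify a run time against the thresholds recorded above (240 s / 285 s of a 300 s cycle).
    WARNING_S, CRITICAL_S, INTERVAL_S = 240, 285, 300

    def classify(runtime_s):
        if runtime_s >= CRITICAL_S:
            return "critical"
        if runtime_s >= WARNING_S:
            return "warning"
        return "ok"

    print(classify(250), WARNING_S / INTERVAL_S, CRITICAL_S / INTERVAL_S)   # warning 0.8 0.95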
beatq.tech;nc-ph-2432.beatq.tech:cpu.graph_title CPU usage
beatq.tech;nc-ph-2432.beatq.tech:cpu.graph_order system user nice idle iowait irq softirq system user nice idle iowait irq softirq steal guest
beatq.tech;nc-ph-2432.beatq.tech:cpu.graph_args --base 1000 -r --lower-limit 0 --upper-limit 1200
beatq.tech;nc-ph-2432.beatq.tech:cpu.graph_vlabel %
beatq.tech;nc-ph-2432.beatq.tech:cpu.graph_scale no
beatq.tech;nc-ph-2432.beatq.tech:cpu.graph_info This graph shows how CPU time is spent.
beatq.tech;nc-ph-2432.beatq.tech:cpu.graph_category system
beatq.tech;nc-ph-2432.beatq.tech:cpu.graph_period second
beatq.tech;nc-ph-2432.beatq.tech:cpu.idle.graph_data_size normal
beatq.tech;nc-ph-2432.beatq.tech:cpu.idle.min 0
beatq.tech;nc-ph-2432.beatq.tech:cpu.idle.update_rate 300
beatq.tech;nc-ph-2432.beatq.tech:cpu.idle.info Idle CPU time
beatq.tech;nc-ph-2432.beatq.tech:cpu.idle.draw STACK
beatq.tech;nc-ph-2432.beatq.tech:cpu.idle.label idle
beatq.tech;nc-ph-2432.beatq.tech:cpu.idle.type DERIVE
beatq.tech;nc-ph-2432.beatq.tech:cpu.softirq.type DERIVE
beatq.tech;nc-ph-2432.beatq.tech:cpu.softirq.draw STACK
beatq.tech;nc-ph-2432.beatq.tech:cpu.softirq.info CPU time spent handling "batched" interrupts
beatq.tech;nc-ph-2432.beatq.tech:cpu.softirq.label softirq
beatq.tech;nc-ph-2432.beatq.tech:cpu.softirq.update_rate 300
beatq.tech;nc-ph-2432.beatq.tech:cpu.softirq.min 0
beatq.tech;nc-ph-2432.beatq.tech:cpu.softirq.graph_data_size normal
beatq.tech;nc-ph-2432.beatq.tech:cpu.steal.type DERIVE
beatq.tech;nc-ph-2432.beatq.tech:cpu.steal.info The time that a virtual CPU had runnable tasks, but the virtual CPU itself was not running
beatq.tech;nc-ph-2432.beatq.tech:cpu.steal.draw STACK
beatq.tech;nc-ph-2432.beatq.tech:cpu.steal.label steal
beatq.tech;nc-ph-2432.beatq.tech:cpu.steal.update_rate 300
beatq.tech;nc-ph-2432.beatq.tech:cpu.steal.min 0
beatq.tech;nc-ph-2432.beatq.tech:cpu.steal.graph_data_size normal
beatq.tech;nc-ph-2432.beatq.tech:cpu.irq.draw STACK
beatq.tech;nc-ph-2432.beatq.tech:cpu.irq.info CPU time spent handling interrupts
beatq.tech;nc-ph-2432.beatq.tech:cpu.irq.label irq
beatq.tech;nc-ph-2432.beatq.tech:cpu.irq.type DERIVE
beatq.tech;nc-ph-2432.beatq.tech:cpu.irq.graph_data_size normal
beatq.tech;nc-ph-2432.beatq.tech:cpu.irq.update_rate 300
beatq.tech;nc-ph-2432.beatq.tech:cpu.irq.min 0
beatq.tech;nc-ph-2432.beatq.tech:cpu.guest.min 0
beatq.tech;nc-ph-2432.beatq.tech:cpu.guest.update_rate 300
beatq.tech;nc-ph-2432.beatq.tech:cpu.guest.graph_data_size normal
beatq.tech;nc-ph-2432.beatq.tech:cpu.guest.type DERIVE
beatq.tech;nc-ph-2432.beatq.tech:cpu.guest.label guest
beatq.tech;nc-ph-2432.beatq.tech:cpu.guest.draw STACK
beatq.tech;nc-ph-2432.beatq.tech:cpu.guest.info The time spent running a virtual CPU for guest operating systems under the control of the Linux kernel.
beatq.tech;nc-ph-2432.beatq.tech:cpu.system.type DERIVE
beatq.tech;nc-ph-2432.beatq.tech:cpu.system.label system
beatq.tech;nc-ph-2432.beatq.tech:cpu.system.draw AREA
beatq.tech;nc-ph-2432.beatq.tech:cpu.system.info CPU time spent by the kernel in system activities
beatq.tech;nc-ph-2432.beatq.tech:cpu.system.update_rate 300
beatq.tech;nc-ph-2432.beatq.tech:cpu.system.min 0
beatq.tech;nc-ph-2432.beatq.tech:cpu.system.graph_data_size normal
beatq.tech;nc-ph-2432.beatq.tech:cpu.iowait.info CPU time spent waiting for I/O operations to finish when there is nothing else to do.
beatq.tech;nc-ph-2432.beatq.tech:cpu.iowait.draw STACK
beatq.tech;nc-ph-2432.beatq.tech:cpu.iowait.label iowait
beatq.tech;nc-ph-2432.beatq.tech:cpu.iowait.type DERIVE
beatq.tech;nc-ph-2432.beatq.tech:cpu.iowait.graph_data_size normal
beatq.tech;nc-ph-2432.beatq.tech:cpu.iowait.min 0
beatq.tech;nc-ph-2432.beatq.tech:cpu.iowait.update_rate 300
beatq.tech;nc-ph-2432.beatq.tech:cpu.user.label user
beatq.tech;nc-ph-2432.beatq.tech:cpu.user.info CPU time spent by normal programs and daemons
beatq.tech;nc-ph-2432.beatq.tech:cpu.user.draw STACK
beatq.tech;nc-ph-2432.beatq.tech:cpu.user.type DERIVE
beatq.tech;nc-ph-2432.beatq.tech:cpu.user.graph_data_size normal
beatq.tech;nc-ph-2432.beatq.tech:cpu.user.min 0
beatq.tech;nc-ph-2432.beatq.tech:cpu.user.update_rate 300
beatq.tech;nc-ph-2432.beatq.tech:cpu.nice.graph_data_size normal
beatq.tech;nc-ph-2432.beatq.tech:cpu.nice.update_rate 300
beatq.tech;nc-ph-2432.beatq.tech:cpu.nice.min 0
beatq.tech;nc-ph-2432.beatq.tech:cpu.nice.info CPU time spent by nice(1)d programs
beatq.tech;nc-ph-2432.beatq.tech:cpu.nice.draw STACK
beatq.tech;nc-ph-2432.beatq.tech:cpu.nice.label nice
beatq.tech;nc-ph-2432.beatq.tech:cpu.nice.type DERIVE
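Editor's note (illustrative, not part of the datafile): the cpu fields are DERIVE counters of jiffies, so after rate conversion each value is jiffies per second. Assuming the common USER_HZ of 100 ticks per second (an assumption, not recorded here), 100 jiffies/s equals 100% of one CPU, which would make the graph's --upper-limit 1200 consistent with 12 logical CPUs. A minimal Python sketch under that assumption:

    # Convert two jiffy counter samples (as in /proc/stat) into percent of one CPU.
    USER_HZ = 100  # assumed; one CPU accumulates USER_HZ jiffies per second

    def cpu_percent(prev_jiffies, cur_jiffies, interval_s=300):
        rate = max(cur_jiffies - prev_jiffies, 0) / interval_s   # jiffies per second
        return rate / USER_HZ * 100

    # Invented sample: 6,000 "user" jiffies over 300 s -> 20.0 (% of one CPU).
    print(cpu_percent(1_000_000, 1_006_000))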
beatq.tech;nc-ph-2432.beatq.tech:diskstats_latency.sda.graph_title Average latency for /dev/sda
beatq.tech;nc-ph-2432.beatq.tech:diskstats_latency.sda.graph_args --base 1000 --logarithmic
beatq.tech;nc-ph-2432.beatq.tech:diskstats_latency.sda.graph_vlabel seconds
beatq.tech;nc-ph-2432.beatq.tech:diskstats_latency.sda.graph_category disk
beatq.tech;nc-ph-2432.beatq.tech:diskstats_latency.sda.graph_info This graph shows average waiting time/latency for different categories of disk operations. The times that include the queue times indicate how busy your system is. If the waiting time hits 1 second then your I/O system is 100% busy.
beatq.tech;nc-ph-2432.beatq.tech:diskstats_latency.sda.graph_order svctm avgwait avgrdwait avgwrwait
beatq.tech;nc-ph-2432.beatq.tech:diskstats_latency.sda.avgwrwait.min 0
beatq.tech;nc-ph-2432.beatq.tech:diskstats_latency.sda.avgwrwait.update_rate 300
beatq.tech;nc-ph-2432.beatq.tech:diskstats_latency.sda.avgwrwait.graph_data_size normal
beatq.tech;nc-ph-2432.beatq.tech:diskstats_latency.sda.avgwrwait.warning 0:3
beatq.tech;nc-ph-2432.beatq.tech:diskstats_latency.sda.avgwrwait.type GAUGE
beatq.tech;nc-ph-2432.beatq.tech:diskstats_latency.sda.avgwrwait.draw LINE1
beatq.tech;nc-ph-2432.beatq.tech:diskstats_latency.sda.avgwrwait.info Average wait time for a write I/O from request start to finish (includes queue times et al)
beatq.tech;nc-ph-2432.beatq.tech:diskstats_latency.sda.avgwrwait.label Write IO Wait time
beatq.tech;nc-ph-2432.beatq.tech:diskstats_latency.sda.avgrdwait.type GAUGE
beatq.tech;nc-ph-2432.beatq.tech:diskstats_latency.sda.avgrdwait.label Read IO Wait time
beatq.tech;nc-ph-2432.beatq.tech:diskstats_latency.sda.avgrdwait.draw LINE1
beatq.tech;nc-ph-2432.beatq.tech:diskstats_latency.sda.avgrdwait.info Average wait time for a read I/O from request start to finish (includes queue times et al)
beatq.tech;nc-ph-2432.beatq.tech:diskstats_latency.sda.avgrdwait.min 0
beatq.tech;nc-ph-2432.beatq.tech:diskstats_latency.sda.avgrdwait.update_rate 300
beatq.tech;nc-ph-2432.beatq.tech:diskstats_latency.sda.avgrdwait.graph_data_size normal
beatq.tech;nc-ph-2432.beatq.tech:diskstats_latency.sda.avgrdwait.warning 0:3
beatq.tech;nc-ph-2432.beatq.tech:diskstats_latency.sda.svctm.graph_data_size normal
beatq.tech;nc-ph-2432.beatq.tech:diskstats_latency.sda.svctm.min 0
beatq.tech;nc-ph-2432.beatq.tech:diskstats_latency.sda.svctm.update_rate 300
beatq.tech;nc-ph-2432.beatq.tech:diskstats_latency.sda.svctm.label Device IO time
beatq.tech;nc-ph-2432.beatq.tech:diskstats_latency.sda.svctm.draw LINE1
beatq.tech;nc-ph-2432.beatq.tech:diskstats_latency.sda.svctm.info Average time an I/O takes on the block device not including any queue times, just the round trip time for the disk request.
beatq.tech;nc-ph-2432.beatq.tech:diskstats_latency.sda.svctm.type GAUGE
beatq.tech;nc-ph-2432.beatq.tech:diskstats_latency.sda.avgwait.type GAUGE
beatq.tech;nc-ph-2432.beatq.tech:diskstats_latency.sda.avgwait.draw LINE1
beatq.tech;nc-ph-2432.beatq.tech:diskstats_latency.sda.avgwait.info Average wait time for an I/O from request start to finish (includes queue times et al)
beatq.tech;nc-ph-2432.beatq.tech:diskstats_latency.sda.avgwait.label IO Wait time
beatq.tech;nc-ph-2432.beatq.tech:diskstats_latency.sda.avgwait.min 0
beatq.tech;nc-ph-2432.beatq.tech:diskstats_latency.sda.avgwait.update_rate 300
beatq.tech;nc-ph-2432.beatq.tech:diskstats_latency.sda.avgwait.graph_data_size normal
beatq.tech;nc-ph-2432.beatq.tech:if_err_eth1.graph_order rcvd trans rcvd trans rxdrop txdrop collisions
beatq.tech;nc-ph-2432.beatq.tech:if_err_eth1.graph_title eth1 errors
beatq.tech;nc-ph-2432.beatq.tech:if_err_eth1.graph_args --base 1000
beatq.tech;nc-ph-2432.beatq.tech:if_err_eth1.graph_vlabel packets in (-) / out (+) per ${graph_period}
beatq.tech;nc-ph-2432.beatq.tech:if_err_eth1.graph_category network
beatq.tech;nc-ph-2432.beatq.tech:if_err_eth1.graph_info This graph shows the number of errors, packet drops, and collisions on the eth1 network interface.
beatq.tech;nc-ph-2432.beatq.tech:if_err_eth1.trans.label errors
beatq.tech;nc-ph-2432.beatq.tech:if_err_eth1.trans.type COUNTER
beatq.tech;nc-ph-2432.beatq.tech:if_err_eth1.trans.warning 1
beatq.tech;nc-ph-2432.beatq.tech:if_err_eth1.trans.graph_data_size normal
beatq.tech;nc-ph-2432.beatq.tech:if_err_eth1.trans.update_rate 300
beatq.tech;nc-ph-2432.beatq.tech:if_err_eth1.trans.negative rcvd
beatq.tech;nc-ph-2432.beatq.tech:if_err_eth1.collisions.graph_data_size normal
beatq.tech;nc-ph-2432.beatq.tech:if_err_eth1.collisions.label collisions
beatq.tech;nc-ph-2432.beatq.tech:if_err_eth1.collisions.update_rate 300
beatq.tech;nc-ph-2432.beatq.tech:if_err_eth1.collisions.type COUNTER
beatq.tech;nc-ph-2432.beatq.tech:if_err_eth1.rxdrop.type COUNTER
beatq.tech;nc-ph-2432.beatq.tech:if_err_eth1.rxdrop.label drops
beatq.tech;nc-ph-2432.beatq.tech:if_err_eth1.rxdrop.update_rate 300
beatq.tech;nc-ph-2432.beatq.tech:if_err_eth1.rxdrop.graph_data_size normal
beatq.tech;nc-ph-2432.beatq.tech:if_err_eth1.rxdrop.graph no
beatq.tech;nc-ph-2432.beatq.tech:if_err_eth1.rcvd.label errors
beatq.tech;nc-ph-2432.beatq.tech:if_err_eth1.rcvd.type COUNTER
beatq.tech;nc-ph-2432.beatq.tech:if_err_eth1.rcvd.graph_data_size normal
beatq.tech;nc-ph-2432.beatq.tech:if_err_eth1.rcvd.graph no
beatq.tech;nc-ph-2432.beatq.tech:if_err_eth1.rcvd.warning 1
beatq.tech;nc-ph-2432.beatq.tech:if_err_eth1.rcvd.update_rate 300
beatq.tech;nc-ph-2432.beatq.tech:if_err_eth1.txdrop.type COUNTER
beatq.tech;nc-ph-2432.beatq.tech:if_err_eth1.txdrop.label drops
beatq.tech;nc-ph-2432.beatq.tech:if_err_eth1.txdrop.update_rate 300
beatq.tech;nc-ph-2432.beatq.tech:if_err_eth1.txdrop.negative rxdrop
beatq.tech;nc-ph-2432.beatq.tech:if_err_eth1.txdrop.graph_data_size normal
beatq.tech;nc-ph-2432.beatq.tech:exim_mailstats.graph_title Exim mail throughput
beatq.tech;nc-ph-2432.beatq.tech:exim_mailstats.graph_args --base 1000 -l 0
beatq.tech;nc-ph-2432.beatq.tech:exim_mailstats.graph_vlabel mails/${graph_period}
beatq.tech;nc-ph-2432.beatq.tech:exim_mailstats.graph_scale no
beatq.tech;nc-ph-2432.beatq.tech:exim_mailstats.graph_category exim
beatq.tech;nc-ph-2432.beatq.tech:exim_mailstats.graph_order received completed rejected
beatq.tech;nc-ph-2432.beatq.tech:exim_mailstats.rejected.min 0
beatq.tech;nc-ph-2432.beatq.tech:exim_mailstats.rejected.update_rate 300
beatq.tech;nc-ph-2432.beatq.tech:exim_mailstats.rejected.graph_data_size normal
beatq.tech;nc-ph-2432.beatq.tech:exim_mailstats.rejected.type DERIVE
beatq.tech;nc-ph-2432.beatq.tech:exim_mailstats.rejected.label rejected
beatq.tech;nc-ph-2432.beatq.tech:exim_mailstats.completed.graph_data_size normal
beatq.tech;nc-ph-2432.beatq.tech:exim_mailstats.completed.min 0
beatq.tech;nc-ph-2432.beatq.tech:exim_mailstats.completed.update_rate 300
beatq.tech;nc-ph-2432.beatq.tech:exim_mailstats.completed.label completed
beatq.tech;nc-ph-2432.beatq.tech:exim_mailstats.completed.type DERIVE
beatq.tech;nc-ph-2432.beatq.tech:exim_mailstats.received.graph_data_size normal
beatq.tech;nc-ph-2432.beatq.tech:exim_mailstats.received.update_rate 300
beatq.tech;nc-ph-2432.beatq.tech:exim_mailstats.received.min 0
beatq.tech;nc-ph-2432.beatq.tech:exim_mailstats.received.label received
beatq.tech;nc-ph-2432.beatq.tech:exim_mailstats.received.draw AREA
beatq.tech;nc-ph-2432.beatq.tech:exim_mailstats.received.type DERIVE
beatq.tech;nc-ph-2432.beatq.tech:swap.graph_title Swap in/out
beatq.tech;nc-ph-2432.beatq.tech:swap.graph_args -l 0 --base 1000
beatq.tech;nc-ph-2432.beatq.tech:swap.graph_vlabel pages per ${graph_period} in (-) / out (+)
beatq.tech;nc-ph-2432.beatq.tech:swap.graph_category system
beatq.tech;nc-ph-2432.beatq.tech:swap.graph_order swap_in swap_out
beatq.tech;nc-ph-2432.beatq.tech:swap.swap_out.type DERIVE
beatq.tech;nc-ph-2432.beatq.tech:swap.swap_out.label swap
beatq.tech;nc-ph-2432.beatq.tech:swap.swap_out.max 100000
beatq.tech;nc-ph-2432.beatq.tech:swap.swap_out.min 0
beatq.tech;nc-ph-2432.beatq.tech:swap.swap_out.negative swap_in
beatq.tech;nc-ph-2432.beatq.tech:swap.swap_out.update_rate 300
beatq.tech;nc-ph-2432.beatq.tech:swap.swap_out.graph_data_size normal
beatq.tech;nc-ph-2432.beatq.tech:swap.swap_in.min 0
beatq.tech;nc-ph-2432.beatq.tech:swap.swap_in.max 100000
beatq.tech;nc-ph-2432.beatq.tech:swap.swap_in.update_rate 300
beatq.tech;nc-ph-2432.beatq.tech:swap.swap_in.graph_data_size normal
beatq.tech;nc-ph-2432.beatq.tech:swap.swap_in.graph no
beatq.tech;nc-ph-2432.beatq.tech:swap.swap_in.type DERIVE
beatq.tech;nc-ph-2432.beatq.tech:swap.swap_in.label swap
beatq.tech;nc-ph-2432.beatq.tech:mysql_threads.graph_title MySQL threads
beatq.tech;nc-ph-2432.beatq.tech:mysql_threads.graph_vlabel threads
beatq.tech;nc-ph-2432.beatq.tech:mysql_threads.graph_category mysql
beatq.tech;nc-ph-2432.beatq.tech:mysql_threads.graph_info Note that this is an old plugin which is no longer installed by default. It is retained for compatibility with old installations.
beatq.tech;nc-ph-2432.beatq.tech:mysql_threads.graph_args --base 1000
beatq.tech;nc-ph-2432.beatq.tech:mysql_threads.graph_order threads
beatq.tech;nc-ph-2432.beatq.tech:mysql_threads.threads.label mysql threads
beatq.tech;nc-ph-2432.beatq.tech:mysql_threads.threads.graph_data_size normal
beatq.tech;nc-ph-2432.beatq.tech:mysql_threads.threads.update_rate 300
beatq.tech;nc-ph-2432.beatq.tech:diskstats_iops.sda.graph_title IOs for /dev/sda
beatq.tech;nc-ph-2432.beatq.tech:diskstats_iops.sda.graph_args --base 1000
beatq.tech;nc-ph-2432.beatq.tech:diskstats_iops.sda.graph_vlabel Units read (-) / write (+)
beatq.tech;nc-ph-2432.beatq.tech:diskstats_iops.sda.graph_category disk
beatq.tech;nc-ph-2432.beatq.tech:diskstats_iops.sda.graph_info This graph shows the number of IO operations per second and the average size of these requests. Lots of small requests should result in lower throughput (separate graph) and higher service time (separate graph). Please note that starting with munin-node 2.0 the divisor for K is 1000 instead of 1024, which it was prior to 2.0 beta 3. This is because the base for this graph is 1000, not 1024.
beatq.tech;nc-ph-2432.beatq.tech:diskstats_iops.sda.graph_order rdio wrio avgrdrqsz avgwrrqsz
beatq.tech;nc-ph-2432.beatq.tech:diskstats_iops.sda.avgwrrqsz.label Req Size (KB)
beatq.tech;nc-ph-2432.beatq.tech:diskstats_iops.sda.avgwrrqsz.draw LINE1
beatq.tech;nc-ph-2432.beatq.tech:diskstats_iops.sda.avgwrrqsz.info Average Request Size in kilobytes (1000 based)
beatq.tech;nc-ph-2432.beatq.tech:diskstats_iops.sda.avgwrrqsz.type GAUGE
beatq.tech;nc-ph-2432.beatq.tech:diskstats_iops.sda.avgwrrqsz.graph_data_size normal
beatq.tech;nc-ph-2432.beatq.tech:diskstats_iops.sda.avgwrrqsz.negative avgrdrqsz
beatq.tech;nc-ph-2432.beatq.tech:diskstats_iops.sda.avgwrrqsz.min 0
beatq.tech;nc-ph-2432.beatq.tech:diskstats_iops.sda.avgwrrqsz.update_rate 300
beatq.tech;nc-ph-2432.beatq.tech:diskstats_iops.sda.avgrdrqsz.graph_data_size normal
beatq.tech;nc-ph-2432.beatq.tech:diskstats_iops.sda.avgrdrqsz.graph no
beatq.tech;nc-ph-2432.beatq.tech:diskstats_iops.sda.avgrdrqsz.update_rate 300
beatq.tech;nc-ph-2432.beatq.tech:diskstats_iops.sda.avgrdrqsz.min 0
beatq.tech;nc-ph-2432.beatq.tech:diskstats_iops.sda.avgrdrqsz.draw LINE1
beatq.tech;nc-ph-2432.beatq.tech:diskstats_iops.sda.avgrdrqsz.label dummy
beatq.tech;nc-ph-2432.beatq.tech:diskstats_iops.sda.avgrdrqsz.type GAUGE
beatq.tech;nc-ph-2432.beatq.tech:diskstats_iops.sda.wrio.draw LINE1
beatq.tech;nc-ph-2432.beatq.tech:diskstats_iops.sda.wrio.label IO/sec
beatq.tech;nc-ph-2432.beatq.tech:diskstats_iops.sda.wrio.type GAUGE
beatq.tech;nc-ph-2432.beatq.tech:diskstats_iops.sda.wrio.graph_data_size normal
beatq.tech;nc-ph-2432.beatq.tech:diskstats_iops.sda.wrio.min 0
beatq.tech;nc-ph-2432.beatq.tech:diskstats_iops.sda.wrio.negative rdio
beatq.tech;nc-ph-2432.beatq.tech:diskstats_iops.sda.wrio.update_rate 300
beatq.tech;nc-ph-2432.beatq.tech:diskstats_iops.sda.rdio.draw LINE1
beatq.tech;nc-ph-2432.beatq.tech:diskstats_iops.sda.rdio.label dummy
beatq.tech;nc-ph-2432.beatq.tech:diskstats_iops.sda.rdio.type GAUGE
beatq.tech;nc-ph-2432.beatq.tech:diskstats_iops.sda.rdio.graph_data_size normal
beatq.tech;nc-ph-2432.beatq.tech:diskstats_iops.sda.rdio.graph no
beatq.tech;nc-ph-2432.beatq.tech:diskstats_iops.sda.rdio.update_rate 300
beatq.tech;nc-ph-2432.beatq.tech:diskstats_iops.sda.rdio.min 0
beatq.tech;nc-ph-2432.beatq.tech:diskstats_latency.md0.graph_title Average latency for /dev/md0
beatq.tech;nc-ph-2432.beatq.tech:diskstats_latency.md0.graph_args --base 1000 --logarithmic
beatq.tech;nc-ph-2432.beatq.tech:diskstats_latency.md0.graph_vlabel seconds
beatq.tech;nc-ph-2432.beatq.tech:diskstats_latency.md0.graph_category disk
beatq.tech;nc-ph-2432.beatq.tech:diskstats_latency.md0.graph_info This graph shows average waiting time/latency for different categories of disk operations. The times that include the queue times indicate how busy your system is. If the waiting time hits 1 second then your I/O system is 100% busy.
beatq.tech;nc-ph-2432.beatq.tech:diskstats_latency.md0.graph_order svctm avgwait avgrdwait avgwrwait
beatq.tech;nc-ph-2432.beatq.tech:diskstats_latency.md0.svctm.type GAUGE
beatq.tech;nc-ph-2432.beatq.tech:diskstats_latency.md0.svctm.label Device IO time
beatq.tech;nc-ph-2432.beatq.tech:diskstats_latency.md0.svctm.draw LINE1
beatq.tech;nc-ph-2432.beatq.tech:diskstats_latency.md0.svctm.info Average time an I/O takes on the block device not including any queue times, just the round trip time for the disk request.
beatq.tech;nc-ph-2432.beatq.tech:diskstats_latency.md0.svctm.min 0
beatq.tech;nc-ph-2432.beatq.tech:diskstats_latency.md0.svctm.update_rate 300
beatq.tech;nc-ph-2432.beatq.tech:diskstats_latency.md0.svctm.graph_data_size normal
beatq.tech;nc-ph-2432.beatq.tech:diskstats_latency.md0.avgwrwait.label Write IO Wait time
beatq.tech;nc-ph-2432.beatq.tech:diskstats_latency.md0.avgwrwait.info Average wait time for a write I/O from request start to finish (includes queue times et al)
beatq.tech;nc-ph-2432.beatq.tech:diskstats_latency.md0.avgwrwait.draw LINE1
beatq.tech;nc-ph-2432.beatq.tech:diskstats_latency.md0.avgwrwait.type GAUGE
beatq.tech;nc-ph-2432.beatq.tech:diskstats_latency.md0.avgwrwait.warning 0:3
beatq.tech;nc-ph-2432.beatq.tech:diskstats_latency.md0.avgwrwait.graph_data_size normal
beatq.tech;nc-ph-2432.beatq.tech:diskstats_latency.md0.avgwrwait.update_rate 300
beatq.tech;nc-ph-2432.beatq.tech:diskstats_latency.md0.avgwrwait.min 0
beatq.tech;nc-ph-2432.beatq.tech:diskstats_latency.md0.avgrdwait.min 0
beatq.tech;nc-ph-2432.beatq.tech:diskstats_latency.md0.avgrdwait.update_rate 300
beatq.tech;nc-ph-2432.beatq.tech:diskstats_latency.md0.avgrdwait.graph_data_size normal
beatq.tech;nc-ph-2432.beatq.tech:diskstats_latency.md0.avgrdwait.warning 0:3
beatq.tech;nc-ph-2432.beatq.tech:diskstats_latency.md0.avgrdwait.type GAUGE
beatq.tech;nc-ph-2432.beatq.tech:diskstats_latency.md0.avgrdwait.info Average wait time for a read I/O from request start to finish (includes queue times et al)
beatq.tech;nc-ph-2432.beatq.tech:diskstats_latency.md0.avgrdwait.draw LINE1
beatq.tech;nc-ph-2432.beatq.tech:diskstats_latency.md0.avgrdwait.label Read IO Wait time
beatq.tech;nc-ph-2432.beatq.tech:diskstats_latency.md0.avgwait.update_rate 300
beatq.tech;nc-ph-2432.beatq.tech:diskstats_latency.md0.avgwait.min 0
beatq.tech;nc-ph-2432.beatq.tech:diskstats_latency.md0.avgwait.graph_data_size normal
beatq.tech;nc-ph-2432.beatq.tech:diskstats_latency.md0.avgwait.type GAUGE
beatq.tech;nc-ph-2432.beatq.tech:diskstats_latency.md0.avgwait.label IO Wait time
beatq.tech;nc-ph-2432.beatq.tech:diskstats_latency.md0.avgwait.draw LINE1
beatq.tech;nc-ph-2432.beatq.tech:diskstats_latency.md0.avgwait.info Average wait time for an I/O from request start to finish (includes queue times et al)
beatq.tech;nc-ph-2432.beatq.tech:apache_volume.graph_title Apache volume
beatq.tech;nc-ph-2432.beatq.tech:apache_volume.graph_args --base 1000
beatq.tech;nc-ph-2432.beatq.tech:apache_volume.graph_vlabel bytes per ${graph_period}
beatq.tech;nc-ph-2432.beatq.tech:apache_volume.graph_category apache
beatq.tech;nc-ph-2432.beatq.tech:apache_volume.graph_order volume81
beatq.tech;nc-ph-2432.beatq.tech:apache_volume.volume81.label port 81
beatq.tech;nc-ph-2432.beatq.tech:apache_volume.volume81.type DERIVE
beatq.tech;nc-ph-2432.beatq.tech:apache_volume.volume81.graph_data_size normal
beatq.tech;nc-ph-2432.beatq.tech:apache_volume.volume81.max 1000000000
beatq.tech;nc-ph-2432.beatq.tech:apache_volume.volume81.min 0
beatq.tech;nc-ph-2432.beatq.tech:apache_volume.volume81.update_rate 300
beatq.tech;nc-ph-2432.beatq.tech:mysql_innodb.graph_title MySQL InnoDB free tablespace
beatq.tech;nc-ph-2432.beatq.tech:mysql_innodb.graph_args --base 1024
beatq.tech;nc-ph-2432.beatq.tech:mysql_innodb.graph_vlabel Bytes
beatq.tech;nc-ph-2432.beatq.tech:mysql_innodb.graph_category mysql
beatq.tech;nc-ph-2432.beatq.tech:mysql_innodb.graph_info Free bytes in the InnoDB tablespace
beatq.tech;nc-ph-2432.beatq.tech:mysql_innodb.graph_order free
beatq.tech;nc-ph-2432.beatq.tech:mysql_innodb.free.critical 1073741824:
beatq.tech;nc-ph-2432.beatq.tech:mysql_innodb.free.graph_data_size normal
beatq.tech;nc-ph-2432.beatq.tech:mysql_innodb.free.warning 2147483648:
beatq.tech;nc-ph-2432.beatq.tech:mysql_innodb.free.min 0
beatq.tech;nc-ph-2432.beatq.tech:mysql_innodb.free.update_rate 300
beatq.tech;nc-ph-2432.beatq.tech:mysql_innodb.free.label Bytes free
beatq.tech;nc-ph-2432.beatq.tech:mysql_innodb.free.type GAUGE
beatq.tech;nc-ph-2432.beatq.tech:diskstats_throughput.sda.graph_title Disk throughput for /dev/sda
beatq.tech;nc-ph-2432.beatq.tech:diskstats_throughput.sda.graph_args --base 1024
beatq.tech;nc-ph-2432.beatq.tech:diskstats_throughput.sda.graph_vlabel Pr ${graph_period} read (-) / write (+)
beatq.tech;nc-ph-2432.beatq.tech:diskstats_throughput.sda.graph_category disk
beatq.tech;nc-ph-2432.beatq.tech:diskstats_throughput.sda.graph_info This graph shows disk throughput in bytes per ${graph_period}. The graph base is 1024, so KB means kibibytes, and so on.
beatq.tech;nc-ph-2432.beatq.tech:diskstats_throughput.sda.graph_order rdbytes wrbytes
beatq.tech;nc-ph-2432.beatq.tech:diskstats_throughput.sda.wrbytes.graph_data_size normal
beatq.tech;nc-ph-2432.beatq.tech:diskstats_throughput.sda.wrbytes.negative rdbytes
beatq.tech;nc-ph-2432.beatq.tech:diskstats_throughput.sda.wrbytes.min 0
beatq.tech;nc-ph-2432.beatq.tech:diskstats_throughput.sda.wrbytes.update_rate 300
beatq.tech;nc-ph-2432.beatq.tech:diskstats_throughput.sda.wrbytes.draw LINE1
beatq.tech;nc-ph-2432.beatq.tech:diskstats_throughput.sda.wrbytes.label Bytes
beatq.tech;nc-ph-2432.beatq.tech:diskstats_throughput.sda.wrbytes.type GAUGE
beatq.tech;nc-ph-2432.beatq.tech:diskstats_throughput.sda.rdbytes.update_rate 300
beatq.tech;nc-ph-2432.beatq.tech:diskstats_throughput.sda.rdbytes.min 0
beatq.tech;nc-ph-2432.beatq.tech:diskstats_throughput.sda.rdbytes.graph_data_size normal
beatq.tech;nc-ph-2432.beatq.tech:diskstats_throughput.sda.rdbytes.graph no
beatq.tech;nc-ph-2432.beatq.tech:diskstats_throughput.sda.rdbytes.type GAUGE
beatq.tech;nc-ph-2432.beatq.tech:diskstats_throughput.sda.rdbytes.draw LINE1
beatq.tech;nc-ph-2432.beatq.tech:diskstats_throughput.sda.rdbytes.label invisible
beatq.tech;nc-ph-2432.beatq.tech:diskstats_latency.sdb.graph_title Average latency for /dev/sdb
beatq.tech;nc-ph-2432.beatq.tech:diskstats_latency.sdb.graph_args --base 1000 --logarithmic
beatq.tech;nc-ph-2432.beatq.tech:diskstats_latency.sdb.graph_vlabel seconds
beatq.tech;nc-ph-2432.beatq.tech:diskstats_latency.sdb.graph_category disk
beatq.tech;nc-ph-2432.beatq.tech:diskstats_latency.sdb.graph_info This graph shows average waiting time/latency for different categories of disk operations. The times that include the queue times indicate how busy your system is. If the waiting time hits 1 second then your I/O system is 100% busy.
beatq.tech;nc-ph-2432.beatq.tech:diskstats_latency.sdb.graph_order svctm avgwait avgrdwait avgwrwait
beatq.tech;nc-ph-2432.beatq.tech:diskstats_latency.sdb.avgwrwait.warning 0:3
beatq.tech;nc-ph-2432.beatq.tech:diskstats_latency.sdb.avgwrwait.graph_data_size normal
beatq.tech;nc-ph-2432.beatq.tech:diskstats_latency.sdb.avgwrwait.update_rate 300
beatq.tech;nc-ph-2432.beatq.tech:diskstats_latency.sdb.avgwrwait.min 0
beatq.tech;nc-ph-2432.beatq.tech:diskstats_latency.sdb.avgwrwait.draw LINE1
beatq.tech;nc-ph-2432.beatq.tech:diskstats_latency.sdb.avgwrwait.info Average wait time for a write I/O from request start to finish (includes queue times et al)
beatq.tech;nc-ph-2432.beatq.tech:diskstats_latency.sdb.avgwrwait.label Write IO Wait time
beatq.tech;nc-ph-2432.beatq.tech:diskstats_latency.sdb.avgwrwait.type GAUGE
beatq.tech;nc-ph-2432.beatq.tech:diskstats_latency.sdb.avgrdwait.graph_data_size normal
beatq.tech;nc-ph-2432.beatq.tech:diskstats_latency.sdb.avgrdwait.warning 0:3
beatq.tech;nc-ph-2432.beatq.tech:diskstats_latency.sdb.avgrdwait.min 0
beatq.tech;nc-ph-2432.beatq.tech:diskstats_latency.sdb.avgrdwait.update_rate 300
beatq.tech;nc-ph-2432.beatq.tech:diskstats_latency.sdb.avgrdwait.draw LINE1
beatq.tech;nc-ph-2432.beatq.tech:diskstats_latency.sdb.avgrdwait.info Average wait time for a read I/O from request start to finish (includes queue times et al)
beatq.tech;nc-ph-2432.beatq.tech:diskstats_latency.sdb.avgrdwait.label Read IO Wait time
beatq.tech;nc-ph-2432.beatq.tech:diskstats_latency.sdb.avgrdwait.type GAUGE
beatq.tech;nc-ph-2432.beatq.tech:diskstats_latency.sdb.svctm.graph_data_size normal
beatq.tech;nc-ph-2432.beatq.tech:diskstats_latency.sdb.svctm.min 0
beatq.tech;nc-ph-2432.beatq.tech:diskstats_latency.sdb.svctm.update_rate 300
beatq.tech;nc-ph-2432.beatq.tech:diskstats_latency.sdb.svctm.label Device IO time
beatq.tech;nc-ph-2432.beatq.tech:diskstats_latency.sdb.svctm.draw LINE1
beatq.tech;nc-ph-2432.beatq.tech:diskstats_latency.sdb.svctm.info Average time an I/O takes on the block device not including any queue times, just the round trip time for the disk request.
beatq.tech;nc-ph-2432.beatq.tech:diskstats_latency.sdb.svctm.type GAUGE
beatq.tech;nc-ph-2432.beatq.tech:diskstats_latency.sdb.avgwait.info Average wait time for an I/O from request start to finish (includes queue times et al)
beatq.tech;nc-ph-2432.beatq.tech:diskstats_latency.sdb.avgwait.draw LINE1
beatq.tech;nc-ph-2432.beatq.tech:diskstats_latency.sdb.avgwait.label IO Wait time
beatq.tech;nc-ph-2432.beatq.tech:diskstats_latency.sdb.avgwait.type GAUGE
beatq.tech;nc-ph-2432.beatq.tech:diskstats_latency.sdb.avgwait.graph_data_size normal
beatq.tech;nc-ph-2432.beatq.tech:diskstats_latency.sdb.avgwait.update_rate 300
beatq.tech;nc-ph-2432.beatq.tech:diskstats_latency.sdb.avgwait.min 0
beatq.tech;nc-ph-2432.beatq.tech:diskstats_iops.loop0.graph_title IOs for /dev/loop0
beatq.tech;nc-ph-2432.beatq.tech:diskstats_iops.loop0.graph_args --base 1000
beatq.tech;nc-ph-2432.beatq.tech:diskstats_iops.loop0.graph_vlabel Units read (-) / write (+)
beatq.tech;nc-ph-2432.beatq.tech:diskstats_iops.loop0.graph_category disk
beatq.tech;nc-ph-2432.beatq.tech:diskstats_iops.loop0.graph_info This graph shows the number of IO operations per second and the average size of these requests. Lots of small requests should result in lower throughput (separate graph) and higher service time (separate graph). Please note that starting with munin-node 2.0 the divisor for K is 1000 instead of 1024, which it was prior to 2.0 beta 3. This is because the base for this graph is 1000, not 1024.
beatq.tech;nc-ph-2432.beatq.tech:diskstats_iops.loop0.graph_order rdio wrio avgrdrqsz avgwrrqsz
beatq.tech;nc-ph-2432.beatq.tech:diskstats_iops.loop0.rdio.graph no
beatq.tech;nc-ph-2432.beatq.tech:diskstats_iops.loop0.rdio.graph_data_size normal
beatq.tech;nc-ph-2432.beatq.tech:diskstats_iops.loop0.rdio.update_rate 300
beatq.tech;nc-ph-2432.beatq.tech:diskstats_iops.loop0.rdio.min 0
beatq.tech;nc-ph-2432.beatq.tech:diskstats_iops.loop0.rdio.draw LINE1
beatq.tech;nc-ph-2432.beatq.tech:diskstats_iops.loop0.rdio.label dummy
beatq.tech;nc-ph-2432.beatq.tech:diskstats_iops.loop0.rdio.type GAUGE
beatq.tech;nc-ph-2432.beatq.tech:diskstats_iops.loop0.avgrdrqsz.label dummy
beatq.tech;nc-ph-2432.beatq.tech:diskstats_iops.loop0.avgrdrqsz.draw LINE1
beatq.tech;nc-ph-2432.beatq.tech:diskstats_iops.loop0.avgrdrqsz.type GAUGE
beatq.tech;nc-ph-2432.beatq.tech:diskstats_iops.loop0.avgrdrqsz.graph_data_size normal
beatq.tech;nc-ph-2432.beatq.tech:diskstats_iops.loop0.avgrdrqsz.graph no
beatq.tech;nc-ph-2432.beatq.tech:diskstats_iops.loop0.avgrdrqsz.update_rate 300
beatq.tech;nc-ph-2432.beatq.tech:diskstats_iops.loop0.avgrdrqsz.min 0
beatq.tech;nc-ph-2432.beatq.tech:diskstats_iops.loop0.avgwrrqsz.type GAUGE
beatq.tech;nc-ph-2432.beatq.tech:diskstats_iops.loop0.avgwrrqsz.draw LINE1
beatq.tech;nc-ph-2432.beatq.tech:diskstats_iops.loop0.avgwrrqsz.info Average Request Size in kilobytes (1000 based)
beatq.tech;nc-ph-2432.beatq.tech:diskstats_iops.loop0.avgwrrqsz.label Req Size (KB)
beatq.tech;nc-ph-2432.beatq.tech:diskstats_iops.loop0.avgwrrqsz.negative avgrdrqsz
beatq.tech;nc-ph-2432.beatq.tech:diskstats_iops.loop0.avgwrrqsz.min 0
beatq.tech;nc-ph-2432.beatq.tech:diskstats_iops.loop0.avgwrrqsz.update_rate 300
beatq.tech;nc-ph-2432.beatq.tech:diskstats_iops.loop0.avgwrrqsz.graph_data_size normal
beatq.tech;nc-ph-2432.beatq.tech:diskstats_iops.loop0.wrio.min 0
beatq.tech;nc-ph-2432.beatq.tech:diskstats_iops.loop0.wrio.negative rdio
beatq.tech;nc-ph-2432.beatq.tech:diskstats_iops.loop0.wrio.update_rate 300
beatq.tech;nc-ph-2432.beatq.tech:diskstats_iops.loop0.wrio.graph_data_size normal
beatq.tech;nc-ph-2432.beatq.tech:diskstats_iops.loop0.wrio.type GAUGE
beatq.tech;nc-ph-2432.beatq.tech:diskstats_iops.loop0.wrio.draw LINE1
beatq.tech;nc-ph-2432.beatq.tech:diskstats_iops.loop0.wrio.label IO/sec
beatq.tech;nc-ph-2432.beatq.tech:diskstats_throughput.loop0.graph_title Disk throughput for /dev/loop0
beatq.tech;nc-ph-2432.beatq.tech:diskstats_throughput.loop0.graph_args --base 1024
beatq.tech;nc-ph-2432.beatq.tech:diskstats_throughput.loop0.graph_vlabel Pr ${graph_period} read (-) / write (+)
beatq.tech;nc-ph-2432.beatq.tech:diskstats_throughput.loop0.graph_category disk
beatq.tech;nc-ph-2432.beatq.tech:diskstats_throughput.loop0.graph_info This graph shows disk throughput in bytes per ${graph_period}. The graph base is 1024, so KB means kibibytes, and so on.
beatq.tech;nc-ph-2432.beatq.tech:diskstats_throughput.loop0.graph_order rdbytes wrbytes
beatq.tech;nc-ph-2432.beatq.tech:diskstats_throughput.loop0.rdbytes.update_rate 300
beatq.tech;nc-ph-2432.beatq.tech:diskstats_throughput.loop0.rdbytes.min 0
beatq.tech;nc-ph-2432.beatq.tech:diskstats_throughput.loop0.rdbytes.graph_data_size normal
beatq.tech;nc-ph-2432.beatq.tech:diskstats_throughput.loop0.rdbytes.graph no
beatq.tech;nc-ph-2432.beatq.tech:diskstats_throughput.loop0.rdbytes.type GAUGE
beatq.tech;nc-ph-2432.beatq.tech:diskstats_throughput.loop0.rdbytes.draw LINE1
beatq.tech;nc-ph-2432.beatq.tech:diskstats_throughput.loop0.rdbytes.label invisible
beatq.tech;nc-ph-2432.beatq.tech:diskstats_throughput.loop0.wrbytes.update_rate 300
beatq.tech;nc-ph-2432.beatq.tech:diskstats_throughput.loop0.wrbytes.min 0
beatq.tech;nc-ph-2432.beatq.tech:diskstats_throughput.loop0.wrbytes.negative rdbytes
beatq.tech;nc-ph-2432.beatq.tech:diskstats_throughput.loop0.wrbytes.graph_data_size normal
beatq.tech;nc-ph-2432.beatq.tech:diskstats_throughput.loop0.wrbytes.type GAUGE
beatq.tech;nc-ph-2432.beatq.tech:diskstats_throughput.loop0.wrbytes.label Bytes
beatq.tech;nc-ph-2432.beatq.tech:diskstats_throughput.loop0.wrbytes.draw LINE1
beatq.tech;nc-ph-2432.beatq.tech:open_files.graph_title File table usage
beatq.tech;nc-ph-2432.beatq.tech:open_files.graph_args --base 1000 -l 0
beatq.tech;nc-ph-2432.beatq.tech:open_files.graph_vlabel number of open files
beatq.tech;nc-ph-2432.beatq.tech:open_files.graph_category system
beatq.tech;nc-ph-2432.beatq.tech:open_files.graph_info This graph monitors the Linux open files table.
beatq.tech;nc-ph-2432.beatq.tech:open_files.graph_order used
beatq.tech;nc-ph-2432.beatq.tech:open_files.used.label open files
beatq.tech;nc-ph-2432.beatq.tech:open_files.used.info The number of currently open files.
beatq.tech;nc-ph-2432.beatq.tech:open_files.used.critical 6380130
beatq.tech;nc-ph-2432.beatq.tech:open_files.used.graph_data_size normal
beatq.tech;nc-ph-2432.beatq.tech:open_files.used.warning 5989510
beatq.tech;nc-ph-2432.beatq.tech:open_files.used.update_rate 300
beatq.tech;nc-ph-2432.beatq.tech:netstat.graph_title Netstat, combined
beatq.tech;nc-ph-2432.beatq.tech:netstat.graph_args --units=si -l 1 --base 1000 --logarithmic
beatq.tech;nc-ph-2432.beatq.tech:netstat.graph_vlabel TCP connections
beatq.tech;nc-ph-2432.beatq.tech:netstat.graph_category network
beatq.tech;nc-ph-2432.beatq.tech:netstat.graph_period second
beatq.tech;nc-ph-2432.beatq.tech:netstat.graph_info This graph shows the TCP activity of all the network interfaces combined.
beatq.tech;nc-ph-2432.beatq.tech:netstat.graph_order active passive failed resets established
beatq.tech;nc-ph-2432.beatq.tech:netstat.active.info The number of active TCP openings per second.
beatq.tech;nc-ph-2432.beatq.tech:netstat.active.label active
beatq.tech;nc-ph-2432.beatq.tech:netstat.active.type DERIVE
beatq.tech;nc-ph-2432.beatq.tech:netstat.active.graph_data_size normal
beatq.tech;nc-ph-2432.beatq.tech:netstat.active.min 0
beatq.tech;nc-ph-2432.beatq.tech:netstat.active.max 50000
beatq.tech;nc-ph-2432.beatq.tech:netstat.active.update_rate 300
beatq.tech;nc-ph-2432.beatq.tech:netstat.resets.graph_data_size normal
beatq.tech;nc-ph-2432.beatq.tech:netstat.resets.update_rate 300
beatq.tech;nc-ph-2432.beatq.tech:netstat.resets.max 50000
beatq.tech;nc-ph-2432.beatq.tech:netstat.resets.min 0
beatq.tech;nc-ph-2432.beatq.tech:netstat.resets.info The number of TCP connection resets.
beatq.tech;nc-ph-2432.beatq.tech:netstat.resets.label resets
beatq.tech;nc-ph-2432.beatq.tech:netstat.resets.type DERIVE
beatq.tech;nc-ph-2432.beatq.tech:netstat.passive.type DERIVE
beatq.tech;nc-ph-2432.beatq.tech:netstat.passive.label passive
beatq.tech;nc-ph-2432.beatq.tech:netstat.passive.info The number of passive TCP openings per second.
beatq.tech;nc-ph-2432.beatq.tech:netstat.passive.max 50000
beatq.tech;nc-ph-2432.beatq.tech:netstat.passive.min 0
beatq.tech;nc-ph-2432.beatq.tech:netstat.passive.update_rate 300
beatq.tech;nc-ph-2432.beatq.tech:netstat.passive.graph_data_size normal
beatq.tech;nc-ph-2432.beatq.tech:netstat.failed.type DERIVE
beatq.tech;nc-ph-2432.beatq.tech:netstat.failed.info The number of failed TCP connection attempts per second.
beatq.tech;nc-ph-2432.beatq.tech:netstat.failed.label failed
beatq.tech;nc-ph-2432.beatq.tech:netstat.failed.update_rate 300
beatq.tech;nc-ph-2432.beatq.tech:netstat.failed.max 50000
beatq.tech;nc-ph-2432.beatq.tech:netstat.failed.min 0
beatq.tech;nc-ph-2432.beatq.tech:netstat.failed.graph_data_size normal
beatq.tech;nc-ph-2432.beatq.tech:netstat.established.graph_data_size normal
beatq.tech;nc-ph-2432.beatq.tech:netstat.established.update_rate 300
beatq.tech;nc-ph-2432.beatq.tech:netstat.established.info The number of currently open connections.
beatq.tech;nc-ph-2432.beatq.tech:netstat.established.label established
beatq.tech;nc-ph-2432.beatq.tech:netstat.established.type GAUGE
beatq.tech;nc-ph-2432.beatq.tech:if_err_eth0.graph_order rcvd trans rcvd trans rxdrop txdrop collisions
beatq.tech;nc-ph-2432.beatq.tech:if_err_eth0.graph_title eth0 errors
beatq.tech;nc-ph-2432.beatq.tech:if_err_eth0.graph_args --base 1000
beatq.tech;nc-ph-2432.beatq.tech:if_err_eth0.graph_vlabel packets in (-) / out (+) per ${graph_period}
beatq.tech;nc-ph-2432.beatq.tech:if_err_eth0.graph_category network
beatq.tech;nc-ph-2432.beatq.tech:if_err_eth0.graph_info This graph shows the number of errors, packet drops, and collisions on the eth0 network interface.
beatq.tech;nc-ph-2432.beatq.tech:if_err_eth0.collisions.label collisions
beatq.tech;nc-ph-2432.beatq.tech:if_err_eth0.collisions.graph_data_size normal
beatq.tech;nc-ph-2432.beatq.tech:if_err_eth0.collisions.update_rate 300
beatq.tech;nc-ph-2432.beatq.tech:if_err_eth0.collisions.type COUNTER
beatq.tech;nc-ph-2432.beatq.tech:if_err_eth0.txdrop.type COUNTER
beatq.tech;nc-ph-2432.beatq.tech:if_err_eth0.txdrop.label drops
beatq.tech;nc-ph-2432.beatq.tech:if_err_eth0.txdrop.update_rate 300
beatq.tech;nc-ph-2432.beatq.tech:if_err_eth0.txdrop.negative rxdrop
beatq.tech;nc-ph-2432.beatq.tech:if_err_eth0.txdrop.graph_data_size normal
beatq.tech;nc-ph-2432.beatq.tech:if_err_eth0.rxdrop.graph no
beatq.tech;nc-ph-2432.beatq.tech:if_err_eth0.rxdrop.graph_data_size normal
beatq.tech;nc-ph-2432.beatq.tech:if_err_eth0.rxdrop.update_rate 300
beatq.tech;nc-ph-2432.beatq.tech:if_err_eth0.rxdrop.label drops
beatq.tech;nc-ph-2432.beatq.tech:if_err_eth0.rxdrop.type COUNTER
beatq.tech;nc-ph-2432.beatq.tech:if_err_eth0.rcvd.graph no
beatq.tech;nc-ph-2432.beatq.tech:if_err_eth0.rcvd.graph_data_size normal
beatq.tech;nc-ph-2432.beatq.tech:if_err_eth0.rcvd.warning 1
beatq.tech;nc-ph-2432.beatq.tech:if_err_eth0.rcvd.update_rate 300
beatq.tech;nc-ph-2432.beatq.tech:if_err_eth0.rcvd.label errors
beatq.tech;nc-ph-2432.beatq.tech:if_err_eth0.rcvd.type COUNTER
beatq.tech;nc-ph-2432.beatq.tech:if_err_eth0.trans.type COUNTER
beatq.tech;nc-ph-2432.beatq.tech:if_err_eth0.trans.label errors
beatq.tech;nc-ph-2432.beatq.tech:if_err_eth0.trans.negative rcvd
beatq.tech;nc-ph-2432.beatq.tech:if_err_eth0.trans.update_rate 300
beatq.tech;nc-ph-2432.beatq.tech:if_err_eth0.trans.graph_data_size normal
beatq.tech;nc-ph-2432.beatq.tech:if_err_eth0.trans.warning 1
beatq.tech;nc-ph-2432.beatq.tech:diskstats_utilization.graph_title Utilization per device
beatq.tech;nc-ph-2432.beatq.tech:diskstats_utilization.graph_args --base 1000 --lower-limit 0 --upper-limit 100 --rigid
beatq.tech;nc-ph-2432.beatq.tech:diskstats_utilization.graph_vlabel % busy
beatq.tech;nc-ph-2432.beatq.tech:diskstats_utilization.graph_category disk
beatq.tech;nc-ph-2432.beatq.tech:diskstats_utilization.graph_width 400
beatq.tech;nc-ph-2432.beatq.tech:diskstats_utilization.graph_scale no
beatq.tech;nc-ph-2432.beatq.tech:diskstats_utilization.graph_order loop0_util md0_util md1_util sda_util sdb_util
beatq.tech;nc-ph-2432.beatq.tech:diskstats_utilization.md0_util.update_rate 300
beatq.tech;nc-ph-2432.beatq.tech:diskstats_utilization.md0_util.min 0
beatq.tech;nc-ph-2432.beatq.tech:diskstats_utilization.md0_util.graph_data_size normal
beatq.tech;nc-ph-2432.beatq.tech:diskstats_utilization.md0_util.type GAUGE
beatq.tech;nc-ph-2432.beatq.tech:diskstats_utilization.md0_util.label md0
beatq.tech;nc-ph-2432.beatq.tech:diskstats_utilization.md0_util.draw LINE1
beatq.tech;nc-ph-2432.beatq.tech:diskstats_utilization.md0_util.info Utilization of the device
beatq.tech;nc-ph-2432.beatq.tech:diskstats_utilization.sda_util.info Utilization of the device
beatq.tech;nc-ph-2432.beatq.tech:diskstats_utilization.sda_util.draw LINE1
beatq.tech;nc-ph-2432.beatq.tech:diskstats_utilization.sda_util.label sda
beatq.tech;nc-ph-2432.beatq.tech:diskstats_utilization.sda_util.type GAUGE
beatq.tech;nc-ph-2432.beatq.tech:diskstats_utilization.sda_util.graph_data_size normal
beatq.tech;nc-ph-2432.beatq.tech:diskstats_utilization.sda_util.min 0
beatq.tech;nc-ph-2432.beatq.tech:diskstats_utilization.sda_util.update_rate 300
beatq.tech;nc-ph-2432.beatq.tech:diskstats_utilization.sdb_util.label sdb
beatq.tech;nc-ph-2432.beatq.tech:diskstats_utilization.sdb_util.info Utilization of the device
beatq.tech;nc-ph-2432.beatq.tech:diskstats_utilization.sdb_util.draw LINE1
beatq.tech;nc-ph-2432.beatq.tech:diskstats_utilization.sdb_util.type GAUGE
beatq.tech;nc-ph-2432.beatq.tech:diskstats_utilization.sdb_util.graph_data_size normal
beatq.tech;nc-ph-2432.beatq.tech:diskstats_utilization.sdb_util.min 0
beatq.tech;nc-ph-2432.beatq.tech:diskstats_utilization.sdb_util.update_rate 300
beatq.tech;nc-ph-2432.beatq.tech:diskstats_utilization.loop0_util.update_rate 300
beatq.tech;nc-ph-2432.beatq.tech:diskstats_utilization.loop0_util.min 0
beatq.tech;nc-ph-2432.beatq.tech:diskstats_utilization.loop0_util.graph_data_size normal
beatq.tech;nc-ph-2432.beatq.tech:diskstats_utilization.loop0_util.type GAUGE
beatq.tech;nc-ph-2432.beatq.tech:diskstats_utilization.loop0_util.info Utilization of the device
beatq.tech;nc-ph-2432.beatq.tech:diskstats_utilization.loop0_util.draw LINE1
beatq.tech;nc-ph-2432.beatq.tech:diskstats_utilization.loop0_util.label loop0
beatq.tech;nc-ph-2432.beatq.tech:diskstats_utilization.md1_util.update_rate 300
beatq.tech;nc-ph-2432.beatq.tech:diskstats_utilization.md1_util.min 0
beatq.tech;nc-ph-2432.beatq.tech:diskstats_utilization.md1_util.graph_data_size normal
beatq.tech;nc-ph-2432.beatq.tech:diskstats_utilization.md1_util.type GAUGE
beatq.tech;nc-ph-2432.beatq.tech:diskstats_utilization.md1_util.label md1
beatq.tech;nc-ph-2432.beatq.tech:diskstats_utilization.md1_util.draw LINE1
beatq.tech;nc-ph-2432.beatq.tech:diskstats_utilization.md1_util.info Utilization of the device
beatq.tech;nc-ph-2432.beatq.tech:diskstats_iops.sdb.graph_title IOs for /dev/sdb
beatq.tech;nc-ph-2432.beatq.tech:diskstats_iops.sdb.graph_args --base 1000
beatq.tech;nc-ph-2432.beatq.tech:diskstats_iops.sdb.graph_vlabel Units read (-) / write (+)
beatq.tech;nc-ph-2432.beatq.tech:diskstats_iops.sdb.graph_category disk
beatq.tech;nc-ph-2432.beatq.tech:diskstats_iops.sdb.graph_info This graph shows the number of IO operations per second and the average size of these requests. Lots of small requests should result in lower throughput (separate graph) and higher service time (separate graph). Please note that starting with munin-node 2.0 the divisor for K is 1000 instead of 1024, which it was prior to 2.0 beta 3. This is because the base for this graph is 1000, not 1024.
beatq.tech;nc-ph-2432.beatq.tech:diskstats_iops.sdb.graph_order rdio wrio avgrdrqsz avgwrrqsz
beatq.tech;nc-ph-2432.beatq.tech:diskstats_iops.sdb.rdio.graph_data_size normal
beatq.tech;nc-ph-2432.beatq.tech:diskstats_iops.sdb.rdio.graph no
beatq.tech;nc-ph-2432.beatq.tech:diskstats_iops.sdb.rdio.update_rate 300
beatq.tech;nc-ph-2432.beatq.tech:diskstats_iops.sdb.rdio.min 0
beatq.tech;nc-ph-2432.beatq.tech:diskstats_iops.sdb.rdio.label dummy
beatq.tech;nc-ph-2432.beatq.tech:diskstats_iops.sdb.rdio.draw LINE1
beatq.tech;nc-ph-2432.beatq.tech:diskstats_iops.sdb.rdio.type GAUGE
beatq.tech;nc-ph-2432.beatq.tech:diskstats_iops.sdb.wrio.type GAUGE
beatq.tech;nc-ph-2432.beatq.tech:diskstats_iops.sdb.wrio.label IO/sec
beatq.tech;nc-ph-2432.beatq.tech:diskstats_iops.sdb.wrio.draw LINE1
beatq.tech;nc-ph-2432.beatq.tech:diskstats_iops.sdb.wrio.update_rate 300
beatq.tech;nc-ph-2432.beatq.tech:diskstats_iops.sdb.wrio.negative rdio
beatq.tech;nc-ph-2432.beatq.tech:diskstats_iops.sdb.wrio.min 0
beatq.tech;nc-ph-2432.beatq.tech:diskstats_iops.sdb.wrio.graph_data_size normal
beatq.tech;nc-ph-2432.beatq.tech:diskstats_iops.sdb.avgrdrqsz.label dummy
beatq.tech;nc-ph-2432.beatq.tech:diskstats_iops.sdb.avgrdrqsz.draw LINE1
beatq.tech;nc-ph-2432.beatq.tech:diskstats_iops.sdb.avgrdrqsz.type GAUGE
beatq.tech;nc-ph-2432.beatq.tech:diskstats_iops.sdb.avgrdrqsz.graph no
beatq.tech;nc-ph-2432.beatq.tech:diskstats_iops.sdb.avgrdrqsz.graph_data_size normal
beatq.tech;nc-ph-2432.beatq.tech:diskstats_iops.sdb.avgrdrqsz.update_rate 300
beatq.tech;nc-ph-2432.beatq.tech:diskstats_iops.sdb.avgrdrqsz.min 0
beatq.tech;nc-ph-2432.beatq.tech:diskstats_iops.sdb.avgwrrqsz.update_rate 300
beatq.tech;nc-ph-2432.beatq.tech:diskstats_iops.sdb.avgwrrqsz.negative avgrdrqsz
beatq.tech;nc-ph-2432.beatq.tech:diskstats_iops.sdb.avgwrrqsz.min 0
beatq.tech;nc-ph-2432.beatq.tech:diskstats_iops.sdb.avgwrrqsz.graph_data_size normal
beatq.tech;nc-ph-2432.beatq.tech:diskstats_iops.sdb.avgwrrqsz.type GAUGE
beatq.tech;nc-ph-2432.beatq.tech:diskstats_iops.sdb.avgwrrqsz.draw LINE1
beatq.tech;nc-ph-2432.beatq.tech:diskstats_iops.sdb.avgwrrqsz.info Average Request Size in kilobytes (1000 based)
beatq.tech;nc-ph-2432.beatq.tech:diskstats_iops.sdb.avgwrrqsz.label Req Size (KB)
beatq.tech;nc-ph-2432.beatq.tech:users.graph_title Logged in users
beatq.tech;nc-ph-2432.beatq.tech:users.graph_args --base 1000 -l 0
beatq.tech;nc-ph-2432.beatq.tech:users.graph_vlabel Users
beatq.tech;nc-ph-2432.beatq.tech:users.graph_scale no
beatq.tech;nc-ph-2432.beatq.tech:users.graph_category system
beatq.tech;nc-ph-2432.beatq.tech:users.graph_printf %3.0lf
beatq.tech;nc-ph-2432.beatq.tech:users.graph_order tty pty pts X other
beatq.tech;nc-ph-2432.beatq.tech:users.pts.update_rate 300
beatq.tech;nc-ph-2432.beatq.tech:users.pts.graph_data_size normal
beatq.tech;nc-ph-2432.beatq.tech:users.pts.colour 00FFFF
beatq.tech;nc-ph-2432.beatq.tech:users.pts.draw AREASTACK
beatq.tech;nc-ph-2432.beatq.tech:users.pts.label pts
beatq.tech;nc-ph-2432.beatq.tech:users.X.info Users logged in on an X display
beatq.tech;nc-ph-2432.beatq.tech:users.X.draw AREASTACK
beatq.tech;nc-ph-2432.beatq.tech:users.X.label X displays
beatq.tech;nc-ph-2432.beatq.tech:users.X.update_rate 300
beatq.tech;nc-ph-2432.beatq.tech:users.X.graph_data_size normal
beatq.tech;nc-ph-2432.beatq.tech:users.X.colour 000000
beatq.tech;nc-ph-2432.beatq.tech:users.other.label Other users
beatq.tech;nc-ph-2432.beatq.tech:users.other.info Users logged in by indeterminate method
beatq.tech;nc-ph-2432.beatq.tech:users.other.colour FF0000
beatq.tech;nc-ph-2432.beatq.tech:users.other.graph_data_size normal
beatq.tech;nc-ph-2432.beatq.tech:users.other.update_rate 300
beatq.tech;nc-ph-2432.beatq.tech:users.pty.label pty
beatq.tech;nc-ph-2432.beatq.tech:users.pty.draw AREASTACK
beatq.tech;nc-ph-2432.beatq.tech:users.pty.update_rate 300
beatq.tech;nc-ph-2432.beatq.tech:users.pty.colour 0000FF
beatq.tech;nc-ph-2432.beatq.tech:users.pty.graph_data_size normal
beatq.tech;nc-ph-2432.beatq.tech:users.tty.update_rate 300
beatq.tech;nc-ph-2432.beatq.tech:users.tty.graph_data_size normal
beatq.tech;nc-ph-2432.beatq.tech:users.tty.colour 00FF00
beatq.tech;nc-ph-2432.beatq.tech:users.tty.draw AREASTACK
beatq.tech;nc-ph-2432.beatq.tech:users.tty.label tty
beatq.tech;nc-ph-2432.beatq.tech:hddtemp_smartctl.graph_title HDD temperature
beatq.tech;nc-ph-2432.beatq.tech:hddtemp_smartctl.graph_vlabel Degrees Celsius
beatq.tech;nc-ph-2432.beatq.tech:hddtemp_smartctl.graph_category sensors
beatq.tech;nc-ph-2432.beatq.tech:hddtemp_smartctl.graph_info This graph shows the temperature in degrees Celsius of the hard drives in the machine.
beatq.tech;nc-ph-2432.beatq.tech:hddtemp_smartctl.graph_order sda sdb
beatq.tech;nc-ph-2432.beatq.tech:hddtemp_smartctl.sdb.label sdb
beatq.tech;nc-ph-2432.beatq.tech:hddtemp_smartctl.sdb.warning 57
beatq.tech;nc-ph-2432.beatq.tech:hddtemp_smartctl.sdb.graph_data_size normal
beatq.tech;nc-ph-2432.beatq.tech:hddtemp_smartctl.sdb.critical 60
beatq.tech;nc-ph-2432.beatq.tech:hddtemp_smartctl.sdb.update_rate 300
beatq.tech;nc-ph-2432.beatq.tech:hddtemp_smartctl.sdb.max 100
beatq.tech;nc-ph-2432.beatq.tech:hddtemp_smartctl.sda.label sda
beatq.tech;nc-ph-2432.beatq.tech:hddtemp_smartctl.sda.max 100
beatq.tech;nc-ph-2432.beatq.tech:hddtemp_smartctl.sda.update_rate 300
beatq.tech;nc-ph-2432.beatq.tech:hddtemp_smartctl.sda.graph_data_size normal
beatq.tech;nc-ph-2432.beatq.tech:hddtemp_smartctl.sda.warning 57
beatq.tech;nc-ph-2432.beatq.tech:hddtemp_smartctl.sda.critical 60
beatq.tech;nc-ph-2432.beatq.tech:diskstats_throughput.graph_title Throughput per device
beatq.tech;nc-ph-2432.beatq.tech:diskstats_throughput.graph_args --base 1024
beatq.tech;nc-ph-2432.beatq.tech:diskstats_throughput.graph_vlabel Bytes/${graph_period} read (-) / write (+)
beatq.tech;nc-ph-2432.beatq.tech:diskstats_throughput.graph_category disk
beatq.tech;nc-ph-2432.beatq.tech:diskstats_throughput.graph_width 400
beatq.tech;nc-ph-2432.beatq.tech:diskstats_throughput.graph_info This graph shows averaged throughput for the given disk in bytes. Higher throughput is usually linked with higher service time/latency (separate graph). The graph base is 1024, yielding Kibi- and Mebi-bytes.
beatq.tech;nc-ph-2432.beatq.tech:diskstats_throughput.graph_order loop0_rdbytes loop0_wrbytes md0_rdbytes md0_wrbytes md1_rdbytes md1_wrbytes sda_rdbytes sda_wrbytes sdb_rdbytes sdb_wrbytes
beatq.tech;nc-ph-2432.beatq.tech:diskstats_throughput.loop0_rdbytes.graph no
beatq.tech;nc-ph-2432.beatq.tech:diskstats_throughput.loop0_rdbytes.graph_data_size normal
beatq.tech;nc-ph-2432.beatq.tech:diskstats_throughput.loop0_rdbytes.update_rate 300
beatq.tech;nc-ph-2432.beatq.tech:diskstats_throughput.loop0_rdbytes.min 0
beatq.tech;nc-ph-2432.beatq.tech:diskstats_throughput.loop0_rdbytes.draw LINE1
beatq.tech;nc-ph-2432.beatq.tech:diskstats_throughput.loop0_rdbytes.label loop0
beatq.tech;nc-ph-2432.beatq.tech:diskstats_throughput.loop0_rdbytes.type GAUGE
beatq.tech;nc-ph-2432.beatq.tech:diskstats_throughput.md1_wrbytes.update_rate 300
beatq.tech;nc-ph-2432.beatq.tech:diskstats_throughput.md1_wrbytes.min 0
beatq.tech;nc-ph-2432.beatq.tech:diskstats_throughput.md1_wrbytes.negative md1_rdbytes
beatq.tech;nc-ph-2432.beatq.tech:diskstats_throughput.md1_wrbytes.graph_data_size normal
beatq.tech;nc-ph-2432.beatq.tech:diskstats_throughput.md1_wrbytes.type GAUGE
beatq.tech;nc-ph-2432.beatq.tech:diskstats_throughput.md1_wrbytes.label md1
beatq.tech;nc-ph-2432.beatq.tech:diskstats_throughput.md1_wrbytes.draw LINE1
beatq.tech;nc-ph-2432.beatq.tech:diskstats_throughput.md1_rdbytes.label md1
beatq.tech;nc-ph-2432.beatq.tech:diskstats_throughput.md1_rdbytes.draw LINE1
beatq.tech;nc-ph-2432.beatq.tech:diskstats_throughput.md1_rdbytes.type GAUGE
beatq.tech;nc-ph-2432.beatq.tech:diskstats_throughput.md1_rdbytes.graph no
beatq.tech;nc-ph-2432.beatq.tech:diskstats_throughput.md1_rdbytes.graph_data_size normal
beatq.tech;nc-ph-2432.beatq.tech:diskstats_throughput.md1_rdbytes.min 0
beatq.tech;nc-ph-2432.beatq.tech:diskstats_throughput.md1_rdbytes.update_rate 300
beatq.tech;nc-ph-2432.beatq.tech:diskstats_throughput.md0_rdbytes.graph_data_size normal
beatq.tech;nc-ph-2432.beatq.tech:diskstats_throughput.md0_rdbytes.graph no
beatq.tech;nc-ph-2432.beatq.tech:diskstats_throughput.md0_rdbytes.update_rate 300
beatq.tech;nc-ph-2432.beatq.tech:diskstats_throughput.md0_rdbytes.min 0
beatq.tech;nc-ph-2432.beatq.tech:diskstats_throughput.md0_rdbytes.label md0
beatq.tech;nc-ph-2432.beatq.tech:diskstats_throughput.md0_rdbytes.draw LINE1
beatq.tech;nc-ph-2432.beatq.tech:diskstats_throughput.md0_rdbytes.type GAUGE
beatq.tech;nc-ph-2432.beatq.tech:diskstats_throughput.md0_wrbytes.type GAUGE
beatq.tech;nc-ph-2432.beatq.tech:diskstats_throughput.md0_wrbytes.draw LINE1
beatq.tech;nc-ph-2432.beatq.tech:diskstats_throughput.md0_wrbytes.label md0
beatq.tech;nc-ph-2432.beatq.tech:diskstats_throughput.md0_wrbytes.update_rate 300
beatq.tech;nc-ph-2432.beatq.tech:diskstats_throughput.md0_wrbytes.negative md0_rdbytes
beatq.tech;nc-ph-2432.beatq.tech:diskstats_throughput.md0_wrbytes.min 0
beatq.tech;nc-ph-2432.beatq.tech:diskstats_throughput.md0_wrbytes.graph_data_size normal
beatq.tech;nc-ph-2432.beatq.tech:diskstats_throughput.sda_wrbytes.type GAUGE
beatq.tech;nc-ph-2432.beatq.tech:diskstats_throughput.sda_wrbytes.draw LINE1
beatq.tech;nc-ph-2432.beatq.tech:diskstats_throughput.sda_wrbytes.label sda
beatq.tech;nc-ph-2432.beatq.tech:diskstats_throughput.sda_wrbytes.update_rate 300
beatq.tech;nc-ph-2432.beatq.tech:diskstats_throughput.sda_wrbytes.negative sda_rdbytes
beatq.tech;nc-ph-2432.beatq.tech:diskstats_throughput.sda_wrbytes.min 0
beatq.tech;nc-ph-2432.beatq.tech:diskstats_throughput.sda_wrbytes.graph_data_size normal
beatq.tech;nc-ph-2432.beatq.tech:diskstats_throughput.sda_rdbytes.type GAUGE
beatq.tech;nc-ph-2432.beatq.tech:diskstats_throughput.sda_rdbytes.label sda
beatq.tech;nc-ph-2432.beatq.tech:diskstats_throughput.sda_rdbytes.draw LINE1
beatq.tech;nc-ph-2432.beatq.tech:diskstats_throughput.sda_rdbytes.min 0
beatq.tech;nc-ph-2432.beatq.tech:diskstats_throughput.sda_rdbytes.update_rate 300
beatq.tech;nc-ph-2432.beatq.tech:diskstats_throughput.sda_rdbytes.graph no
beatq.tech;nc-ph-2432.beatq.tech:diskstats_throughput.sda_rdbytes.graph_data_size normal
beatq.tech;nc-ph-2432.beatq.tech:diskstats_throughput.loop0_wrbytes.update_rate 300
beatq.tech;nc-ph-2432.beatq.tech:diskstats_throughput.loop0_wrbytes.negative loop0_rdbytes
beatq.tech;nc-ph-2432.beatq.tech:diskstats_throughput.loop0_wrbytes.min 0
beatq.tech;nc-ph-2432.beatq.tech:diskstats_throughput.loop0_wrbytes.graph_data_size normal
beatq.tech;nc-ph-2432.beatq.tech:diskstats_throughput.loop0_wrbytes.type GAUGE
beatq.tech;nc-ph-2432.beatq.tech:diskstats_throughput.loop0_wrbytes.label loop0
beatq.tech;nc-ph-2432.beatq.tech:diskstats_throughput.loop0_wrbytes.draw LINE1
beatq.tech;nc-ph-2432.beatq.tech:diskstats_throughput.sdb_wrbytes.negative sdb_rdbytes
beatq.tech;nc-ph-2432.beatq.tech:diskstats_throughput.sdb_wrbytes.min 0
beatq.tech;nc-ph-2432.beatq.tech:diskstats_throughput.sdb_wrbytes.update_rate 300
beatq.tech;nc-ph-2432.beatq.tech:diskstats_throughput.sdb_wrbytes.graph_data_size normal
beatq.tech;nc-ph-2432.beatq.tech:diskstats_throughput.sdb_wrbytes.type GAUGE
beatq.tech;nc-ph-2432.beatq.tech:diskstats_throughput.sdb_wrbytes.draw LINE1
beatq.tech;nc-ph-2432.beatq.tech:diskstats_throughput.sdb_wrbytes.label sdb
beatq.tech;nc-ph-2432.beatq.tech:diskstats_throughput.sdb_rdbytes.draw LINE1
beatq.tech;nc-ph-2432.beatq.tech:diskstats_throughput.sdb_rdbytes.label sdb
beatq.tech;nc-ph-2432.beatq.tech:diskstats_throughput.sdb_rdbytes.type GAUGE
beatq.tech;nc-ph-2432.beatq.tech:diskstats_throughput.sdb_rdbytes.graph no
beatq.tech;nc-ph-2432.beatq.tech:diskstats_throughput.sdb_rdbytes.graph_data_size normal
beatq.tech;nc-ph-2432.beatq.tech:diskstats_throughput.sdb_rdbytes.min 0
beatq.tech;nc-ph-2432.beatq.tech:diskstats_throughput.sdb_rdbytes.update_rate 300
beatq.tech;nc-ph-2432.beatq.tech:diskstats_throughput.md0.graph_title Disk throughput for /dev/md0
beatq.tech;nc-ph-2432.beatq.tech:diskstats_throughput.md0.graph_args --base 1024
beatq.tech;nc-ph-2432.beatq.tech:diskstats_throughput.md0.graph_vlabel Pr ${graph_period} read (-) / write (+)
beatq.tech;nc-ph-2432.beatq.tech:diskstats_throughput.md0.graph_category disk
beatq.tech;nc-ph-2432.beatq.tech:diskstats_throughput.md0.graph_info This graph shows disk throughput in bytes per ${graph_period}. The graph base is 1024, so KB means Kibibytes and so on.
beatq.tech;nc-ph-2432.beatq.tech:diskstats_throughput.md0.graph_order rdbytes wrbytes
beatq.tech;nc-ph-2432.beatq.tech:diskstats_throughput.md0.wrbytes.graph_data_size normal
beatq.tech;nc-ph-2432.beatq.tech:diskstats_throughput.md0.wrbytes.min 0
beatq.tech;nc-ph-2432.beatq.tech:diskstats_throughput.md0.wrbytes.negative rdbytes
beatq.tech;nc-ph-2432.beatq.tech:diskstats_throughput.md0.wrbytes.update_rate 300
beatq.tech;nc-ph-2432.beatq.tech:diskstats_throughput.md0.wrbytes.draw LINE1
beatq.tech;nc-ph-2432.beatq.tech:diskstats_throughput.md0.wrbytes.label Bytes
beatq.tech;nc-ph-2432.beatq.tech:diskstats_throughput.md0.wrbytes.type GAUGE
beatq.tech;nc-ph-2432.beatq.tech:diskstats_throughput.md0.rdbytes.label invisible
beatq.tech;nc-ph-2432.beatq.tech:diskstats_throughput.md0.rdbytes.draw LINE1
beatq.tech;nc-ph-2432.beatq.tech:diskstats_throughput.md0.rdbytes.type GAUGE
beatq.tech;nc-ph-2432.beatq.tech:diskstats_throughput.md0.rdbytes.graph no
beatq.tech;nc-ph-2432.beatq.tech:diskstats_throughput.md0.rdbytes.graph_data_size normal
beatq.tech;nc-ph-2432.beatq.tech:diskstats_throughput.md0.rdbytes.update_rate 300
beatq.tech;nc-ph-2432.beatq.tech:diskstats_throughput.md0.rdbytes.min 0
beatq.tech;nc-ph-2432.beatq.tech:diskstats_utilization.sda.graph_title Disk utilization for /dev/sda
beatq.tech;nc-ph-2432.beatq.tech:diskstats_utilization.sda.graph_args --base 1000 --lower-limit 0 --upper-limit 100 --rigid
beatq.tech;nc-ph-2432.beatq.tech:diskstats_utilization.sda.graph_vlabel % busy
beatq.tech;nc-ph-2432.beatq.tech:diskstats_utilization.sda.graph_category disk
beatq.tech;nc-ph-2432.beatq.tech:diskstats_utilization.sda.graph_scale no
beatq.tech;nc-ph-2432.beatq.tech:diskstats_utilization.sda.graph_order util
beatq.tech;nc-ph-2432.beatq.tech:diskstats_utilization.sda.util.graph_data_size normal
beatq.tech;nc-ph-2432.beatq.tech:diskstats_utilization.sda.util.update_rate 300
beatq.tech;nc-ph-2432.beatq.tech:diskstats_utilization.sda.util.min 0
beatq.tech;nc-ph-2432.beatq.tech:diskstats_utilization.sda.util.draw LINE1
beatq.tech;nc-ph-2432.beatq.tech:diskstats_utilization.sda.util.info Utilization of the device in percent. If the time spent for I/O is close to 1000msec for a given second, the device is nearly 100% saturated.
beatq.tech;nc-ph-2432.beatq.tech:diskstats_utilization.sda.util.label Utilization
beatq.tech;nc-ph-2432.beatq.tech:diskstats_utilization.sda.util.type GAUGE
beatq.tech;nc-ph-2432.beatq.tech:exim_mailqueue.graph_title Exim Mailqueue
beatq.tech;nc-ph-2432.beatq.tech:exim_mailqueue.graph_args --base 1000 -l 0
beatq.tech;nc-ph-2432.beatq.tech:exim_mailqueue.graph_vlabel mails in queue
beatq.tech;nc-ph-2432.beatq.tech:exim_mailqueue.graph_category exim
beatq.tech;nc-ph-2432.beatq.tech:exim_mailqueue.graph_order mails frozen
beatq.tech;nc-ph-2432.beatq.tech:exim_mailqueue.frozen.critical 0:200
beatq.tech;nc-ph-2432.beatq.tech:exim_mailqueue.frozen.colour 0022FF
beatq.tech;nc-ph-2432.beatq.tech:exim_mailqueue.frozen.graph_data_size normal
beatq.tech;nc-ph-2432.beatq.tech:exim_mailqueue.frozen.warning 0:100
beatq.tech;nc-ph-2432.beatq.tech:exim_mailqueue.frozen.update_rate 300
beatq.tech;nc-ph-2432.beatq.tech:exim_mailqueue.frozen.label frozen mails
beatq.tech;nc-ph-2432.beatq.tech:exim_mailqueue.frozen.draw STACK
beatq.tech;nc-ph-2432.beatq.tech:exim_mailqueue.mails.draw AREA
beatq.tech;nc-ph-2432.beatq.tech:exim_mailqueue.mails.label queued mails
beatq.tech;nc-ph-2432.beatq.tech:exim_mailqueue.mails.graph_data_size normal
beatq.tech;nc-ph-2432.beatq.tech:exim_mailqueue.mails.warning 0:100
beatq.tech;nc-ph-2432.beatq.tech:exim_mailqueue.mails.critical 0:200
beatq.tech;nc-ph-2432.beatq.tech:exim_mailqueue.mails.colour 00AA00
beatq.tech;nc-ph-2432.beatq.tech:exim_mailqueue.mails.update_rate 300
beatq.tech;nc-ph-2432.beatq.tech:fw_packets.graph_title Firewall Throughput
beatq.tech;nc-ph-2432.beatq.tech:fw_packets.graph_args --base 1000 -l 0
beatq.tech;nc-ph-2432.beatq.tech:fw_packets.graph_vlabel Packets/${graph_period}
beatq.tech;nc-ph-2432.beatq.tech:fw_packets.graph_category network
beatq.tech;nc-ph-2432.beatq.tech:fw_packets.graph_order received forwarded
beatq.tech;nc-ph-2432.beatq.tech:fw_packets.forwarded.label Forwarded
beatq.tech;nc-ph-2432.beatq.tech:fw_packets.forwarded.draw LINE2
beatq.tech;nc-ph-2432.beatq.tech:fw_packets.forwarded.type DERIVE
beatq.tech;nc-ph-2432.beatq.tech:fw_packets.forwarded.graph_data_size normal
beatq.tech;nc-ph-2432.beatq.tech:fw_packets.forwarded.update_rate 300
beatq.tech;nc-ph-2432.beatq.tech:fw_packets.forwarded.min 0
beatq.tech;nc-ph-2432.beatq.tech:fw_packets.received.draw AREA
beatq.tech;nc-ph-2432.beatq.tech:fw_packets.received.label Received
beatq.tech;nc-ph-2432.beatq.tech:fw_packets.received.type DERIVE
beatq.tech;nc-ph-2432.beatq.tech:fw_packets.received.graph_data_size normal
beatq.tech;nc-ph-2432.beatq.tech:fw_packets.received.min 0
beatq.tech;nc-ph-2432.beatq.tech:fw_packets.received.update_rate 300
beatq.tech;nc-ph-2432.beatq.tech:diskstats_latency.loop0.graph_title Average latency for /dev/loop0
beatq.tech;nc-ph-2432.beatq.tech:diskstats_latency.loop0.graph_args --base 1000 --logarithmic
beatq.tech;nc-ph-2432.beatq.tech:diskstats_latency.loop0.graph_vlabel seconds
beatq.tech;nc-ph-2432.beatq.tech:diskstats_latency.loop0.graph_category disk
beatq.tech;nc-ph-2432.beatq.tech:diskstats_latency.loop0.graph_info This graph shows average waiting time/latency for different categories of disk operations. The times that include the queue times indicate how busy your system is. If the waiting time hits 1 second then your I/O system is 100% busy.
beatq.tech;nc-ph-2432.beatq.tech:diskstats_latency.loop0.graph_order svctm avgwait avgrdwait avgwrwait
beatq.tech;nc-ph-2432.beatq.tech:diskstats_latency.loop0.avgwrwait.label Write IO Wait time
beatq.tech;nc-ph-2432.beatq.tech:diskstats_latency.loop0.avgwrwait.info Average wait time for a write I/O from request start to finish (includes queue times et al)
beatq.tech;nc-ph-2432.beatq.tech:diskstats_latency.loop0.avgwrwait.draw LINE1
beatq.tech;nc-ph-2432.beatq.tech:diskstats_latency.loop0.avgwrwait.type GAUGE
beatq.tech;nc-ph-2432.beatq.tech:diskstats_latency.loop0.avgwrwait.graph_data_size normal
beatq.tech;nc-ph-2432.beatq.tech:diskstats_latency.loop0.avgwrwait.warning 0:3
beatq.tech;nc-ph-2432.beatq.tech:diskstats_latency.loop0.avgwrwait.min 0
beatq.tech;nc-ph-2432.beatq.tech:diskstats_latency.loop0.avgwrwait.update_rate 300
beatq.tech;nc-ph-2432.beatq.tech:diskstats_latency.loop0.avgrdwait.draw LINE1
beatq.tech;nc-ph-2432.beatq.tech:diskstats_latency.loop0.avgrdwait.info Average wait time for a read I/O from request start to finish (includes queue times et al)
beatq.tech;nc-ph-2432.beatq.tech:diskstats_latency.loop0.avgrdwait.label Read IO Wait time
beatq.tech;nc-ph-2432.beatq.tech:diskstats_latency.loop0.avgrdwait.type GAUGE
beatq.tech;nc-ph-2432.beatq.tech:diskstats_latency.loop0.avgrdwait.graph_data_size normal
beatq.tech;nc-ph-2432.beatq.tech:diskstats_latency.loop0.avgrdwait.warning 0:3
beatq.tech;nc-ph-2432.beatq.tech:diskstats_latency.loop0.avgrdwait.min 0
beatq.tech;nc-ph-2432.beatq.tech:diskstats_latency.loop0.avgrdwait.update_rate 300
beatq.tech;nc-ph-2432.beatq.tech:diskstats_latency.loop0.svctm.graph_data_size normal
beatq.tech;nc-ph-2432.beatq.tech:diskstats_latency.loop0.svctm.update_rate 300
beatq.tech;nc-ph-2432.beatq.tech:diskstats_latency.loop0.svctm.min 0
beatq.tech;nc-ph-2432.beatq.tech:diskstats_latency.loop0.svctm.draw LINE1
beatq.tech;nc-ph-2432.beatq.tech:diskstats_latency.loop0.svctm.info Average time an I/O takes on the block device not including any queue times, just the round trip time for the disk request.
beatq.tech;nc-ph-2432.beatq.tech:diskstats_latency.loop0.svctm.label Device IO time
beatq.tech;nc-ph-2432.beatq.tech:diskstats_latency.loop0.svctm.type GAUGE
beatq.tech;nc-ph-2432.beatq.tech:diskstats_latency.loop0.avgwait.label IO Wait time
beatq.tech;nc-ph-2432.beatq.tech:diskstats_latency.loop0.avgwait.info Average wait time for an I/O from request start to finish (includes queue times et al)
beatq.tech;nc-ph-2432.beatq.tech:diskstats_latency.loop0.avgwait.draw LINE1
beatq.tech;nc-ph-2432.beatq.tech:diskstats_latency.loop0.avgwait.type GAUGE
beatq.tech;nc-ph-2432.beatq.tech:diskstats_latency.loop0.avgwait.graph_data_size normal
beatq.tech;nc-ph-2432.beatq.tech:diskstats_latency.loop0.avgwait.update_rate 300
beatq.tech;nc-ph-2432.beatq.tech:diskstats_latency.loop0.avgwait.min 0
beatq.tech;nc-ph-2432.beatq.tech:processes.graph_title Processes
beatq.tech;nc-ph-2432.beatq.tech:processes.graph_info This graph shows the number of processes
beatq.tech;nc-ph-2432.beatq.tech:processes.graph_category processes
beatq.tech;nc-ph-2432.beatq.tech:processes.graph_args --base 1000 -l 0
beatq.tech;nc-ph-2432.beatq.tech:processes.graph_vlabel Number of processes
beatq.tech;nc-ph-2432.beatq.tech:processes.graph_order sleeping idle stopped zombie dead paging uninterruptible runnable processes
beatq.tech;nc-ph-2432.beatq.tech:processes.uninterruptible.update_rate 300
beatq.tech;nc-ph-2432.beatq.tech:processes.uninterruptible.graph_data_size normal
beatq.tech;nc-ph-2432.beatq.tech:processes.uninterruptible.colour ffa500
beatq.tech;nc-ph-2432.beatq.tech:processes.uninterruptible.draw STACK
beatq.tech;nc-ph-2432.beatq.tech:processes.uninterruptible.info The number of uninterruptible processes (usually IO).
beatq.tech;nc-ph-2432.beatq.tech:processes.uninterruptible.label uninterruptible
beatq.tech;nc-ph-2432.beatq.tech:processes.stopped.update_rate 300
beatq.tech;nc-ph-2432.beatq.tech:processes.stopped.graph_data_size normal
beatq.tech;nc-ph-2432.beatq.tech:processes.stopped.colour cc0000
beatq.tech;nc-ph-2432.beatq.tech:processes.stopped.info The number of stopped or traced processes.
beatq.tech;nc-ph-2432.beatq.tech:processes.stopped.draw STACK
beatq.tech;nc-ph-2432.beatq.tech:processes.stopped.label stopped
beatq.tech;nc-ph-2432.beatq.tech:processes.zombie.label zombie
beatq.tech;nc-ph-2432.beatq.tech:processes.zombie.info The number of defunct ('zombie') processes (process terminated and parent not waiting).
beatq.tech;nc-ph-2432.beatq.tech:processes.zombie.draw STACK
beatq.tech;nc-ph-2432.beatq.tech:processes.zombie.colour 990000
beatq.tech;nc-ph-2432.beatq.tech:processes.zombie.graph_data_size normal
beatq.tech;nc-ph-2432.beatq.tech:processes.zombie.update_rate 300
beatq.tech;nc-ph-2432.beatq.tech:processes.idle.update_rate 300
beatq.tech;nc-ph-2432.beatq.tech:processes.idle.graph_data_size normal
beatq.tech;nc-ph-2432.beatq.tech:processes.idle.colour 4169e1
beatq.tech;nc-ph-2432.beatq.tech:processes.idle.draw STACK
beatq.tech;nc-ph-2432.beatq.tech:processes.idle.info The number of idle kernel threads (>= 4.2 kernels only).
beatq.tech;nc-ph-2432.beatq.tech:processes.idle.label idle
beatq.tech;nc-ph-2432.beatq.tech:processes.runnable.label runnable
beatq.tech;nc-ph-2432.beatq.tech:processes.runnable.draw STACK
beatq.tech;nc-ph-2432.beatq.tech:processes.runnable.info The number of runnable processes (on the run queue).
beatq.tech;nc-ph-2432.beatq.tech:processes.runnable.colour 22ff22
beatq.tech;nc-ph-2432.beatq.tech:processes.runnable.graph_data_size normal
beatq.tech;nc-ph-2432.beatq.tech:processes.runnable.update_rate 300
beatq.tech;nc-ph-2432.beatq.tech:processes.paging.label paging
beatq.tech;nc-ph-2432.beatq.tech:processes.paging.info The number of paging processes (<2.6 kernels only).
beatq.tech;nc-ph-2432.beatq.tech:processes.paging.draw STACK
beatq.tech;nc-ph-2432.beatq.tech:processes.paging.update_rate 300
beatq.tech;nc-ph-2432.beatq.tech:processes.paging.colour 00aaaa
beatq.tech;nc-ph-2432.beatq.tech:processes.paging.graph_data_size normal
beatq.tech;nc-ph-2432.beatq.tech:processes.dead.label dead
beatq.tech;nc-ph-2432.beatq.tech:processes.dead.info The number of dead processes.
beatq.tech;nc-ph-2432.beatq.tech:processes.dead.draw STACK
beatq.tech;nc-ph-2432.beatq.tech:processes.dead.update_rate 300
beatq.tech;nc-ph-2432.beatq.tech:processes.dead.colour ff0000
beatq.tech;nc-ph-2432.beatq.tech:processes.dead.graph_data_size normal
beatq.tech;nc-ph-2432.beatq.tech:processes.sleeping.info The number of sleeping processes.
beatq.tech;nc-ph-2432.beatq.tech:processes.sleeping.draw AREA
beatq.tech;nc-ph-2432.beatq.tech:processes.sleeping.label sleeping
beatq.tech;nc-ph-2432.beatq.tech:processes.sleeping.update_rate 300
beatq.tech;nc-ph-2432.beatq.tech:processes.sleeping.graph_data_size normal
beatq.tech;nc-ph-2432.beatq.tech:processes.sleeping.colour 0022ff
beatq.tech;nc-ph-2432.beatq.tech:processes.processes.label total
beatq.tech;nc-ph-2432.beatq.tech:processes.processes.info The total number of processes.
beatq.tech;nc-ph-2432.beatq.tech:processes.processes.draw LINE1
beatq.tech;nc-ph-2432.beatq.tech:processes.processes.colour c0c0c0
beatq.tech;nc-ph-2432.beatq.tech:processes.processes.graph_data_size normal
beatq.tech;nc-ph-2432.beatq.tech:processes.processes.update_rate 300
beatq.tech;nc-ph-2432.beatq.tech:diskstats_latency.graph_title Disk latency per device
beatq.tech;nc-ph-2432.beatq.tech:diskstats_latency.graph_args --base 1000
beatq.tech;nc-ph-2432.beatq.tech:diskstats_latency.graph_vlabel Average IO Wait (seconds)
beatq.tech;nc-ph-2432.beatq.tech:diskstats_latency.graph_category disk
beatq.tech;nc-ph-2432.beatq.tech:diskstats_latency.graph_width 400
beatq.tech;nc-ph-2432.beatq.tech:diskstats_latency.graph_order loop0_avgwait md0_avgwait md1_avgwait sda_avgwait sdb_avgwait
beatq.tech;nc-ph-2432.beatq.tech:diskstats_latency.sdb_avgwait.label sdb
beatq.tech;nc-ph-2432.beatq.tech:diskstats_latency.sdb_avgwait.draw LINE1
beatq.tech;nc-ph-2432.beatq.tech:diskstats_latency.sdb_avgwait.info Average wait time for an I/O request
beatq.tech;nc-ph-2432.beatq.tech:diskstats_latency.sdb_avgwait.type GAUGE
beatq.tech;nc-ph-2432.beatq.tech:diskstats_latency.sdb_avgwait.graph_data_size normal
beatq.tech;nc-ph-2432.beatq.tech:diskstats_latency.sdb_avgwait.min 0
beatq.tech;nc-ph-2432.beatq.tech:diskstats_latency.sdb_avgwait.update_rate 300
beatq.tech;nc-ph-2432.beatq.tech:diskstats_latency.sda_avgwait.info Average wait time for an I/O request
beatq.tech;nc-ph-2432.beatq.tech:diskstats_latency.sda_avgwait.draw LINE1
beatq.tech;nc-ph-2432.beatq.tech:diskstats_latency.sda_avgwait.label sda
beatq.tech;nc-ph-2432.beatq.tech:diskstats_latency.sda_avgwait.type GAUGE
beatq.tech;nc-ph-2432.beatq.tech:diskstats_latency.sda_avgwait.graph_data_size normal
beatq.tech;nc-ph-2432.beatq.tech:diskstats_latency.sda_avgwait.min 0
beatq.tech;nc-ph-2432.beatq.tech:diskstats_latency.sda_avgwait.update_rate 300
beatq.tech;nc-ph-2432.beatq.tech:diskstats_latency.md0_avgwait.info Average wait time for an I/O request
beatq.tech;nc-ph-2432.beatq.tech:diskstats_latency.md0_avgwait.draw LINE1
beatq.tech;nc-ph-2432.beatq.tech:diskstats_latency.md0_avgwait.label md0
beatq.tech;nc-ph-2432.beatq.tech:diskstats_latency.md0_avgwait.type GAUGE
beatq.tech;nc-ph-2432.beatq.tech:diskstats_latency.md0_avgwait.graph_data_size normal
beatq.tech;nc-ph-2432.beatq.tech:diskstats_latency.md0_avgwait.update_rate 300
beatq.tech;nc-ph-2432.beatq.tech:diskstats_latency.md0_avgwait.min 0
beatq.tech;nc-ph-2432.beatq.tech:diskstats_latency.loop0_avgwait.update_rate 300
beatq.tech;nc-ph-2432.beatq.tech:diskstats_latency.loop0_avgwait.min 0
beatq.tech;nc-ph-2432.beatq.tech:diskstats_latency.loop0_avgwait.graph_data_size normal
beatq.tech;nc-ph-2432.beatq.tech:diskstats_latency.loop0_avgwait.type GAUGE
beatq.tech;nc-ph-2432.beatq.tech:diskstats_latency.loop0_avgwait.label loop0
beatq.tech;nc-ph-2432.beatq.tech:diskstats_latency.loop0_avgwait.draw LINE1
beatq.tech;nc-ph-2432.beatq.tech:diskstats_latency.loop0_avgwait.info Average wait time for an I/O request
beatq.tech;nc-ph-2432.beatq.tech:diskstats_latency.md1_avgwait.update_rate 300
beatq.tech;nc-ph-2432.beatq.tech:diskstats_latency.md1_avgwait.min 0
beatq.tech;nc-ph-2432.beatq.tech:diskstats_latency.md1_avgwait.graph_data_size normal
beatq.tech;nc-ph-2432.beatq.tech:diskstats_latency.md1_avgwait.type GAUGE
beatq.tech;nc-ph-2432.beatq.tech:diskstats_latency.md1_avgwait.label md1
beatq.tech;nc-ph-2432.beatq.tech:diskstats_latency.md1_avgwait.info Average wait time for an I/O request
beatq.tech;nc-ph-2432.beatq.tech:diskstats_latency.md1_avgwait.draw LINE1
beatq.tech;nc-ph-2432.beatq.tech:diskstats_throughput.sdb.graph_title Disk throughput for /dev/sdb
beatq.tech;nc-ph-2432.beatq.tech:diskstats_throughput.sdb.graph_args --base 1024
beatq.tech;nc-ph-2432.beatq.tech:diskstats_throughput.sdb.graph_vlabel Pr ${graph_period} read (-) / write (+)
beatq.tech;nc-ph-2432.beatq.tech:diskstats_throughput.sdb.graph_category disk
beatq.tech;nc-ph-2432.beatq.tech:diskstats_throughput.sdb.graph_info This graph shows disk throughput in bytes per ${graph_period}. The graph base is 1024, so KB means Kibibytes and so on.
beatq.tech;nc-ph-2432.beatq.tech:diskstats_throughput.sdb.graph_order rdbytes wrbytes
beatq.tech;nc-ph-2432.beatq.tech:diskstats_throughput.sdb.rdbytes.min 0
beatq.tech;nc-ph-2432.beatq.tech:diskstats_throughput.sdb.rdbytes.update_rate 300
beatq.tech;nc-ph-2432.beatq.tech:diskstats_throughput.sdb.rdbytes.graph no
beatq.tech;nc-ph-2432.beatq.tech:diskstats_throughput.sdb.rdbytes.graph_data_size normal
beatq.tech;nc-ph-2432.beatq.tech:diskstats_throughput.sdb.rdbytes.type GAUGE
beatq.tech;nc-ph-2432.beatq.tech:diskstats_throughput.sdb.rdbytes.label invisible
beatq.tech;nc-ph-2432.beatq.tech:diskstats_throughput.sdb.rdbytes.draw LINE1
beatq.tech;nc-ph-2432.beatq.tech:diskstats_throughput.sdb.wrbytes.update_rate 300
beatq.tech;nc-ph-2432.beatq.tech:diskstats_throughput.sdb.wrbytes.negative rdbytes
beatq.tech;nc-ph-2432.beatq.tech:diskstats_throughput.sdb.wrbytes.min 0
beatq.tech;nc-ph-2432.beatq.tech:diskstats_throughput.sdb.wrbytes.graph_data_size normal
beatq.tech;nc-ph-2432.beatq.tech:diskstats_throughput.sdb.wrbytes.type GAUGE
beatq.tech;nc-ph-2432.beatq.tech:diskstats_throughput.sdb.wrbytes.label Bytes
beatq.tech;nc-ph-2432.beatq.tech:diskstats_throughput.sdb.wrbytes.draw LINE1
beatq.tech;nc-ph-2432.beatq.tech:apache_processes.graph_title Apache processes
beatq.tech;nc-ph-2432.beatq.tech:apache_processes.graph_args --base 1000 -l 0
beatq.tech;nc-ph-2432.beatq.tech:apache_processes.graph_category apache
beatq.tech;nc-ph-2432.beatq.tech:apache_processes.graph_order busy81 idle81 free81
beatq.tech;nc-ph-2432.beatq.tech:apache_processes.graph_vlabel processes
beatq.tech;nc-ph-2432.beatq.tech:apache_processes.graph_total total
beatq.tech;nc-ph-2432.beatq.tech:apache_processes.idle81.label idle servers 81
beatq.tech;nc-ph-2432.beatq.tech:apache_processes.idle81.draw STACK
beatq.tech;nc-ph-2432.beatq.tech:apache_processes.idle81.colour 0033ff
beatq.tech;nc-ph-2432.beatq.tech:apache_processes.idle81.graph_data_size normal
beatq.tech;nc-ph-2432.beatq.tech:apache_processes.idle81.update_rate 300
beatq.tech;nc-ph-2432.beatq.tech:apache_processes.free81.label free slots 81
beatq.tech;nc-ph-2432.beatq.tech:apache_processes.free81.draw STACK
beatq.tech;nc-ph-2432.beatq.tech:apache_processes.free81.colour ccff00
beatq.tech;nc-ph-2432.beatq.tech:apache_processes.free81.graph_data_size normal
beatq.tech;nc-ph-2432.beatq.tech:apache_processes.free81.update_rate 300
beatq.tech;nc-ph-2432.beatq.tech:apache_processes.busy81.update_rate 300
beatq.tech;nc-ph-2432.beatq.tech:apache_processes.busy81.graph_data_size normal
beatq.tech;nc-ph-2432.beatq.tech:apache_processes.busy81.colour 33cc00
beatq.tech;nc-ph-2432.beatq.tech:apache_processes.busy81.draw AREA
beatq.tech;nc-ph-2432.beatq.tech:apache_processes.busy81.label busy servers 81
beatq.tech;nc-ph-2432.beatq.tech:cpuspeed.graph_title CPU frequency scaling
beatq.tech;nc-ph-2432.beatq.tech:cpuspeed.graph_args --base 1000
beatq.tech;nc-ph-2432.beatq.tech:cpuspeed.graph_category system
beatq.tech;nc-ph-2432.beatq.tech:cpuspeed.graph_vlabel Hz
beatq.tech;nc-ph-2432.beatq.tech:cpuspeed.graph_info This graph shows the current speed of the CPU at the time of the data retrieval (not its average). This is a limitation of the 'intel_pstate' driver.
beatq.tech;nc-ph-2432.beatq.tech:cpuspeed.graph_order cpu0 cpu1 cpu2 cpu3 cpu4 cpu5 cpu6 cpu7 cpu8 cpu9 cpu10 cpu11
beatq.tech;nc-ph-2432.beatq.tech:cpuspeed.cpu8.min 800000
beatq.tech;nc-ph-2432.beatq.tech:cpuspeed.cpu8.max 5170000
beatq.tech;nc-ph-2432.beatq.tech:cpuspeed.cpu8.update_rate 300
beatq.tech;nc-ph-2432.beatq.tech:cpuspeed.cpu8.cdef cpu8,1000,*
beatq.tech;nc-ph-2432.beatq.tech:cpuspeed.cpu8.graph_data_size normal
beatq.tech;nc-ph-2432.beatq.tech:cpuspeed.cpu8.type GAUGE
beatq.tech;nc-ph-2432.beatq.tech:cpuspeed.cpu8.label CPU 8
beatq.tech;nc-ph-2432.beatq.tech:cpuspeed.cpu9.type GAUGE
beatq.tech;nc-ph-2432.beatq.tech:cpuspeed.cpu9.label CPU 9
beatq.tech;nc-ph-2432.beatq.tech:cpuspeed.cpu9.cdef cpu9,1000,*
beatq.tech;nc-ph-2432.beatq.tech:cpuspeed.cpu9.update_rate 300
beatq.tech;nc-ph-2432.beatq.tech:cpuspeed.cpu9.max 5170000
beatq.tech;nc-ph-2432.beatq.tech:cpuspeed.cpu9.min 800000
beatq.tech;nc-ph-2432.beatq.tech:cpuspeed.cpu9.graph_data_size normal
beatq.tech;nc-ph-2432.beatq.tech:cpuspeed.cpu6.graph_data_size normal
beatq.tech;nc-ph-2432.beatq.tech:cpuspeed.cpu6.cdef cpu6,1000,*
beatq.tech;nc-ph-2432.beatq.tech:cpuspeed.cpu6.update_rate 300
beatq.tech;nc-ph-2432.beatq.tech:cpuspeed.cpu6.max 5170000
beatq.tech;nc-ph-2432.beatq.tech:cpuspeed.cpu6.min 800000
beatq.tech;nc-ph-2432.beatq.tech:cpuspeed.cpu6.label CPU 6
beatq.tech;nc-ph-2432.beatq.tech:cpuspeed.cpu6.type GAUGE
beatq.tech;nc-ph-2432.beatq.tech:cpuspeed.cpu4.type GAUGE
beatq.tech;nc-ph-2432.beatq.tech:cpuspeed.cpu4.label CPU 4
beatq.tech;nc-ph-2432.beatq.tech:cpuspeed.cpu4.min 800000
beatq.tech;nc-ph-2432.beatq.tech:cpuspeed.cpu4.max 5170000
beatq.tech;nc-ph-2432.beatq.tech:cpuspeed.cpu4.update_rate 300
beatq.tech;nc-ph-2432.beatq.tech:cpuspeed.cpu4.cdef cpu4,1000,*
beatq.tech;nc-ph-2432.beatq.tech:cpuspeed.cpu4.graph_data_size normal
beatq.tech;nc-ph-2432.beatq.tech:cpuspeed.cpu2.graph_data_size normal
beatq.tech;nc-ph-2432.beatq.tech:cpuspeed.cpu2.cdef cpu2,1000,*
beatq.tech;nc-ph-2432.beatq.tech:cpuspeed.cpu2.update_rate 300
beatq.tech;nc-ph-2432.beatq.tech:cpuspeed.cpu2.min 800000
beatq.tech;nc-ph-2432.beatq.tech:cpuspeed.cpu2.max 5170000
beatq.tech;nc-ph-2432.beatq.tech:cpuspeed.cpu2.label CPU 2
beatq.tech;nc-ph-2432.beatq.tech:cpuspeed.cpu2.type GAUGE
beatq.tech;nc-ph-2432.beatq.tech:cpuspeed.cpu7.label CPU 7
beatq.tech;nc-ph-2432.beatq.tech:cpuspeed.cpu7.type GAUGE
beatq.tech;nc-ph-2432.beatq.tech:cpuspeed.cpu7.graph_data_size normal
beatq.tech;nc-ph-2432.beatq.tech:cpuspeed.cpu7.cdef cpu7,1000,*
beatq.tech;nc-ph-2432.beatq.tech:cpuspeed.cpu7.min 800000
beatq.tech;nc-ph-2432.beatq.tech:cpuspeed.cpu7.max 5170000
beatq.tech;nc-ph-2432.beatq.tech:cpuspeed.cpu7.update_rate 300
beatq.tech;nc-ph-2432.beatq.tech:cpuspeed.cpu3.label CPU 3
beatq.tech;nc-ph-2432.beatq.tech:cpuspeed.cpu3.type GAUGE
beatq.tech;nc-ph-2432.beatq.tech:cpuspeed.cpu3.graph_data_size normal
beatq.tech;nc-ph-2432.beatq.tech:cpuspeed.cpu3.cdef cpu3,1000,*
beatq.tech;nc-ph-2432.beatq.tech:cpuspeed.cpu3.update_rate 300
beatq.tech;nc-ph-2432.beatq.tech:cpuspeed.cpu3.max 5170000
beatq.tech;nc-ph-2432.beatq.tech:cpuspeed.cpu3.min 800000
beatq.tech;nc-ph-2432.beatq.tech:cpuspeed.cpu10.label CPU 10
beatq.tech;nc-ph-2432.beatq.tech:cpuspeed.cpu10.type GAUGE
beatq.tech;nc-ph-2432.beatq.tech:cpuspeed.cpu10.graph_data_size normal
beatq.tech;nc-ph-2432.beatq.tech:cpuspeed.cpu10.max 5170000
beatq.tech;nc-ph-2432.beatq.tech:cpuspeed.cpu10.min 800000
beatq.tech;nc-ph-2432.beatq.tech:cpuspeed.cpu10.update_rate 300
beatq.tech;nc-ph-2432.beatq.tech:cpuspeed.cpu10.cdef cpu10,1000,*
beatq.tech;nc-ph-2432.beatq.tech:cpuspeed.cpu5.type GAUGE
beatq.tech;nc-ph-2432.beatq.tech:cpuspeed.cpu5.label CPU 5
beatq.tech;nc-ph-2432.beatq.tech:cpuspeed.cpu5.cdef cpu5,1000,*
beatq.tech;nc-ph-2432.beatq.tech:cpuspeed.cpu5.update_rate 300
beatq.tech;nc-ph-2432.beatq.tech:cpuspeed.cpu5.max 5170000
beatq.tech;nc-ph-2432.beatq.tech:cpuspeed.cpu5.min 800000
beatq.tech;nc-ph-2432.beatq.tech:cpuspeed.cpu5.graph_data_size normal
beatq.tech;nc-ph-2432.beatq.tech:cpuspeed.cpu11.update_rate 300
beatq.tech;nc-ph-2432.beatq.tech:cpuspeed.cpu11.max 5170000
beatq.tech;nc-ph-2432.beatq.tech:cpuspeed.cpu11.min 800000
beatq.tech;nc-ph-2432.beatq.tech:cpuspeed.cpu11.cdef cpu11,1000,*
beatq.tech;nc-ph-2432.beatq.tech:cpuspeed.cpu11.graph_data_size normal
beatq.tech;nc-ph-2432.beatq.tech:cpuspeed.cpu11.type GAUGE
beatq.tech;nc-ph-2432.beatq.tech:cpuspeed.cpu11.label CPU 11
beatq.tech;nc-ph-2432.beatq.tech:cpuspeed.cpu1.min 800000
beatq.tech;nc-ph-2432.beatq.tech:cpuspeed.cpu1.max 5170000
beatq.tech;nc-ph-2432.beatq.tech:cpuspeed.cpu1.update_rate 300
beatq.tech;nc-ph-2432.beatq.tech:cpuspeed.cpu1.cdef cpu1,1000,*
beatq.tech;nc-ph-2432.beatq.tech:cpuspeed.cpu1.graph_data_size normal
beatq.tech;nc-ph-2432.beatq.tech:cpuspeed.cpu1.type GAUGE
beatq.tech;nc-ph-2432.beatq.tech:cpuspeed.cpu1.label CPU 1
beatq.tech;nc-ph-2432.beatq.tech:cpuspeed.cpu0.type GAUGE
beatq.tech;nc-ph-2432.beatq.tech:cpuspeed.cpu0.label CPU 0
beatq.tech;nc-ph-2432.beatq.tech:cpuspeed.cpu0.max 5170000
beatq.tech;nc-ph-2432.beatq.tech:cpuspeed.cpu0.min 800000
beatq.tech;nc-ph-2432.beatq.tech:cpuspeed.cpu0.update_rate 300
beatq.tech;nc-ph-2432.beatq.tech:cpuspeed.cpu0.cdef cpu0,1000,*
beatq.tech;nc-ph-2432.beatq.tech:cpuspeed.cpu0.graph_data_size normal
beatq.tech;nc-ph-2432.beatq.tech:mysql_bytes.graph_title MySQL throughput
beatq.tech;nc-ph-2432.beatq.tech:mysql_bytes.graph_args --base 1024
beatq.tech;nc-ph-2432.beatq.tech:mysql_bytes.graph_vlabel bytes received (-) / sent (+) per ${graph_period}
beatq.tech;nc-ph-2432.beatq.tech:mysql_bytes.graph_info Note that this is an old plugin which is no longer installed by default. It is retained for compatibility with old installations.
beatq.tech;nc-ph-2432.beatq.tech:mysql_bytes.graph_category mysql
beatq.tech;nc-ph-2432.beatq.tech:mysql_bytes.graph_order recv sent
beatq.tech;nc-ph-2432.beatq.tech:mysql_bytes.sent.type DERIVE
beatq.tech;nc-ph-2432.beatq.tech:mysql_bytes.sent.label transfer rate
beatq.tech;nc-ph-2432.beatq.tech:mysql_bytes.sent.draw LINE2
beatq.tech;nc-ph-2432.beatq.tech:mysql_bytes.sent.max 80000000
beatq.tech;nc-ph-2432.beatq.tech:mysql_bytes.sent.min 0
beatq.tech;nc-ph-2432.beatq.tech:mysql_bytes.sent.negative recv
beatq.tech;nc-ph-2432.beatq.tech:mysql_bytes.sent.update_rate 300
beatq.tech;nc-ph-2432.beatq.tech:mysql_bytes.sent.graph_data_size normal
beatq.tech;nc-ph-2432.beatq.tech:mysql_bytes.recv.type DERIVE
beatq.tech;nc-ph-2432.beatq.tech:mysql_bytes.recv.draw LINE2
beatq.tech;nc-ph-2432.beatq.tech:mysql_bytes.recv.label transfer rate
beatq.tech;nc-ph-2432.beatq.tech:mysql_bytes.recv.max 80000000
beatq.tech;nc-ph-2432.beatq.tech:mysql_bytes.recv.min 0
beatq.tech;nc-ph-2432.beatq.tech:mysql_bytes.recv.update_rate 300
beatq.tech;nc-ph-2432.beatq.tech:mysql_bytes.recv.graph_data_size normal
beatq.tech;nc-ph-2432.beatq.tech:mysql_bytes.recv.graph no
beatq.tech;nc-ph-2432.beatq.tech:diskstats_iops.graph_title Disk IOs per device
beatq.tech;nc-ph-2432.beatq.tech:diskstats_iops.graph_args --base 1000
beatq.tech;nc-ph-2432.beatq.tech:diskstats_iops.graph_vlabel IOs/${graph_period} read (-) / write (+)
beatq.tech;nc-ph-2432.beatq.tech:diskstats_iops.graph_category disk
beatq.tech;nc-ph-2432.beatq.tech:diskstats_iops.graph_width 400
beatq.tech;nc-ph-2432.beatq.tech:diskstats_iops.graph_order loop0_rdio loop0_wrio md0_rdio md0_wrio md1_rdio md1_wrio sda_rdio sda_wrio sdb_rdio sdb_wrio
beatq.tech;nc-ph-2432.beatq.tech:diskstats_iops.loop0_rdio.label loop0
beatq.tech;nc-ph-2432.beatq.tech:diskstats_iops.loop0_rdio.draw LINE1
beatq.tech;nc-ph-2432.beatq.tech:diskstats_iops.loop0_rdio.type GAUGE
beatq.tech;nc-ph-2432.beatq.tech:diskstats_iops.loop0_rdio.graph no
beatq.tech;nc-ph-2432.beatq.tech:diskstats_iops.loop0_rdio.graph_data_size normal
beatq.tech;nc-ph-2432.beatq.tech:diskstats_iops.loop0_rdio.update_rate 300
beatq.tech;nc-ph-2432.beatq.tech:diskstats_iops.loop0_rdio.min 0
beatq.tech;nc-ph-2432.beatq.tech:diskstats_iops.md1_wrio.graph_data_size normal
beatq.tech;nc-ph-2432.beatq.tech:diskstats_iops.md1_wrio.update_rate 300
beatq.tech;nc-ph-2432.beatq.tech:diskstats_iops.md1_wrio.min 0
beatq.tech;nc-ph-2432.beatq.tech:diskstats_iops.md1_wrio.negative md1_rdio
beatq.tech;nc-ph-2432.beatq.tech:diskstats_iops.md1_wrio.label md1
beatq.tech;nc-ph-2432.beatq.tech:diskstats_iops.md1_wrio.draw LINE1
beatq.tech;nc-ph-2432.beatq.tech:diskstats_iops.md1_wrio.type GAUGE
beatq.tech;nc-ph-2432.beatq.tech:diskstats_iops.md1_rdio.type GAUGE
beatq.tech;nc-ph-2432.beatq.tech:diskstats_iops.md1_rdio.label md1
beatq.tech;nc-ph-2432.beatq.tech:diskstats_iops.md1_rdio.draw LINE1
beatq.tech;nc-ph-2432.beatq.tech:diskstats_iops.md1_rdio.update_rate 300
beatq.tech;nc-ph-2432.beatq.tech:diskstats_iops.md1_rdio.min 0
beatq.tech;nc-ph-2432.beatq.tech:diskstats_iops.md1_rdio.graph no
beatq.tech;nc-ph-2432.beatq.tech:diskstats_iops.md1_rdio.graph_data_size normal
beatq.tech;nc-ph-2432.beatq.tech:diskstats_iops.md0_rdio.min 0
beatq.tech;nc-ph-2432.beatq.tech:diskstats_iops.md0_rdio.update_rate 300
beatq.tech;nc-ph-2432.beatq.tech:diskstats_iops.md0_rdio.graph_data_size normal
beatq.tech;nc-ph-2432.beatq.tech:diskstats_iops.md0_rdio.graph no
beatq.tech;nc-ph-2432.beatq.tech:diskstats_iops.md0_rdio.type GAUGE
beatq.tech;nc-ph-2432.beatq.tech:diskstats_iops.md0_rdio.draw LINE1
beatq.tech;nc-ph-2432.beatq.tech:diskstats_iops.md0_rdio.label md0
beatq.tech;nc-ph-2432.beatq.tech:diskstats_iops.md0_wrio.negative md0_rdio
beatq.tech;nc-ph-2432.beatq.tech:diskstats_iops.md0_wrio.min 0
beatq.tech;nc-ph-2432.beatq.tech:diskstats_iops.md0_wrio.update_rate 300
beatq.tech;nc-ph-2432.beatq.tech:diskstats_iops.md0_wrio.graph_data_size normal
beatq.tech;nc-ph-2432.beatq.tech:diskstats_iops.md0_wrio.type GAUGE
beatq.tech;nc-ph-2432.beatq.tech:diskstats_iops.md0_wrio.draw LINE1
beatq.tech;nc-ph-2432.beatq.tech:diskstats_iops.md0_wrio.label md0
beatq.tech;nc-ph-2432.beatq.tech:diskstats_iops.sda_wrio.draw LINE1
beatq.tech;nc-ph-2432.beatq.tech:diskstats_iops.sda_wrio.label sda
beatq.tech;nc-ph-2432.beatq.tech:diskstats_iops.sda_wrio.type GAUGE
beatq.tech;nc-ph-2432.beatq.tech:diskstats_iops.sda_wrio.graph_data_size normal
beatq.tech;nc-ph-2432.beatq.tech:diskstats_iops.sda_wrio.min 0
beatq.tech;nc-ph-2432.beatq.tech:diskstats_iops.sda_wrio.negative sda_rdio
beatq.tech;nc-ph-2432.beatq.tech:diskstats_iops.sda_wrio.update_rate 300
beatq.tech;nc-ph-2432.beatq.tech:diskstats_iops.sda_rdio.graph no
beatq.tech;nc-ph-2432.beatq.tech:diskstats_iops.sda_rdio.graph_data_size normal
beatq.tech;nc-ph-2432.beatq.tech:diskstats_iops.sda_rdio.update_rate 300
beatq.tech;nc-ph-2432.beatq.tech:diskstats_iops.sda_rdio.min 0
beatq.tech;nc-ph-2432.beatq.tech:diskstats_iops.sda_rdio.label sda
beatq.tech;nc-ph-2432.beatq.tech:diskstats_iops.sda_rdio.draw LINE1
beatq.tech;nc-ph-2432.beatq.tech:diskstats_iops.sda_rdio.type GAUGE
beatq.tech;nc-ph-2432.beatq.tech:diskstats_iops.loop0_wrio.update_rate 300
beatq.tech;nc-ph-2432.beatq.tech:diskstats_iops.loop0_wrio.min 0
beatq.tech;nc-ph-2432.beatq.tech:diskstats_iops.loop0_wrio.negative loop0_rdio
beatq.tech;nc-ph-2432.beatq.tech:diskstats_iops.loop0_wrio.graph_data_size normal
beatq.tech;nc-ph-2432.beatq.tech:diskstats_iops.loop0_wrio.type GAUGE
beatq.tech;nc-ph-2432.beatq.tech:diskstats_iops.loop0_wrio.label loop0
beatq.tech;nc-ph-2432.beatq.tech:diskstats_iops.loop0_wrio.draw LINE1
beatq.tech;nc-ph-2432.beatq.tech:diskstats_iops.sdb_rdio.graph_data_size normal
beatq.tech;nc-ph-2432.beatq.tech:diskstats_iops.sdb_rdio.graph no
beatq.tech;nc-ph-2432.beatq.tech:diskstats_iops.sdb_rdio.update_rate 300
beatq.tech;nc-ph-2432.beatq.tech:diskstats_iops.sdb_rdio.min 0
beatq.tech;nc-ph-2432.beatq.tech:diskstats_iops.sdb_rdio.draw LINE1
beatq.tech;nc-ph-2432.beatq.tech:diskstats_iops.sdb_rdio.label sdb
beatq.tech;nc-ph-2432.beatq.tech:diskstats_iops.sdb_rdio.type GAUGE
beatq.tech;nc-ph-2432.beatq.tech:diskstats_iops.sdb_wrio.min 0
beatq.tech;nc-ph-2432.beatq.tech:diskstats_iops.sdb_wrio.negative sdb_rdio
beatq.tech;nc-ph-2432.beatq.tech:diskstats_iops.sdb_wrio.update_rate 300
beatq.tech;nc-ph-2432.beatq.tech:diskstats_iops.sdb_wrio.graph_data_size normal
beatq.tech;nc-ph-2432.beatq.tech:diskstats_iops.sdb_wrio.type GAUGE
beatq.tech;nc-ph-2432.beatq.tech:diskstats_iops.sdb_wrio.draw LINE1
beatq.tech;nc-ph-2432.beatq.tech:diskstats_iops.sdb_wrio.label sdb
beatq.tech;nc-ph-2432.beatq.tech:diskstats_utilization.sdb.graph_title Disk utilization for /dev/sdb
beatq.tech;nc-ph-2432.beatq.tech:diskstats_utilization.sdb.graph_args --base 1000 --lower-limit 0 --upper-limit 100 --rigid
beatq.tech;nc-ph-2432.beatq.tech:diskstats_utilization.sdb.graph_vlabel % busy
beatq.tech;nc-ph-2432.beatq.tech:diskstats_utilization.sdb.graph_category disk
beatq.tech;nc-ph-2432.beatq.tech:diskstats_utilization.sdb.graph_scale no
beatq.tech;nc-ph-2432.beatq.tech:diskstats_utilization.sdb.graph_order util
beatq.tech;nc-ph-2432.beatq.tech:diskstats_utilization.sdb.util.min 0
beatq.tech;nc-ph-2432.beatq.tech:diskstats_utilization.sdb.util.update_rate 300
beatq.tech;nc-ph-2432.beatq.tech:diskstats_utilization.sdb.util.graph_data_size normal
beatq.tech;nc-ph-2432.beatq.tech:diskstats_utilization.sdb.util.type GAUGE
beatq.tech;nc-ph-2432.beatq.tech:diskstats_utilization.sdb.util.label Utilization
beatq.tech;nc-ph-2432.beatq.tech:diskstats_utilization.sdb.util.info Utilization of the device in percent. If the time spent for I/O is close to 1000msec for a given second, the device is nearly 100% saturated.
beatq.tech;nc-ph-2432.beatq.tech:diskstats_utilization.sdb.util.draw LINE1
beatq.tech;nc-ph-2432.beatq.tech:forks.graph_title Fork rate
beatq.tech;nc-ph-2432.beatq.tech:forks.graph_args --base 1000 -l 0
beatq.tech;nc-ph-2432.beatq.tech:forks.graph_vlabel forks / ${graph_period}
beatq.tech;nc-ph-2432.beatq.tech:forks.graph_category processes
beatq.tech;nc-ph-2432.beatq.tech:forks.graph_info This graph shows the number of forks (new processes started) per second.
beatq.tech;nc-ph-2432.beatq.tech:forks.graph_order forks
beatq.tech;nc-ph-2432.beatq.tech:forks.forks.max 100000
beatq.tech;nc-ph-2432.beatq.tech:forks.forks.min 0
beatq.tech;nc-ph-2432.beatq.tech:forks.forks.update_rate 300
beatq.tech;nc-ph-2432.beatq.tech:forks.forks.graph_data_size normal
beatq.tech;nc-ph-2432.beatq.tech:forks.forks.type DERIVE
beatq.tech;nc-ph-2432.beatq.tech:forks.forks.info The number of forks per second.
beatq.tech;nc-ph-2432.beatq.tech:forks.forks.label forks
beatq.tech;nc-ph-2432.beatq.tech:uptime.graph_title Uptime
beatq.tech;nc-ph-2432.beatq.tech:uptime.graph_args --base 1000 -l 0
beatq.tech;nc-ph-2432.beatq.tech:uptime.graph_scale no
beatq.tech;nc-ph-2432.beatq.tech:uptime.graph_vlabel uptime in days
beatq.tech;nc-ph-2432.beatq.tech:uptime.graph_category system
beatq.tech;nc-ph-2432.beatq.tech:uptime.graph_order uptime
beatq.tech;nc-ph-2432.beatq.tech:uptime.uptime.graph_data_size normal
beatq.tech;nc-ph-2432.beatq.tech:uptime.uptime.draw AREA
beatq.tech;nc-ph-2432.beatq.tech:uptime.uptime.label uptime
beatq.tech;nc-ph-2432.beatq.tech:uptime.uptime.update_rate 300
beatq.tech;nc-ph-2432.beatq.tech:diskstats_throughput.md1.graph_title Disk throughput for /dev/md1
beatq.tech;nc-ph-2432.beatq.tech:diskstats_throughput.md1.graph_args --base 1024
beatq.tech;nc-ph-2432.beatq.tech:diskstats_throughput.md1.graph_vlabel Pr ${graph_period} read (-) / write (+)
beatq.tech;nc-ph-2432.beatq.tech:diskstats_throughput.md1.graph_category disk
beatq.tech;nc-ph-2432.beatq.tech:diskstats_throughput.md1.graph_info This graph shows disk throughput in bytes per ${graph_period}. The graph base is 1024, so KB means Kibibytes and so on.
beatq.tech;nc-ph-2432.beatq.tech:diskstats_throughput.md1.graph_order rdbytes wrbytes
beatq.tech;nc-ph-2432.beatq.tech:diskstats_throughput.md1.wrbytes.type GAUGE
beatq.tech;nc-ph-2432.beatq.tech:diskstats_throughput.md1.wrbytes.label Bytes
beatq.tech;nc-ph-2432.beatq.tech:diskstats_throughput.md1.wrbytes.draw LINE1
beatq.tech;nc-ph-2432.beatq.tech:diskstats_throughput.md1.wrbytes.min 0
beatq.tech;nc-ph-2432.beatq.tech:diskstats_throughput.md1.wrbytes.negative rdbytes
beatq.tech;nc-ph-2432.beatq.tech:diskstats_throughput.md1.wrbytes.update_rate 300
beatq.tech;nc-ph-2432.beatq.tech:diskstats_throughput.md1.wrbytes.graph_data_size normal
beatq.tech;nc-ph-2432.beatq.tech:diskstats_throughput.md1.rdbytes.graph no
beatq.tech;nc-ph-2432.beatq.tech:diskstats_throughput.md1.rdbytes.graph_data_size normal
beatq.tech;nc-ph-2432.beatq.tech:diskstats_throughput.md1.rdbytes.min 0
beatq.tech;nc-ph-2432.beatq.tech:diskstats_throughput.md1.rdbytes.update_rate 300
beatq.tech;nc-ph-2432.beatq.tech:diskstats_throughput.md1.rdbytes.draw LINE1
beatq.tech;nc-ph-2432.beatq.tech:diskstats_throughput.md1.rdbytes.label invisible
beatq.tech;nc-ph-2432.beatq.tech:diskstats_throughput.md1.rdbytes.type GAUGE