What it is

node-exporter is the official Prometheus exporter for Linux/BSD host metrics.

It reads from /proc, /sys, and other standard kernel interfaces, and exposes ~500 metrics about the host on :9100/metrics: CPU, memory, disk I/O, filesystem space, network traffic, load average, file descriptors, clock sync status, hwmon sensors, and so on.

It’s the default first exporter anyone installs on a server.

It's lightweight (~10 MB of RAM, near-zero CPU at idle), has no runtime dependencies (it's a single static Go binary), and ships in every major distro's repos.

1. Installation (Ubuntu/Debian)

apt update
apt install prometheus-node-exporter

The package:

  • Installs the binary /usr/bin/prometheus-node-exporter.
  • Creates a systemd unit prometheus-node-exporter.service, enabled and started by default.
  • Listens on 0.0.0.0:9100 by default (that’s a problem, but we’ll fix it in the hardening section).

Verify it’s running:

systemctl status prometheus-node-exporter

2. Validation

curl -s http://localhost:9100/metrics | head -30

Congratulations: your server is now exposing metrics.

Nothing is scraping them yet (that's the next step), but the data is available.
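A quick way to eyeball real data is to filter for a well-known metric: node_cpu_seconds_total is always present when the default cpu collector runs. The output values below are illustrative; yours will differ:

```shell
# One counter per CPU core and mode, in seconds since boot.
curl -s http://localhost:9100/metrics | grep '^node_cpu_seconds_total' | head -4
# node_cpu_seconds_total{cpu="0",mode="idle"} 81432.19
# node_cpu_seconds_total{cpu="0",mode="iowait"} 104.67
```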

What’s inside /metrics

node-exporter is organised into collectors, each responsible for a category of metrics.

By default ~30 are enabled (the “out of the box” set), others are opt-in.
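You can check which collectors actually ran on your host: node-exporter reports one node_scrape_collector_success series per collector.

```shell
# One line per collector; value 1 = collected successfully, 0 = failed.
curl -s http://localhost:9100/metrics | grep '^node_scrape_collector_success'
```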

A few useful ones to know:

| Collector | Enabled by default? | What it exposes |
|---|---|---|
| cpu | yes | CPU time per mode (user/system/idle/iowait/…) |
| meminfo | yes | RAM usage, swap, buffers, cached |
| diskstats | yes | Per-device I/O (reads, writes, IOPS, latency) |
| filesystem | yes | Per-mount free space, inodes |
| netdev | yes | Per-interface RX/TX bytes & packets |
| loadavg | yes | 1/5/15-minute load |
| uname | yes | Kernel version, hostname |
| systemd | no | Status of every systemd unit (active/failed/…) |
| processes | no | Process counts by state |
| textfile | yes | Custom metrics from .prom files in a directory — your "escape hatch" for ad-hoc metrics |
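As an example of the textfile escape hatch, here is a minimal sketch. The directory is the Debian package default (check the --collector.textfile.directory flag in /etc/default/prometheus-node-exporter if yours differs), and the metric name is made up for illustration:

```shell
# Textfile collector directory; Debian package default shown,
# override DIR if your setup differs.
DIR="${DIR:-/var/lib/prometheus/node-exporter}"

# Hypothetical metric: timestamp of the last successful backup.
# Write to a temp file, then mv: the rename is atomic, so
# node-exporter never scrapes a half-written file.
echo "backup_last_success_timestamp_seconds $(date +%s)" > "$DIR/backup.prom.$$"
mv "$DIR/backup.prom.$$" "$DIR/backup.prom"
```

On the next scrape, the metric appears alongside the built-in node_* series.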

To enable systemd (very useful: node_systemd_units{state="failed"} gives you the number of failed units):

nano /etc/default/prometheus-node-exporter

Change ARGS to add --collector.systemd (the --web.listen-address flag shown here is explained in the hardening section below):

ARGS="--web.listen-address=<VPS_PRIVATE_IP>:9100 --collector.systemd"

Restart the service, re-run the curl check, and you'll now see node_systemd_unit_state{...} entries.
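Concretely (substitute the address node-exporter is bound to if you've already changed --web.listen-address):

```shell
systemctl restart prometheus-node-exporter
# Per-state unit counts; node_systemd_units{state="failed"} > 0 means trouble.
curl -s http://localhost:9100/metrics | grep '^node_systemd_units'
```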

3. Hardening

By default node-exporter listens on all interfaces (0.0.0.0:9100).

On a server with a public IP, that means anyone on the Internet can do curl http://<your-public-ip>:9100/metrics and read:

  • All your mount points and disk usage
  • All your network interfaces and IPs
  • Process counts by state (if the --collector.processes flag is enabled)
  • Last reboot time, uptime, hardware info, kernel version

That’s a leak.

Two complementary fixes: apply both for defense in depth.

1. Binding to LAN only

Edit the package defaults file:

nano /etc/default/prometheus-node-exporter

Change the ARGS line to bind to the VPS’s private LAN IP:

ARGS="--web.listen-address=<VPS_PRIVATE_IP>:9100"

Then restart:

systemctl restart prometheus-node-exporter

Verify the listen address has changed:

ss -tlnp | grep 9100

You should now see node-exporter bound only to the private IP (e.g. 10.0.0.5:9100), not 0.0.0.0:9100.

From the public interface, port 9100 is now invisible.
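You can confirm this from a machine outside the LAN (the placeholder matches the earlier curl example):

```shell
# Should now time out or be refused, since nothing
# listens on the public interface anymore.
curl --max-time 5 http://<your-public-ip>:9100/metrics
```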

2. Firewall rule

Even with the bind restricted, add an explicit firewall rule so that if the bind config ever drifts (you remove the flag, package update overwrites it…) the leak doesn’t reappear.

With iptables (typical Ubuntu server setup, persisted by netfilter-persistent):

# Allow scrape from the LAN (vmagent01) — adjust to your scraper's IP
iptables -A INPUT -p tcp --dport 9100 -s <VMAGENT01_PRIVATE_IP> -j ACCEPT
# Drop from anywhere else (your default INPUT policy should already be DROP,
# this is an explicit safety net)
iptables -A INPUT -p tcp --dport 9100 -j DROP
 
# Persist the rule across reboots
netfilter-persistent save
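To confirm the rules landed and are ordered correctly (the ACCEPT for your scraper must come before the DROP):

```shell
# -v shows packet counters, so you can also see which rules are being hit.
iptables -L INPUT -v -n --line-numbers | grep 9100
```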

After this, only your vmagent (via its LAN IP) can reach :9100.

The wider Internet sees nothing.

4. The next steps

Now that the VPS exposes metrics, we need:

  1. A place to store them: VictoriaMetrics
  2. Someone to come pick them up: VMAgent