To confirm that statistics are being collected on each host, navigate to ~http://host_url:9100~. A page titled Node Exporter should be displayed, containing a link to Metrics. Click the link to verify that metrics are being reported.
Note that each node_exporter host must allow inbound traffic on port 9100 through its firewall; Firewalld's ~internal~ zone can be used for this on each host.
Note: I also have to configure Firewalld's ~internal~ zone to allow traffic from my IP address for the HTTP, HTTPS, and SSH services and for port 1965, so that I can still reach, for example, the web services on the node_exporter host.
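For example, a minimal Firewalld sketch for the ~internal~ zone, assuming ~x.x.x.x~ is the Prometheus server's address (add further ports or sources, such as the ones mentioned above, as needed):
#+BEGIN_SRC shell
sudo firewall-cmd --permanent --zone=internal --add-source=x.x.x.x
sudo firewall-cmd --permanent --zone=internal --add-port=9100/tcp
sudo firewall-cmd --reload
#+END_SRC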
As of FreeBSD 14.1-RELEASE, the version of Node Exporter available, v1.6.1, is outdated. To install the latest version, ensure the ports tree is checked out before running the commands below.
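A rough sketch of those commands, assuming the port lives at ~sysutils/node_exporter~ and that you build as root (or via ~sudo~/~doas~):
#+BEGIN_SRC shell
# Check out the ports tree if it is not already present
git clone --depth 1 https://git.FreeBSD.org/ports.git /usr/ports
# Build and install the latest node_exporter from ports
cd /usr/ports/sysutils/node_exporter
make install clean
#+END_SRC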
The job ~remote_collector~ scrapes metrics from each of the hosts running the node_exporter. Ensure that port ~9100~ is open in the firewall, and if it is a public-facing node, ensure that port ~9100~ can only be accessed from my IP address.
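For reference, a sketch of what the ~remote_collector~ job might look like in ~/etc/prometheus/prometheus.yml~ on the monitor host; the target hostnames are placeholders:
#+BEGIN_SRC yaml
scrape_configs:
  - job_name: "remote_collector"
    scrape_interval: 15s
    static_configs:
      - targets: ["host1.example.com:9100", "host2.example.com:9100"]
#+END_SRC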
** Configure Prometheus to monitor qBittorrent client nodes
For each qBittorrent instance you want to monitor, set up a Docker or Podman container with [[https://github.com/caseyscarborough/qbittorrent-exporter]]. The containers run on the machine running Prometheus, so they are accessible at localhost. Let's say I have three qBittorrent instances I want to monitor.
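A sketch of what the containers might look like with Podman. The image name, environment variable names, and internal port ~17871~ are taken from the exporter's README and should be double-checked there; the qBittorrent URLs and credentials are placeholders:
#+BEGIN_SRC shell
# One exporter container per qBittorrent instance, each published on its own local port
podman run -d --name qbit-exporter-1 \
  -e QBITTORRENT_BASE_URL="http://qbit1.example.com:8080" \
  -e QBITTORRENT_USERNAME="admin" \
  -e QBITTORRENT_PASSWORD="changeme" \
  -p 9101:17871 \
  docker.io/caseyscarborough/qbittorrent-exporter:latest
# Repeat for the second and third instances, publishing ports 9102 and 9103
#+END_SRC
Each exporter is then added as a localhost target (e.g. ~localhost:9101~ through ~localhost:9103~) in the ~scrape_configs~ section of ~/etc/prometheus/prometheus.yml~.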
** Monitor Caddy
*** Caddy: metrics activation
Add the ~metrics~ global option to the Caddyfile and ensure the admin endpoint is enabled.
#+BEGIN_SRC caddyfile
{
    admin 0.0.0.0:2019
    servers {
        metrics
    }
}
#+END_SRC
Restart Caddy:
#+BEGIN_SRC shell
sudo systemctl restart caddy
sudo systemctl status caddy
#+END_SRC
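With the admin endpoint listening on ~0.0.0.0:2019~ and the ~metrics~ option enabled, the metrics should now be visible locally:
#+BEGIN_SRC shell
curl -s http://localhost:2019/metrics | head
#+END_SRC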
*** Caddy: logs activation
I have my Caddy configuration modularized with ~/etc/caddy/Caddyfile~ being the central file. It looks something like this:
#+BEGIN_SRC caddyfile
{
    admin 0.0.0.0:2019
    servers {
        metrics
    }
}
## hyperreal.coffee
import /etc/caddy/anonoverflow.caddy
import /etc/caddy/breezewiki.caddy
import /etc/caddy/cdn.caddy
...
#+END_SRC
Each file that is imported is a virtual host that has its own separate configuration and corresponds to a subdomain of hyperreal.coffee. I have logging disabled on most of them except the ones for which troubleshooting with logs would be convenient, such as the one for my Mastodon instance. For ~/etc/caddy/fedi.caddy~, I've added these lines to enable logging:
#+BEGIN_SRC caddyfile
fedi.hyperreal.coffee {
    log {
        output file /var/log/caddy/fedi.log {
            roll_size 100MiB
            roll_keep 5
            roll_keep_for 100d
        }
        format json
        level INFO
    }
}
#+END_SRC
Restart Caddy:
#+BEGIN_SRC shell
sudo systemctl restart caddy
sudo systemctl status caddy
#+END_SRC
Ensure port ~2019~ can only be accessed by my IP address, using Firewalld's internal zone.
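A firewall-cmd sketch, assuming the internal zone already has my IP address as a source (as configured earlier):
#+BEGIN_SRC shell
sudo firewall-cmd --permanent --zone=internal --add-port=2019/tcp
sudo firewall-cmd --reload
#+END_SRC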
Add the Caddy configuration to the ~scrape_configs~ section of ~/etc/prometheus/prometheus.yml~:
#+BEGIN_SRC yaml
- job_name: "caddy"
static_configs:
- targets: ["hyperreal.coffee:2019"]
#+END_SRC
Restart Prometheus on the monitor host:
#+BEGIN_SRC shell
sudo systemctl restart prometheus.service
#+END_SRC
*** Loki and Promtail setup
On the node running Caddy, install the loki and promtail packages:
#+BEGIN_SRC shell
sudo apt install -y loki promtail
#+END_SRC
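If apt cannot find these packages, they are provided by Grafana's APT repository rather than the default Debian/Ubuntu repositories. A sketch of enabling it, following Grafana's documented key and repository locations, before re-running the install:
#+BEGIN_SRC shell
sudo mkdir -p /etc/apt/keyrings
wget -q -O - https://apt.grafana.com/gpg.key | gpg --dearmor | sudo tee /etc/apt/keyrings/grafana.gpg > /dev/null
echo "deb [signed-by=/etc/apt/keyrings/grafana.gpg] https://apt.grafana.com stable main" | sudo tee /etc/apt/sources.list.d/grafana.list
sudo apt update
#+END_SRC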
Edit the Promtail configuration file at ~/etc/promtail/config.yml~ to add a ~caddy~ job under ~scrape_configs~:
#+BEGIN_SRC yaml
  - job_name: caddy
    static_configs:
      - targets:
          - localhost
        labels:
          job: caddy
          __path__: /var/log/caddy/*.log
          agent: caddy-promtail
    pipeline_stages:
      - json:
          expressions:
            duration: duration
            status: status
      - labels:
          duration:
          status:
#+END_SRC
The entire Promtail configuration should look like this:
#+BEGIN_SRC yaml
# This minimal config scrapes only a single log file.
# It is primarily used in rpm/deb packaging, where the promtail service can be started during the system init process,
# and too much scraping during init can overload the whole system.
# https://github.com/grafana/loki/issues/11398

server:
  http_listen_port: 9080
  grpc_listen_port: 0

positions:
  filename: /tmp/positions.yaml

clients:
  - url: http://localhost:3100/loki/api/v1/push

scrape_configs:
  - job_name: system
    static_configs:
      - targets:
          - localhost
        labels:
          job: varlogs
          # NOTE: Needs to be modified to scrape any additional logs of the system.
          __path__: /var/log/messages
  - job_name: caddy
    static_configs:
      - targets:
          - localhost
        labels:
          job: caddy
          __path__: /var/log/caddy/*log
          agent: caddy-promtail
    pipeline_stages:
      - json:
          expressions:
            duration: duration
            status: status
      - labels:
          duration:
          status:
#+END_SRC
Restart Promtail and Loki services:
#+BEGIN_SRC shell
sudo systemctl restart promtail
sudo systemctl restart loki
#+END_SRC
Ensure that the promtail user has permission to read the Caddy logs:
#+BEGIN_SRC shell
sudo usermod -aG caddy promtail
sudo chmod g+r /var/log/caddy/*.log
#+END_SRC
The [[http://localhost:9090/targets][Prometheus dashboard]] should now show the Caddy target with a state of "UP".
** Monitor Tor node
Edit ~/etc/tor/torrc~ to add the metrics configuration. ~x.x.x.x~ is the IP address where Prometheus is running.
#+BEGIN_SRC shell
## Prometheus exporter
MetricsPort 0.0.0.0:9035 prometheus
MetricsPortPolicy accept x.x.x.x
#+END_SRC
Configure Firewalld to allow inbound traffic to port ~9035~ on the internal zone, and ensure the internal zone's source is the IP address of the server where Prometheus is running. Ensure port ~443~ is accessible from the Internet in Firewalld's public zone.
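A firewall-cmd sketch on the Tor host, assuming ~x.x.x.x~ is the Prometheus server's address:
#+BEGIN_SRC shell
sudo firewall-cmd --permanent --zone=internal --add-source=x.x.x.x
sudo firewall-cmd --permanent --zone=internal --add-port=9035/tcp
sudo firewall-cmd --permanent --zone=public --add-service=https
sudo firewall-cmd --reload
#+END_SRC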
Edit ~/etc/prometheus/prometheus.yml~ to add the Tor scrape job. ~y.y.y.y~ is the IP address where Tor is running.
#+BEGIN_SRC yaml
scrape_configs:
  - job_name: "tor-relay"
    static_configs:
      - targets: ["y.y.y.y:9035"]
#+END_SRC
Restart Prometheus.
#+BEGIN_SRC shell
sudo systemctl restart prometheus.service
#+END_SRC
Go to Grafana and import [[https://files.hyperreal.coffee/grafana/tor_stats.json][tor_stats.json]] as a new dashboard, using the Prometheus datasource.
** Monitor Synapse homeserver
On the server running Synapse, edit ~/etc/matrix-synapse/homeserver.yaml~ to enable metrics.
#+BEGIN_SRC yaml
enable_metrics: true
#+END_SRC
Add a new listener to ~/etc/matrix-synapse/homeserver.yaml~ for Prometheus metrics.
#+BEGIN_SRC yaml
listeners:
  - port: 9400
    type: metrics
    bind_addresses: ['0.0.0.0']
#+END_SRC
On the server running Prometheus, add a target for Synapse.
#+BEGIN_SRC yaml
- job_name: "synapse"
scrape_interval: 1m
metrics_path: "/_synapse/metrics"
static_configs:
- targets: ["hyperreal:9400"]
#+END_SRC
Also add the Synapse recording rules.
#+BEGIN_SRC yaml
rule_files:
  - /etc/prometheus/synapse-v2.rules
#+END_SRC
On the server running Prometheus, download the Synapse recording rules.
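The recording rules ship with the Synapse source under ~contrib/prometheus~; the exact URL below is an assumption and may need adjusting to the Synapse version in use:
#+BEGIN_SRC shell
sudo wget -O /etc/prometheus/synapse-v2.rules \
  https://raw.githubusercontent.com/element-hq/synapse/develop/contrib/prometheus/synapse-v2.rules
sudo systemctl restart prometheus.service
#+END_SRC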
** Monitor Elasticsearch
On the host running Elasticsearch, download the latest elasticsearch_exporter binary from the GitHub [[https://github.com/prometheus-community/elasticsearch_exporter/releases][releases]] page.
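For example, a sketch assuming a Linux amd64 host and version 1.7.0 (substitute the current release number):
#+BEGIN_SRC shell
VERSION=1.7.0
curl -LO "https://github.com/prometheus-community/elasticsearch_exporter/releases/download/v${VERSION}/elasticsearch_exporter-${VERSION}.linux-amd64.tar.gz"
tar xzf "elasticsearch_exporter-${VERSION}.linux-amd64.tar.gz"
sudo install "elasticsearch_exporter-${VERSION}.linux-amd64/elasticsearch_exporter" /usr/local/bin/
#+END_SRC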