#+TITLE: Techne: A practical knowledge base
This is a collection of notes on various tech and other things. Intended audience: mostly me.
#+title: Aide
#+setupfile: ../org-templates/page.org

** Configure AIDE

Edit ~/etc/aide/aide.conf~. Enable the following option:

#+BEGIN_SRC shell
report_summarize_changes=true
#+END_SRC

** Initialize the database

#+BEGIN_SRC shell
sudo aide --config /etc/aide/aide.conf --init
#+END_SRC

AIDE will indicate the location of the new database when it finishes:

#+BEGIN_SRC shell
New AIDE database written to /var/lib/aide/aide.db.new
#+END_SRC

Rename the file:

#+BEGIN_SRC shell
sudo mv /var/lib/aide/aide.db.new /var/lib/aide/aide.db
#+END_SRC

** Trigger a check

#+BEGIN_SRC shell
sudo aide --check --config /etc/aide/aide.conf
#+END_SRC

** Crontab

#+BEGIN_SRC shell
0 3 * * * aide --check --config /etc/aide/aide.conf
#+END_SRC
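One way to install the entry is to append it to root's crontab (a sketch; ~sudo crontab -e~ works just as well):

#+BEGIN_SRC shell
(sudo crontab -l 2>/dev/null; echo "0 3 * * * aide --check --config /etc/aide/aide.conf") | sudo crontab -
#+END_SRC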

** Update

Run AIDE after editing system files or installing system updates or new packages, so that it can refresh their checksums in its database. This helps prevent false positives.

#+BEGIN_SRC shell
sudo aide --update --config /etc/aide/aide.conf
#+END_SRC
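Like ~--init~, ~--update~ writes its result to a new database file rather than modifying the current one; assuming the default ~database_out~ location, move it into place afterwards:

#+BEGIN_SRC shell
sudo mv /var/lib/aide/aide.db.new /var/lib/aide/aide.db
#+END_SRC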
#+title: Atop
#+setupfile: ../org-templates/page.org

** Get lowest memfree for given analysis date

#+BEGIN_SRC bash
atopsar -r /var/log/atop/atop_20240703 -m -R 1 | awk 'NR<7{print $0;next}{print $0| "sort -k 3,4"}' | head -11
#+END_SRC

- ~atopsar~ : atop's system activity report.
- ~-r /var/log/atop/atop_20240703~ : Log file to use.
- ~-m~ : Memory and swap occupation.
- ~-R 1~ : Condense 1 sample into one report line. The log file contains 10-minute samples, so this reports every sample; ~-R 6~ would condense six samples into one line per 60 minutes.
- ~awk 'NR<7{print $0;next}{print $0| "sort -k 3,4"}'~ : While the input record number (~NR~) is less than ~7~, ~print~ the record (~$0~) unchanged and skip to the ~next~ one. Every record from the seventh onward is instead piped through the ~sort -k 3,4~ command, whose sorted output is printed once awk closes the pipe at end of input. This keeps the first six header lines of the atopsar output unsorted.
- ~head -11~ : Get the top 11 lines of output.

** Get top 3 memory processes for given analysis date

#+BEGIN_SRC bash
atopsar -G -r /var/log/atop/atop_20240710
#+END_SRC
#+title: Bash
#+setupfile: ../org-templates/page.org

** Split large text file into smaller files with equal number of lines

#+begin_src shell
split -l 60 bigfile.txt prefix-
#+end_src

** Loop through lines of file

#+begin_src shell
while IFS= read -r line; do
    echo "$line"
done </path/to/file.txt
#+end_src

** Use grep to find URLs from HTML file

#+begin_src shell
grep -Eo "(http|https)://[a-zA-Z0-9./?=_%:-]*" urls.html
#+end_src

- ~grep -E~: use extended regular expressions (egrep)
- ~grep -o~: print only the matching part of each line
- ~(http|https)~: either http OR https
- ~a-zA-Z0-9~: match all lowercase, uppercase, and digits
- ~.~: match period
- ~/~: match slash
- ~?~: match ?
- ~=~: match =
- ~_~: match underscore
- ~%~: match percent
- ~:~: match colon
- ~-~: match dash
- ~*~: repeat the [...] group any number of times
#+title: Btrbk
#+setupfile: ../org-templates/page.org

** On the host machine

#+begin_quote
Run these commands as root.
#+end_quote

Add a system user for btrbk:

#+begin_src shell
useradd -c "Btrbk user" -m -r -s /bin/bash -U btrbk
#+end_src

Set up sudo for btrbk:

#+begin_src shell
echo "btrbk ALL=NOPASSWD:/usr/sbin/btrfs,/usr/bin/readlink,/usr/bin/test" | tee -a /etc/sudoers.d/btrbk
#+end_src

Create a subvolume for each client:

#+begin_src shell
mount /dev/sda1 /mnt/storage
btrfs subvolume create /mnt/storage/client_hostname
#+end_src

** On each client machine

Create a dedicated SSH key:

#+begin_src shell
mkdir -p /etc/btrbk/ssh
ssh-keygen -t ed25519 -f /etc/btrbk/ssh/id_ed25519
#+end_src

Add each client's SSH public key to ~/home/btrbk/.ssh/authorized_keys~ on the NAS machine:

#+begin_src shell
ssh-copy-id -i /etc/btrbk/ssh/id_ed25519 btrbk@nas.local
#+end_src

Create ~/etc/btrbk/btrbk.conf~ on each client:

#+begin_src shell
transaction_log /var/log/btrbk.log
snapshot_preserve_min latest
target_preserve 24h 7d 1m 1y
target_preserve_min 7d
ssh_user btrbk
ssh_identity /etc/btrbk/ssh/id_ed25519
backend btrfs-progs-sudo
snapshot_dir /btrbk_snapshots
target ssh://nas.local/mnt/storage/<client hostname>
subvolume /
subvolume /home
snapshot_create ondemand
#+end_src

Create a directory to store btrbk snapshots on each client machine:

#+begin_src shell
mkdir /btrbk_snapshots
#+end_src

Create ~/etc/systemd/system/btrbk.service~:

#+begin_src systemd
[Unit]
Description=Daily btrbk backup

[Service]
Type=simple
ExecStart=/usr/bin/btrbk -q -c /etc/btrbk/btrbk.conf run
#+end_src

Create ~/etc/systemd/system/btrbk.timer~:

#+begin_src systemd
[Unit]
Description=Daily btrbk backup

[Timer]
OnCalendar=*-*-* 23:00:00
Persistent=true

[Install]
WantedBy=timers.target
#+end_src
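Reload systemd and enable the timer so the backup actually runs on schedule:

#+begin_src shell
sudo systemctl daemon-reload
sudo systemctl enable --now btrbk.timer
#+end_src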

Alternatively, create an executable (~chmod +x~) shell script to be placed under ~/etc/cron.daily~:

#+begin_src shell
#!/usr/bin/env bash

set -e

/usr/bin/btrbk -q -c /etc/btrbk/btrbk.conf run >/dev/null
#+end_src
#+title: Btrfs
#+setupfile: ../org-templates/page.org

** Setup encrypted external drive for backups

*** Prepare the external drive

#+begin_src shell
sudo cryptsetup --type luks2 -y -v luksFormat /dev/sda1
sudo cryptsetup -v luksOpen /dev/sda1 cryptbackup
sudo mkfs.btrfs /dev/mapper/cryptbackup
sudo mkdir /srv/backup
sudo mount -o noatime,compress=zstd:1 /dev/mapper/cryptbackup /srv/backup
sudo restorecon -Rv /srv/backup
#+end_src

*** Setup ~/etc/crypttab~

Append the UUID of the LUKS partition to ~/etc/crypttab~:

#+begin_src shell
sudo blkid -s UUID -o value /dev/sda1 | sudo tee -a /etc/crypttab
#+end_src

Then edit it into the following line in ~/etc/crypttab~:

#+begin_src shell
cryptbackup UUID=<UUID of /dev/sda1> none discard
#+end_src

*** Setup ~/etc/fstab~

Append the UUID of the mapped filesystem to ~/etc/fstab~:

#+begin_src shell
sudo blkid -s UUID -o value /dev/mapper/cryptbackup | sudo tee -a /etc/fstab
#+end_src

Then edit it into the following line in ~/etc/fstab~:

#+begin_src shell
UUID=<UUID of /dev/mapper/cryptbackup> /srv/backup btrfs compress=zstd:1,nofail 0 0
#+end_src

Reload the daemons:

#+begin_src shell
sudo systemctl daemon-reload
#+end_src

Mount the filesystems:

#+begin_src shell
sudo mount -av
#+end_src

*** btrfs-backup script

#+begin_src bash
#!/usr/bin/env bash

LOGFILE="/var/log/btrfs-backup.log"
SNAP_DATE=$(date '+%Y-%m-%d_%H%M%S')

# Check if the backup device is mounted
if ! grep "/srv/backup" /etc/mtab >/dev/null; then
    echo "[$(date '+%Y-%m-%d %H:%M:%S')] Backup device is not mounted." | tee -a "$LOGFILE"
    notify-send -i computer-fail "Backup device is not mounted"
    exit 1
fi

# Create a read-only snapshot of subvolume $1 under $1/.snapshots, named $2-$SNAP_DATE
create_snapshot() {
    if ! btrfs subvolume snapshot -r "$1" "${1}/.snapshots/$2-$SNAP_DATE" >/dev/null; then
        echo "[$(date '+%Y-%m-%d %H:%M:%S')] Error creating snapshot of $1" | tee -a "$LOGFILE"
        notify-send -i computer-fail "Error creating snapshot of $1"
        exit 1
    else
        echo "[$(date '+%Y-%m-%d %H:%M:%S')] Create snapshot of $1: OK" | tee -a "$LOGFILE"
    fi
}

# Send the snapshot of subvolume $1 to the backup filesystem under /srv/backup/$SNAP_DATE
send_snapshot() {
    mkdir -p "/srv/backup/$SNAP_DATE"
    if ! btrfs send -q "${1}/.snapshots/$2-$SNAP_DATE" | btrfs receive -q "/srv/backup/$SNAP_DATE"; then
        echo "[$(date '+%Y-%m-%d %H:%M:%S')] Error sending snapshot of $1 to /srv/backup" | tee -a "$LOGFILE"
        notify-send -i computer-fail "Error sending snapshot of $1 to /srv/backup"
        exit 1
    else
        echo "[$(date '+%Y-%m-%d %H:%M:%S')] Send snapshot of $1 to /srv/backup: OK" | tee -a "$LOGFILE"
    fi
}

# Create root and home snapshots
create_snapshot "/" "root"
create_snapshot "/home" "home"

# Send root and home snapshots
send_snapshot "/" "root"
send_snapshot "/home" "home"
#+end_src

Move/copy the script to ~/etc/cron.daily/btrfs-backup~.
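Scripts in ~/etc/cron.daily~ only run if they are executable:

#+begin_src shell
sudo chmod +x /etc/cron.daily/btrfs-backup
#+end_src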
#+title: Caddy
#+setupfile: ../org-templates/page.org

** IP whitelist

#+BEGIN_SRC caddyfile
irc.hyperreal.coffee {
    @me {
        client_ip 1.2.3.4
    }
    handle @me {
        reverse_proxy localhost:9000
    }
    respond "You are attempting to access protected resources!" 403
}
#+END_SRC
#+title: Cgit
#+setupfile: ../org-templates/page.org

** Install Cgit with Caddy

*** Dependencies

Install the [[https://github.com/caddyserver/xcaddy/releases][xcaddy]] package from its releases page.

Build Caddy with the [[https://github.com/aksdb/caddy-cgi][caddy-cgi]] module:

#+begin_src shell
xcaddy build --with github.com/aksdb/caddy-cgi/v2
#+end_src

Install the remaining dependencies:

#+begin_src shell
sudo apt install gitolite3 cgit python-is-python3 python3-pygments python3-markdown docutils-common groff
#+end_src

*** Configuration

Make a git user:

#+begin_src shell
sudo adduser --system --shell /bin/bash --group --disabled-password --home /home/git git
#+end_src

Configure gitolite for the git user in ~~/.gitolite.rc~:

#+begin_src shell
UMASK => 0027,
GIT_CONFIG_KEYS => 'gitweb.description gitweb.owner gitweb.homepage gitweb.category',
#+end_src

Add the caddy user to the git group:

#+begin_src shell
sudo usermod -aG git caddy
#+end_src

Configure cgit in ~/etc/cgitrc~:

#+begin_src rc
#
# cgit config
# see cgitrc(5) for details

css=/cgit/cgit.css
logo=/cgit/cgit.png
favicon=/cgit/favicon.ico

enable-index-links=1
enable-commit-graph=1
enable-log-filecount=1
enable-log-linecount=1
enable-git-config=1

branch-sort=age
repository-sort=name

clone-url=https://git.hyperreal.coffee/$CGIT_REPO_URL git://git.hyperreal.coffee/$CGIT_REPO_URL ssh://git@git.hyperreal.coffee:$CGIT_REPO_URL

root-title=hyperreal.coffee Git repositories
root-desc=Source code and configs for my projects

##
## List of common mimetypes
##
mimetype.gif=image/gif
mimetype.html=text/html
mimetype.jpg=image/jpeg
mimetype.jpeg=image/jpeg
mimetype.pdf=application/pdf
mimetype.png=image/png
mimetype.svg=image/svg+xml

# Enable syntax highlighting
source-filter=/usr/lib/cgit/filters/syntax-highlighting.py

# Format markdown, rst, manpages, text files, html files, and org files.
about-filter=/usr/lib/cgit/filters/about-formatting.sh

##
## Search for these files in the root of the default branch of repositories
## for coming up with the about page:
##
readme=:README.md
readme=:README.org

robots=noindex, nofollow

section=personal-config

repo.url=doom-emacs-config
repo.path=/home/git/repositories/doom-emacs-config.git
repo.desc=My Doom Emacs config
#+end_src

*** org-mode README

#+begin_quote
Note: I haven't gotten this to work yet. :-(
#+end_quote

#+begin_src shell
git clone https://github.com/amartos/cgit-org2html.git
cd cgit-org2html
sudo cp -v org2html /usr/lib/cgit/filters/html-converters/
sudo chmod +x /usr/lib/cgit/filters/html-converters/org2html
#+end_src

Download [[https://gist.github.com/amartos/fbfa82af4ff33823c90acbf23f7a3f0e][blob-formatting.sh]].

#+begin_src shell
sudo cp -v blob-formatting.sh /usr/lib/cgit/filters/
#+end_src

**** Catppuccin Mocha palette for org2html.css

#+begin_src shell
git clone https://github.com/amartos/cgit-org2html.git
cd cgit-org2html/css
#+end_src

Change the color variables to Catppuccin Mocha hex codes:

#+begin_src scss
$red: #f38ba8;
$green: #a6e3a1;
$orange: #fab387;
$gray: #585b70;
$yellow: #f9e2af;
$cyan: #89dceb;
$teal: #94e2d5;
$black: #11111b;
$white: #cdd6f4;
$cream: #f2cdcd;
#+end_src

Install sass:

#+begin_src shell
sudo apt install -y sass
#+end_src

Generate org2html.css from the scss files, and copy the result to the cgit css directory:

#+begin_src shell
sass org2html.scss:org2html.css
sudo cp -v org2html.css /usr/share/cgit/css/
#+end_src
#+title: Carpal tunnel syndrome accessibility
#+setupfile: ../org-templates/page.org

I'm just playing with some ideas here regarding a carpal tunnel syndrome-friendly way to do everyday computing.

Given the limits that nature places on the number of possible ways of manipulating machines, at the current time it seems voice dictation is the only feasible alternative to typing and pointing and clicking. Is it possible to do what I usually do at my computer using 100% voice dictation?

I wouldn't use it for gaming, of course, but for things like web browsing, coding, writing/typing, and system administration tasks. I would need software, preferably FOSS, that responds to voice commands.

** Web browsing

Voice commands for web browsing would have to include something like the following:

- "Scroll N pixels down the page"
- "Refresh the page"
- "Go to tab 6"
- "Download the file at link 8"
- "Go to www.duckduckgo.com"
- "Open up the Bitwarden menu"
- "Enter writing mode and compose a new Mastodon post"
- "Enter writing mode and compose a reply to Mastodon timeline item 23"
- "Play the video on Mastodon timeline item 28"
- "Go to bookmark 16"
- "Copy the URL to the system clipboard"

So there would have to be a way to enumerate web page and browser elements. This enumeration concept would also apply to many other apps.

** Coding and command line usage

Voice commands that are mapped to:

- shell commands and aliases
- code snippets
- "Create a Go function named helloWorld"
- "helloWorld takes a string parameter named foo"
- Okay, I've realized coding is probably not feasible using 100% voice dictation.
#+title: Debian
#+setupfile: ../org-templates/page.org

** Setup unattended-upgrades

Edit ~/etc/apt/apt.conf.d/50unattended-upgrades~. Uncomment the following lines:

#+BEGIN_SRC perl
Unattended-Upgrade::Origins-Pattern {
        // Codename based matching:
        // This will follow the migration of a release through different
        // archives (e.g. from testing to stable and later oldstable).
        // Software will be the latest available for the named release,
        // but the Debian release itself will not be automatically upgraded.
        "origin=Debian,codename=${distro_codename}-updates";
        "origin=Debian,codename=${distro_codename}-proposed-updates";
        "origin=Debian,codename=${distro_codename},label=Debian";
        "origin=Debian,codename=${distro_codename},label=Debian-Security";
        "origin=Debian,codename=${distro_codename}-security,label=Debian-Security";

        // Archive or Suite based matching:
        // Note that this will silently match a different release after
        // migration to the specified archive (e.g. testing becomes the
        // new stable).
        // "o=Debian,a=stable";
        // "o=Debian,a=stable-updates";
        // "o=Debian,a=proposed-updates";
        "o=Debian Backports,a=${distro_codename}-backports,l=Debian Backports";
};
#+END_SRC

Also uncomment this line and set it to "true":

#+BEGIN_SRC perl
Unattended-Upgrade::Remove-Unused-Dependencies "true";
#+END_SRC

Issue the command below to enable automatic updates:

#+BEGIN_SRC shell
sudo dpkg-reconfigure --priority=low unattended-upgrades
#+END_SRC

~/etc/apt/apt.conf.d/20auto-upgrades~ should contain the following:

#+BEGIN_SRC perl
APT::Periodic::Update-Package-Lists "1";
APT::Periodic::Unattended-Upgrade "1";
#+END_SRC

Enable the systemd service:

#+BEGIN_SRC shell
sudo systemctl enable --now unattended-upgrades.service
#+END_SRC
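To check the configuration without waiting for the timer, a dry run prints what would be upgraded:

#+BEGIN_SRC shell
sudo unattended-upgrade --dry-run --debug
#+END_SRC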
#+title: DietPi
#+setupfile: ../org-templates/page.org

** systemd-logind

Install ~libpam-systemd~:

#+begin_src shell
sudo apt install -y libpam-systemd
#+end_src

Unmask and enable systemd-logind:

#+begin_src shell
sudo systemctl unmask systemd-logind
sudo systemctl enable systemd-logind
sudo systemctl reboot
#+end_src
#+title: Fedora Atomic
#+setupfile: ../org-templates/page.org

** Access USB serial device in container

Create a udev rule on the host for all usb-serial devices. Set ~OWNER~ to your regular user (UID 1000; here, ~jas~):

#+begin_src shell
cat << EOF | sudo tee /etc/udev/rules.d/50-usb-serial.rules
SUBSYSTEM=="tty", SUBSYSTEMS=="usb-serial", OWNER="jas"
EOF
#+end_src

Reload udev:

#+begin_src shell
sudo udevadm control --reload-rules
sudo udevadm trigger
#+end_src

The serial device should now be owned by your user:

#+begin_src shell
ls -l /dev/ttyUSB0
crw-rw----. 1 jas dialout 188, 0 Mar 15 11:09 /dev/ttyUSB0
#+end_src

You can now run minicom inside the container:

#+begin_src shell
distrobox enter default
minicom -D /dev/ttyUSB0
#+end_src
#+title: Firewalld
#+setupfile: ../org-templates/page.org

** Allow connections only from certain IP addresses

Source: [[https://serverfault.com/a/798120][FirewallD: Allow connections only from certain IP addresses]]

- Do not use rich rules for this.
- A firewalld zone corresponds to a set of services that you want to allow, and the sources of the traffic to those services.
- Traffic sources can be designated in two ways: by interface, or by source IP address. Traffic that matches /any/ source passes this check.

Create a new zone for Kali Linux IP addresses:

#+begin_src shell
sudo firewall-cmd --permanent --new-zone=kali
sudo firewall-cmd --reload
#+end_src

Allow the desired services in the kali zone:

#+begin_src shell
sudo firewall-cmd --zone=kali --permanent --add-service=ssh
sudo firewall-cmd --zone=kali --permanent --add-service=rsyncd
sudo firewall-cmd --reload
#+end_src

Add the IP addresses allowed to reach the above services. Ensure there are no interfaces designated to this zone.

#+begin_src shell
sudo firewall-cmd --zone=kali --permanent --add-source=<IPv4 addr 1>
sudo firewall-cmd --zone=kali --permanent --add-source=<IPv6 addr>
sudo firewall-cmd --zone=kali --permanent --add-source=<IPv4 addr 2>
sudo firewall-cmd --zone=kali --permanent --add-source=<IPv4 addr 3>
sudo firewall-cmd --reload
#+end_src
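Confirm the zone's sources and services:

#+begin_src shell
sudo firewall-cmd --zone=kali --list-all
#+end_src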
#+title: GitLab
#+setupfile: ../org-templates/page.org

** Setup GitLab runner with Podman

1. Install [[https://docs.gitlab.com/16.9/runner/install/linux-manually.html][GitLab Runner]].

2. Create a new runner from the GitLab UI.

3. Use the authentication token from the GitLab UI to register a new runner on the machine hosting the runner. Select the Docker executor.
   #+begin_src shell
   sudo systemctl enable --now gitlab-runner.service
   sudo gitlab-runner register --url https://git.hyperreal.coffee --token <TOKEN>
   #+end_src

4. Add the following lines to ~/etc/gitlab-runner/config.toml~ for Podman (the Podman socket must be running; see the note after this list):
   #+begin_src toml
   [[runners]]
     environment = ["FF_NETWORK_PER_BUILD=1"]
     [runners.docker]
       host = "unix://run/podman/podman.sock"
       tls_verify = false
       image = "git.hyperreal.coffee:5050/fedora-atomic/containers/fedora:latest"
       privileged = true
       volumes = ["/build-repo", "/cache", "/source-repo"]
   #+end_src

5. Restart the gitlab-runner:
   #+begin_src shell
   sudo gitlab-runner restart
   #+end_src

We should now be ready to use the Podman runner.
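The runner talks to Podman over the socket referenced in ~config.toml~, so enable it on the host:

#+begin_src shell
sudo systemctl enable --now podman.socket
#+end_src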
#+title: Grafana
#+setupfile: ../org-templates/page.org

** Install and deploy the Grafana server

On Fedora/RHEL systems:

#+BEGIN_SRC shell
sudo dnf install -y grafana grafana-selinux
#+END_SRC

On Debian systems:

#+BEGIN_SRC shell
sudo apt-get install -y apt-transport-https software-properties-common
sudo wget -q -O /usr/share/keyrings/grafana.key https://apt.grafana.com/gpg.key
echo "deb [signed-by=/usr/share/keyrings/grafana.key] https://apt.grafana.com stable main" | sudo tee -a /etc/apt/sources.list.d/grafana.list
sudo apt update
sudo apt install -y grafana
#+END_SRC

Reload the systemctl daemon, then start and enable ~grafana-server.service~:

#+BEGIN_SRC shell
sudo systemctl daemon-reload
sudo systemctl enable --now grafana-server.service
sudo systemctl status grafana-server.service
#+END_SRC

** Configure Grafana SELinux policy

#+BEGIN_QUOTE
This is not necessary on AlmaLinux 9, Rocky Linux 9, or RHEL 9.
#+END_QUOTE

For some reason the grafana-selinux package does not provide what Grafana needs to cooperate with SELinux. It is therefore necessary to use the third-party repository at [[https://github.com/georou/grafana-selinux]] to compile and install a proper SELinux policy module for Grafana.

#+BEGIN_SRC shell
# Clone the repo
git clone https://github.com/georou/grafana-selinux.git
cd grafana-selinux

# Copy the relevant .if interface file to /usr/share/selinux/devel/include to expose it when building and for future modules.
# May need to use the full path for grafana.if if this isn't working.
install -Dp -m 0664 -o root -g root grafana.if /usr/share/selinux/devel/include/myapplications/grafana.if

# Compile and install the SELinux module.
sudo dnf install -y selinux-policy-devel setools-console policycoreutils-devel
sudo make -f /usr/share/selinux/devel/Makefile grafana.pp
sudo semodule -i grafana.pp

# Add grafana ports
sudo semanage port -a -t grafana_port_t -p tcp 3000

# Restore all the correct context labels
sudo restorecon -RvF /usr/sbin/grafana-* \
    /etc/grafana \
    /var/log/grafana \
    /var/lib/grafana \
    /usr/share/grafana/bin

# Start grafana
sudo systemctl start grafana-server.service

# Ensure it's working in the proper confinement
ps -eZ | grep grafana
#+END_SRC

Log in to the [[http://localhost:3000][Grafana panel]]:

- username: admin
- password: admin (the default; change this immediately)

** Add Prometheus data source

1. Bar menu
2. Data sources
3. Add new data source
4. Choose Prometheus data source
   - Name: Prometheus
   - URL: http://localhost:9090
5. Save & test

Ensure the data source is working before continuing.

If you're running Grafana on an SELinux host, set an SELinux boolean to allow Grafana to access the Prometheus port:

#+BEGIN_SRC shell
sudo setsebool -P grafana_can_tcp_connect_prometheus_port=1
#+END_SRC

** Add Loki data source

Since Loki is running on hyperreal.coffee:3100, the firewall's internal zone on that host needs to allow connections to port ~3100~ from my IP address.

#+BEGIN_SRC shell
sudo firewall-cmd --zone=internal --permanent --add-port=3100/tcp
sudo firewall-cmd --reload
#+END_SRC

In the Grafana panel:

1. Bar menu
2. Data sources
3. Add new data source
4. Choose Loki data source
   - Name: Loki
   - URL: http://hyperreal.coffee:3100
5. Save & test

Ensure the data source is working before continuing.

** Add Node Exporter dashboard
:PROPERTIES:
:CUSTOM_ID: grafana:node
:END:

1. Visit the [[https://grafana.com/grafana/dashboards/][Grafana Dashboard Library]].
2. Search for "Node Exporter Full".
3. Copy the ID for Node Exporter Full.
4. Go to the Grafana panel bar menu.
5. Dashboards
6. New > Import
7. Paste the Node Exporter Full ID into the field, and press the Load button.

** Add Caddy dashboard
:PROPERTIES:
:CUSTOM_ID: grafana:caddy
:END:

1. Visit [[https://grafana.com/grafana/dashboards/20802-caddy-monitoring/][Caddy Monitoring]] on the Grafana Dashboard Library.
2. Copy the ID to clipboard.
3. Go to the Grafana panel bar menu.
4. Dashboards
5. New > Import
6. Paste the Caddy Monitoring ID into the field, and press the Load button.

** Add qBittorrent dashboard
:PROPERTIES:
:CUSTOM_ID: grafana:qbittorrent
:END:

1. Visit [[https://grafana.com/grafana/dashboards/15116-qbittorrent-dashboard/][qBittorrent Dashboard]] on the Grafana Dashboard Library.
2. Copy the ID to clipboard.
3. Go to the Grafana panel bar menu.
4. Dashboards
5. New > Import
6. Paste the qBittorrent Dashboard ID into the field, and press the Load button.
#+SETUPFILE: ../org-templates/page.org

This is a collection of notes on tech and other things. Intended audience: mostly me.

- [[file:aide.org][AIDE]]
- [[file:atop.org][Atop]]
- [[file:bash.org][Bash]]
- [[file:btrbk.org][Btrbk]]
- [[file:btrfs.org][Btrfs]]
- [[file:caddy.org][Caddy]]
- [[file:cts-accessibility.org][Carpal tunnel syndrome]]
- [[file:cgit.org][Cgit]]
- [[file:debian.org][Debian]]
- [[file:dietpi.org][DietPi]]
- [[file:fedora-atomic.org][Fedora Atomic]]
- [[file:firewalld.org][Firewalld]]
- [[file:gitlab.org][GitLab]]
- [[file:grafana.org][Grafana]]
- [[file:internet-archive.org][Internet Archive]]
- [[file:kernel.org][Kernel]]
- [[file:lvm2.org][LVM2]]
- [[file:mastodon.org][Mastodon]]
- [[file:windows.org][Microsoft Windows]]
- [[file:nfs.org][NFS]]
- [[file:networking.org][Networking]]
- [[file:nextcloud.org][Nextcloud]]
- [[file:openssl.org][OpenSSL]]
- [[file:packet-tracer.org][Packet Tracer]]
- [[file:parallel.org][Parallel]]
- [[file:postgresql.org][PostgreSQL]]
- [[file:prometheus.org][Prometheus]]
- [[file:qcow2.org][QCOW2]]
- [[file:qemu.org][QEMU]]
- [[file:raid.org][RAID]]
- [[file:re-hd.org][Resident Evil HD]]
- [[file:retropie.org][RetroPie]]
- [[file:systemd.org][Systemd]]
- [[file:voidlinux.org][Void Linux]]
- [[file:zfs.org][ZFS]]
#+title: Internet Archive
#+setupfile: ../org-templates/page.org

** Install Python command line client

#+BEGIN_SRC bash
pipx install internetarchive
#+END_SRC
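The client needs archive.org credentials before it can download; ~ia configure~ prompts for them and writes a config file:

#+BEGIN_SRC bash
ia configure
#+END_SRC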

** Use Python client to download torrent files from given collection

In qBittorrent, ensure "Automatically add torrents from" has the Monitored Folder set to ~/mnt/torrent_files~, with the Override save path set to Default save path.

*** Get itemlist from collection

#+BEGIN_SRC bash
ia search --itemlist "collection:bbsmagazine" | tee bbsmagazine.txt
#+END_SRC

*** Download torrent files from each item using parallel

#+BEGIN_SRC bash
cat bbsmagazine.txt | parallel 'ia download --format "Archive BitTorrent" --destdir=/mnt/torrent_files {}'
#+END_SRC

*** Move .torrent files from their directories to ~/mnt/torrent_files~

#+BEGIN_SRC bash
find /mnt/torrent_files -type f -name "*.torrent" -exec mv {} /mnt/torrent_files \;
#+END_SRC

#+BEGIN_QUOTE
Note: .torrent files will be removed from ~/mnt/torrent_files~ by qBittorrent once they are added to the instance.
#+END_QUOTE

*** Remove empty directories

#+BEGIN_SRC bash
find /mnt/torrent_files -maxdepth 1 -mindepth 1 -type d -delete
#+END_SRC
#+title: Kernel
#+setupfile: ../org-templates/page.org

** Disable core dumps in Linux

*** limits.conf and sysctl

Edit ~/etc/security/limits.conf~ and append the following lines:

#+BEGIN_SRC bash
* hard core 0
* soft core 0
#+END_SRC

Edit ~/etc/sysctl.d/9999-disable-core-dump.conf~:

#+BEGIN_SRC bash
fs.suid_dumpable=0
kernel.core_pattern=|/bin/false
#+END_SRC

#+BEGIN_SRC bash
sudo sysctl -p /etc/sysctl.d/9999-disable-core-dump.conf
#+END_SRC

- ~kernel.core_pattern=|/bin/false~ : the leading pipe makes the kernel hand each crash to the named program. ~/bin/false~ simply exits with a failure status, so the dump is discarded and core dumps are effectively disabled. (The default value is ~core~ on a Debian server and ~|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h~ on a Fedora desktop.)
- ~fs.suid_dumpable=0~ : any process that has changed privilege levels, or whose binary is execute-only, will not be dumped. Other values are ~1~ (debug mode: all processes dump core when possible, the current user owns the dump, and no security checks are applied) and ~2~ (suidsafe mode: programs that would normally not be dumped are dumped anyway, but only if ~kernel.core_pattern~ is set to a valid program).
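To confirm the limit is in effect, check the soft limit in a new shell; it should print ~0~:

#+BEGIN_SRC bash
ulimit -c
#+END_SRC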

*** systemd

#+BEGIN_SRC bash
sudo mkdir /etc/systemd/coredump.conf.d/
sudo nvim /etc/systemd/coredump.conf.d/custom.conf
#+END_SRC

#+BEGIN_SRC systemd
[Coredump]
Storage=none
ProcessSizeMax=0
#+END_SRC

- ~Storage=none~ and ~ProcessSizeMax=0~ disable all coredump handling except for a log entry under systemd.

#+BEGIN_SRC bash
sudo systemctl daemon-reload
#+END_SRC

Edit ~/etc/systemd/system.conf~. Make sure ~DefaultLimitCORE~ is commented out:

#+BEGIN_SRC systemd
#DefaultLimitCORE=infinity
#+END_SRC

#+BEGIN_SRC bash
sudo systemctl daemon-reexec
#+END_SRC
#+title: LVM2
#+setupfile: ../org-templates/page.org

** Add disk to LVM volume

Create a new physical volume on the new disk:

#+begin_src shell
sudo pvcreate /dev/vdb
sudo lvmdiskscan -l
#+end_src

Add the newly created physical volume (~/dev/vdb~) to the existing volume group:

#+begin_src shell
sudo vgextend almalinux /dev/vdb
#+end_src

Extend ~/dev/almalinux/root~ over all of the volume group's free space:

#+begin_src shell
sudo lvm lvextend -l +100%FREE /dev/almalinux/root
#+end_src

Grow the filesystem of the root volume:

#+begin_src shell
# ext4
sudo resize2fs -p /dev/mapper/almalinux-root

# xfs
sudo xfs_growfs /
#+end_src
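Verify the new sizes:

#+begin_src shell
sudo vgs
sudo lvs
df -h /
#+end_src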
#+title: Mastodon
#+setupfile: ../org-templates/page.org

** Full-text search with Elasticsearch

*** Install Elasticsearch

#+begin_src shell
sudo apt install -y openjdk-17-jre-headless

sudo wget -O /usr/share/keyrings/elasticsearch.asc https://artifacts.elastic.co/GPG-KEY-elasticsearch

echo "deb [signed-by=/usr/share/keyrings/elasticsearch.asc] https://artifacts.elastic.co/packages/7.x/apt stable main" | sudo tee /etc/apt/sources.list.d/elastic-7.x.list

sudo apt update

sudo apt install -y elasticsearch
#+end_src

*** Edit ~/etc/elasticsearch/elasticsearch.yaml~

#+begin_src yaml
xpack.security.enabled: true
discovery.type: single-node
#+end_src

*** Create passwords for built-in users

#+begin_src shell
sudo -u elasticsearch /usr/share/elasticsearch/bin/elasticsearch
#+end_src

In a separate shell:

#+begin_src shell
sudo -u elasticsearch /usr/share/elasticsearch/bin/elasticsearch-setup-passwords auto
#+end_src

Copy the generated password for the ~elastic~ user.

*** Create custom role for Mastodon to connect

As the mastodon user on the host:

#+begin_src shell
curl -X POST -u elastic:admin_password "localhost:9200/_security/role/mastodon_full_access?pretty" -H 'Content-Type: application/json' -d'
{
  "cluster": ["monitor"],
  "indices": [{
    "names": ["*"],
    "privileges": ["read", "monitor", "write", "manage"]
  }]
}
'
#+end_src

*** Create a user for Mastodon and assign it the custom role

#+begin_src shell
curl -X POST -u elastic:admin_password "localhost:9200/_security/user/mastodon?pretty" -H 'Content-Type: application/json' -d'
{
  "password": "l0ng-r4nd0m-p@ssw0rd",
  "roles": ["mastodon_full_access"]
}
'
#+end_src

*** Edit .env.production

#+begin_src shell
ES_ENABLED=true
ES_HOST=localhost
ES_PORT=9200
ES_PRESET=single_node_cluster
ES_USER=mastodon
ES_PASS=l0ng-r4nd0m-p@ssw0rd
#+end_src

*** Populate the indices

#+begin_src shell
systemctl restart mastodon-sidekiq
systemctl reload mastodon-web
su - mastodon
cd live
RAILS_ENV=production bin/tootctl search deploy
#+end_src

** S3-compatible object storage with MinIO

1. Install MinIO
2. Set the region for this instance to ~homelab~
3. Create the 'mastodata' bucket
4. Setup Tailscale

MinIO API endpoint: tailnet_ip_addr:9000

*** Caddy reverse proxy config

#+begin_quote
Ensure DNS resolves for assets.hyperreal.coffee
#+end_quote

#+begin_src caddy
assets.hyperreal.coffee {
    rewrite * /mastodata{path}
    reverse_proxy http://<tailnet_ip_addr>:9000 {
        header_up Host {upstream_hostport}
    }
}

fedi.hyperreal.coffee {
    @local {
        file
        not path /
    }
    @local_media {
        path_regexp /system/(.*)
    }

    redir @local_media https://assets.hyperreal.coffee/{http.regexp.1} permanent

    ...remainder of config
}
#+end_src

*** Set custom policy on mastodata bucket

#+begin_src json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "*"
      },
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::mastodata/*"
    }
  ]
}
#+end_src

*** Create mastodon-readwrite policy

#+begin_src json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:*",
      "Resource": "arn:aws:s3:::mastodata/*"
    }
  ]
}
#+end_src

*** Setup .env.production

#+begin_src shell
S3_ENABLED=true
S3_BUCKET=mastodata
AWS_ACCESS_KEY=<access key>
AWS_SECRET_ACCESS_KEY=<secret access key>
S3_REGION=homelab
S3_PROTOCOL=http
S3_ENDPOINT=http://<tailnet_ip_addr>:9000
S3_FORCE_SINGLE_REQUEST=true
S3_ALIAS_HOST=assets.hyperreal.coffee
#+end_src

*** Restart Caddy and Mastodon services

#+begin_src shell
sudo systemctl restart caddy.service mastodon-web.service mastodon-streaming.service mastodon-sidekiq.service
#+end_src
#+TITLE: Networking
#+SETUPFILE: ../org-templates/page.org

** Disable IPv6 (Debian)

Edit ~/etc/sysctl.conf~.

#+BEGIN_SRC
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1
net.ipv6.conf.lo.disable_ipv6 = 1
#+END_SRC

Apply the changes.

#+BEGIN_SRC shell
sudo sysctl -p
#+END_SRC
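Verify that no IPv6 addresses remain assigned:

#+BEGIN_SRC shell
ip -6 addr show
#+END_SRC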
#+title: Nextcloud
#+setupfile: ../org-templates/page.org

** Migrating

*** Backup: Run these commands on old server machine

#+begin_quote
Assumes the Nextcloud instance is installed on DietPi
#+end_quote

#+begin_src shell
sudo systemctl stop nginx.service
#+end_src

Put Nextcloud into maintenance mode:

#+begin_src shell
cd /var/www/nextcloud
sudo -u www-data php occ maintenance:mode --on
#+end_src

Backup the directories:

#+begin_src shell
DATE=$(date '+%Y%m%d')
sudo rsync -aAX /etc/nginx /home/dietpi/nginx-backup_$DATE
sudo rsync -aAX /var/www/nextcloud /home/dietpi/nextcloud-dir-backup_$DATE
sudo rsync -aAX /mnt/dietpi_userdata/nextcloud_data /home/dietpi/nextcloud-data-backup_$DATE
#+end_src

Dump the MariaDB database (~-p~ prompts for the password):

#+begin_src shell
sudo mysqldump --single-transaction --default-character-set=utf8mb4 -h localhost -u <username> -p nextcloud > /home/dietpi/nextcloud-db-backup_$DATE.sql
#+end_src

Rsync the files over to the new server machine:

#+begin_src shell
sudo rsync -aAX \
    /home/dietpi/nginx-backup_$DATE \
    /home/dietpi/nextcloud-dir-backup_$DATE \
    /home/dietpi/nextcloud-data-backup_$DATE \
    /home/dietpi/nextcloud-db-backup_$DATE.sql \
    dietpi@<new server ip>:/home/dietpi
#+end_src

*** Restore: Run these commands on new server machine

Assuming the web server is stopped.

Move the nginx, nextcloud-dir, and nextcloud-data backups to their correct locations. First ensure the default directories are removed.

#+begin_src shell
sudo rm -rf /etc/nginx
sudo rm -rf /var/www/nextcloud
sudo rm -rf /mnt/dietpi_userdata/nextcloud_data
sudo mv nginx-backup_$DATE /etc/nginx
sudo mv nextcloud-dir-backup_$DATE /var/www/nextcloud
sudo mv nextcloud-data-backup_$DATE /mnt/dietpi_userdata/nextcloud_data
sudo chown -R dietpi:dietpi /mnt/dietpi_userdata/nextcloud_data
sudo chown -R root:root /etc/nginx
#+end_src

Create the nextcloud database in MariaDB and load the dump (each command prompts for the root password):

#+begin_src shell
sudo mysql -h localhost -u root -p -e "DROP DATABASE nextcloud"
sudo mysql -h localhost -u root -p -e "CREATE DATABASE nextcloud CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci"
sudo mysql -h localhost -u root -p nextcloud < /home/dietpi/nextcloud-db-backup_$DATE.sql
#+end_src

Take Nextcloud out of maintenance mode:

#+begin_quote
You may have to change the 'oc_admin' database user password for occ commands to work.
#+end_quote

#+begin_src shell
cd /var/www/nextcloud
sudo -u www-data php occ maintenance:mode --off
#+end_src

Restart the services:

#+begin_src shell
sudo systemctl restart nginx mariadb redis-server php8.2-fpm
#+end_src
#+title: NFS
#+setupfile: ../org-templates/page.org

** Setup NFS server on Debian

#+begin_src shell
sudo apt install -y nfs-kernel-server nfs-common
#+end_src

Configure NFSv4 in ~/etc/default/nfs-common~:

#+begin_src shell
NEED_STATD="no"
NEED_IDMAPD="yes"
#+end_src

Configure NFSv4 in ~/etc/default/nfs-kernel-server~. Disable NFSv2 and NFSv3.

#+begin_src shell
RPCNFSDOPTS="-N 2 -N 3"
RPCMOUNTDOPTS="--manage-gids -N 2 -N 3"
#+end_src

#+begin_src shell
sudo systemctl restart nfs-server
#+end_src

Configure FirewallD:

#+begin_src shell
sudo firewall-cmd --zone=public --permanent --add-service=nfs
sudo firewall-cmd --reload
#+end_src

Setup the pseudo filesystem and exports:

#+begin_src shell
sudo mkdir /shared
sudo chown -R nobody:nogroup /shared
#+end_src

Add the exported directory to ~/etc/exports~:

#+begin_src shell
/shared <ip address of client>(rw,no_root_squash,no_subtree_check,crossmnt,fsid=0)
#+end_src

Create the NFS table:

#+begin_src shell
sudo exportfs -a
#+end_src
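List the active exports to confirm:

#+begin_src shell
sudo exportfs -v
#+end_src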

** Setup NFS client on Debian

#+begin_src shell
sudo apt install -y nfs-common
#+end_src

Create the shared directory:

#+begin_src shell
sudo mkdir -p /mnt/shared
#+end_src

Mount the NFS exports:

#+begin_src shell
sudo mount.nfs4 <ip address of server>:/ /mnt/shared
#+end_src

#+begin_quote
Note that ~<server ip>:/~ is relative to the exported directory. So ~/mnt/shared~ on the client is ~/shared~ on the server. If you try to mount with ~mount -t nfs <server ip>:/shared /mnt/shared~ you will get a /no such file or directory/ error.
#+end_quote

~/etc/fstab~ entry:

#+begin_src shell
<ip address of server>:/ /mnt/shared nfs4 soft,intr,rsize=8192,wsize=8192 0 0
#+end_src

#+begin_src shell
sudo systemctl daemon-reload
sudo mount -av
#+end_src

** Setup NFS server on FreeBSD

Edit ~/etc/rc.conf~.

#+BEGIN_SRC shell
nfs_server_enable="YES"
nfs_server_flags="-u -t -n 4"
rpcbind_enable="YES"
mountd_flags="-r"
mountd_enable="YES"
#+END_SRC

Edit ~/etc/exports~.

#+BEGIN_SRC
/data1 -alldirs -mapall=user1 host1 host2 host3
/data2 -alldirs -maproot=user2 host2
#+END_SRC

Start the services.

#+BEGIN_SRC shell
sudo service rpcbind start
sudo service nfsd start
sudo service mountd start
#+END_SRC

After making changes to the exports file, signal mountd to reread it for the changes to take effect:

#+BEGIN_SRC shell
kill -HUP `cat /var/run/mountd.pid`
#+END_SRC

** Setup NFS client on FreeBSD

Edit ~/etc/rc.conf~.

#+BEGIN_SRC shell
nfs_client_enable="YES"
nfs_client_flags="-n 4"
rpc_lockd_enable="YES"
rpc_statd_enable="YES"
#+END_SRC

** Mount NFS share on client with systemd

Create a file at ~/etc/systemd/system/mnt-backup.mount~.

#+BEGIN_SRC systemd
[Unit]
Description=borgbackup NFS share from FreeBSD
DefaultDependencies=no
Conflicts=umount.target
After=network-online.target remote-fs.target
Before=umount.target

[Mount]
What=10.0.0.119:/coffeeNAS/borgbackup/repositories
Where=/mnt/backup
Type=nfs
Options=defaults,vers=3

[Install]
WantedBy=multi-user.target
#+END_SRC
#+title: OpenSSL
#+setupfile: ../org-templates/page.org

** Certificate and CA for HTTPS

*** Self-signed certificate

To generate a self-signed certificate:

#+begin_src shell
openssl req \
    -newkey rsa:4096 \
    -x509 \
    -sha256 \
    -days 3650 \
    -noenc \
    -out coffeeNET.crt \
    -keyout coffeeNET.key \
    -subj "/C=US/ST=Illinois/L=Chicago/O=coffeeNET/OU=Homelab/CN=lab.home.arpa"
#+end_src

What these options mean:

| Option                  | Description                                                                                                     |
|-------------------------+-----------------------------------------------------------------------------------------------------------------|
| ~-newkey rsa:4096~      | Generates a new certificate request and a 4096-bit RSA key. The default is 2048 if you don't specify.            |
| ~-x509~                 | Specifies that you want to create a self-signed certificate rather than a certificate signing request.           |
| ~-sha256~               | Uses the 256-bit SHA (Secure Hash Algorithm) for the certificate.                                                |
| ~-days 3650~            | Sets the validity of the certificate to 3650 days (10 years), but you can adjust this to any positive integer.   |
| ~-noenc~                | Creates the certificate without a passphrase. Stands for "no encryption".                                        |
| ~-out coffeeNET.crt~    | Outputs the certificate to a file named ~coffeeNET.crt~.                                                         |
| ~-keyout coffeeNET.key~ | Outputs the private key to a file named ~coffeeNET.key~.                                                         |
| ~-subj~                 | Provides subject information about the certificate. See below.                                                   |

Subject information:

| Option              | Description                                                                        |
|---------------------+------------------------------------------------------------------------------------|
| ~/C=US~             | Country code                                                                       |
| ~/ST=Illinois~      | State                                                                              |
| ~/L=Chicago~        | Locality/city                                                                      |
| ~/O=coffeeNET~      | Organization name                                                                  |
| ~/OU=Homelab~       | Organizational unit                                                                |
| ~/CN=lab.home.arpa~ | Common name, which is often the fully-qualified domain name for the certificate.   |

*** Certificate Authority

Create a private key for the CA. This key should be encrypted with AES for security reasons, and you should use a strong password of 20+ characters.
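A sketch of generating such a key (the ~-aes256~ flag encrypts it with a passphrase):

#+begin_src shell
openssl genrsa -aes256 -out coffeeNET-RootCA.key 4096
#+end_src

Then create the self-signed CA certificate from that key: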

#+begin_src shell
openssl req \
    -x509 \
    -new \
    -key coffeeNET-RootCA.key \
    -sha256 \
    -days 1826 \
    -out coffeeNET-RootCA.crt \
    -subj "/C=US/ST=Illinois/L=Chicago/O=coffeeNET/OU=Homelab/CN=lab.home.arpa"
#+end_src

Add the CA certificate to the trusted root certificates on clients:

#+begin_src shell
sudo cp coffeeNET-RootCA.crt /etc/pki/ca-trust/source/anchors/
sudo update-ca-trust
#+end_src

These steps establish your own CA, after which you can sign certificates with it to be recognized as valid within your network.
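For example, a sketch of signing a server certificate with this CA (file and host names are hypothetical):

#+begin_src shell
# Generate a key and a certificate signing request for the service
openssl req -newkey rsa:4096 -noenc -keyout service.key -out service.csr \
    -subj "/C=US/ST=Illinois/L=Chicago/O=coffeeNET/OU=Homelab/CN=service.lab.home.arpa"

# Sign the CSR with the CA key and certificate
openssl x509 -req -in service.csr -CA coffeeNET-RootCA.crt -CAkey coffeeNET-RootCA.key \
    -CAcreateserial -sha256 -days 825 -out service.crt
#+end_src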
#+title: Packet Tracer
#+setupfile: ../org-templates/page.org

** Fix GUI issues with KDE Plasma dark theme

#+BEGIN_SRC shell
mkdir ~/.config-pt
cd ~/.config
cp -rf dconf gtk-3.0 gtk-4.0 xsettingsd ~/.config-pt
#+END_SRC

1. Right-click on Menu button.
2. Click Edit Applications.
3. Select Packet Tracer.
4. Add ~XDG_CONFIG_HOME=/home/jas/.config-pt~ to Environment variables.
5. Save.

[[https://redlib.nirn.quest/r/kde/comments/lcm2n3/how_to_make_specific_application_ignore_system/][Source]]. Thanks, [[https://redlib.nirn.quest/user/AtomHeartSon][u/AtomHeartSon]]!
#+title: Parallel
#+setupfile: ../org-templates/page.org

** Pulling files from remote server with rsync

To transfer just the files:

#+BEGIN_SRC bash
ssh user@remote -- find /path/to/parent/directory -type f | parallel -v -j16 rsync -Havessh -aAXP user@remote:{} /local/path
#+END_SRC

To transfer the entire directory:

#+BEGIN_SRC bash
echo "/path/to/parent/directory" | parallel -v -j16 rsync -Havessh -aAXP user@remote:{} /local/path
#+END_SRC

** Pushing files to remote server with rsync

To transfer just the files:

#+BEGIN_SRC bash
find /path/to/local/directory -type f | parallel -v -j16 -X rsync -aAXP /path/to/local/directory/{} user@remote:/path/to/dest/dir
#+END_SRC

** Running the same command on multiple remote hosts

#+BEGIN_SRC bash
parallel --tag --nonall -S remote0,remote1,remote2 uptime
#+END_SRC
#+title: PostgreSQL
#+setupfile: ../org-templates/page.org

** Change password for user

#+begin_src shell
sudo -u user_name psql db_name
#+end_src

#+begin_src sql
ALTER USER user_name WITH PASSWORD 'new_password';
#+end_src

** Update password auth method to SCRAM

Edit ~/etc/postgresql/16/main/postgresql.conf~:

#+BEGIN_SRC shell
password_encryption = scram-sha-256
#+END_SRC

Restart postgresql.service:

#+BEGIN_SRC shell
sudo systemctl restart postgresql.service
#+END_SRC

At this point, any services using the old MD5 auth method will fail to connect to their PostgreSQL databases.

Update the settings in ~/etc/postgresql/16/main/pg_hba.conf~:

#+BEGIN_SRC shell
# TYPE  DATABASE  USER          ADDRESS  METHOD
local   all       mastodon               scram-sha-256
local   all       synapse_user           scram-sha-256
#+END_SRC

Enter a psql shell and determine who needs to upgrade their auth method, then reset each affected user's password:

#+BEGIN_SRC sql
SELECT rolname, rolpassword ~ '^SCRAM-SHA-256\$' AS has_upgraded FROM pg_authid WHERE rolcanlogin;

\password username
#+END_SRC

Restart postgresql.service and all services using a PostgreSQL database:

#+BEGIN_SRC shell
sudo systemctl restart postgresql.service
sudo systemctl restart mastodon-web.service mastodon-sidekiq.service mastodon-streaming.service
sudo systemctl restart matrix-synapse.service
#+END_SRC
#+title: Prometheus
#+setupfile: ../org-templates/page.org

** Download and install

Go to [[https://prometheus.io/download/]] and download the latest version.

#+BEGIN_SRC shell
export PROM_VER="2.54.0"
wget "https://github.com/prometheus/prometheus/releases/download/v${PROM_VER}/prometheus-${PROM_VER}.linux-amd64.tar.gz"
#+END_SRC

Verify the checksum is correct.
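Each release publishes a ~sha256sums.txt~; assuming it covers the downloaded tarball, the check is:

#+BEGIN_SRC shell
wget "https://github.com/prometheus/prometheus/releases/download/v${PROM_VER}/sha256sums.txt"
sha256sum --check --ignore-missing sha256sums.txt
#+END_SRC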
|
||||
|
||||
Unpack the tarball:
|
||||
#+BEGIN_SRC shell
|
||||
tar xvfz prometheus-*.tar.gz
|
||||
rm prometheus-*.tar.gz
|
||||
#+END_SRC
|
||||
|
||||
Create two directories for Prometheus to use. ~/etc/prometheus~ for configuration files and ~/var/lib/prometheus~ for application data.
|
||||
#+BEGIN_SRC shell
|
||||
sudo mkdir /etc/prometheus /var/lib/prometheus
|
||||
#+END_SRC
|
||||
|
||||
Move the ~prometheus~ and ~promtool~ binaries to ~/usr/local/bin~:
|
||||
#+BEGIN_SRC shell
|
||||
cd prometheus-*
|
||||
sudo mv prometheus promtool /usr/local/bin
|
||||
#+END_SRC
|
||||
|
||||
Move the configuration file to the configuration directory:
|
||||
#+BEGIN_SRC shell
|
||||
sudo mv prometheus.yml /etc/prometheus/prometheus.yml
|
||||
#+END_SRC
|
||||
|
||||
Move the remaining files to their appropriate directories:
|
||||
#+BEGIN_SRC shell
|
||||
sudo mv consoles/ console_libraries/ /etc/prometheus/
|
||||
#+END_SRC
|
||||
|
||||
Verify that Prometheus is installed:
|
||||
#+BEGIN_SRC shell
|
||||
prometheus --version
|
||||
#+END_SRC
|
||||
|
||||
** Configure prometheus.service
|
||||
Create a prometheus user and assign ownership to directories:
|
||||
#+BEGIN_SRC shell
|
||||
sudo useradd -rs /bin/false prometheus
|
||||
sudo chown -R prometheus: /etc/prometheus /var/lib/prometheus
|
||||
#+END_SRC
|
||||
|
||||
Save the following contents to a file at ~/etc/systemd/system/prometheus.service~:
|
||||
#+BEGIN_SRC shell
|
||||
[Unit]
|
||||
Description=Prometheus
|
||||
Wants=network-online.target
|
||||
After=network-online.target
|
||||
|
||||
[Service]
|
||||
User=prometheus
|
||||
Group=prometheus
|
||||
Type=simple
|
||||
Restart=on-failure
|
||||
RestartSec=5s
|
||||
ExecStart=/usr/local/bin/prometheus \
|
||||
--config.file /etc/prometheus/prometheus.yml \
|
||||
--storage.tsdb.path /var/lib/prometheus/ \
|
||||
--web.console.templates=/etc/prometheus/consoles \
|
||||
--web.console.libraries=/etc/prometheus/console_libraries \
|
||||
--web.listen-address=0.0.0.0:9090 \
|
||||
--web.enable-lifecycle \
|
||||
--log.level=info
|
||||
|
||||
[Install]
|
||||
WantedBy=multi-user.target
|
||||
#+END_SRC
|
||||
|
||||
Reload the system daemons:
|
||||
#+BEGIN_SRC shell
|
||||
sudo systemctl daemon-reload
|
||||
#+END_SRC
|
||||
|
||||
Start and enable ~prometheus.service~:
|
||||
#+BEGIN_SRC shell
|
||||
sudo systemctl enable --now prometheus.service
|
||||
#+END_SRC
|
||||
|
||||
#+BEGIN_QUOTE
|
||||
For systems running SELinux, the following policy settings must be applied.
|
||||
#+END_QUOTE
|
||||
|
||||
#+BEGIN_SRC selinux
module prometheus 1.0;

require {
    type init_t;
    type websm_port_t;
    type user_home_t;
    type unreserved_port_t;
    type hplip_port_t;
    class file { execute execute_no_trans map open read };
    class tcp_socket name_connect;
}

#============= init_t ==============
allow init_t hplip_port_t:tcp_socket name_connect;
allow init_t unreserved_port_t:tcp_socket name_connect;
allow init_t user_home_t:file { execute execute_no_trans map open read };
allow init_t websm_port_t:tcp_socket name_connect;
#+END_SRC

Now compile and import the module:
#+BEGIN_SRC shell
sudo checkmodule -M -m -o prometheus.mod prometheus.te
sudo semodule_package -o prometheus.pp -m prometheus.mod
sudo semodule -i prometheus.pp
#+END_SRC

Restart ~prometheus.service~. If it does not start, ensure all SELinux policies have been applied by generating a policy module from the audit log and loading it:
#+BEGIN_SRC shell
sudo grep "prometheus" /var/log/audit/audit.log | sudo audit2allow -M prometheus
sudo semodule -i prometheus.pp
#+END_SRC

Restart ~prometheus.service~ again.

The Prometheus web interface and dashboard should now be browsable at [[http://localhost:9090]].

** Install and configure Node Exporter on each client using Ansible ad-hoc
This assumes you have an inventory file properly set up. The inventory file contains a host group labeled 'homelab', which lists all the hosts you want to install node_exporter on.
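
A minimal ~inventory.yml~ sketch (the hostnames here are hypothetical):
#+BEGIN_SRC yaml
homelab:
  hosts:
    host1.example.com:
    host2.example.com:
#+END_SRC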

Place the following ~node_exporter.service~ into the same directory as the inventory file:
#+BEGIN_SRC systemd
[Unit]
Description=Node Exporter
Wants=network-online.target
After=network-online.target

[Service]
User=node_exporter
Group=node_exporter
Type=simple
Restart=on-failure
RestartSec=5s
ExecStart=/usr/local/bin/node_exporter

[Install]
WantedBy=multi-user.target
#+END_SRC

Download and unpack the latest version of node_exporter. This note assumes it is version 1.8.2.
#+BEGIN_SRC shell
ansible -i inventory.yml homelab -a "wget https://github.com/prometheus/node_exporter/releases/download/v1.8.2/node_exporter-1.8.2.linux-amd64.tar.gz"
ansible -i inventory.yml homelab -m ansible.builtin.shell -a "echo '6809dd0b3ec45fd6e992c19071d6b5253aed3ead7bf0686885a51d85c6643c66  node_exporter-1.8.2.linux-amd64.tar.gz' | sha256sum -c"
ansible -i inventory.yml homelab -m ansible.builtin.shell -a "tar xfz node_exporter-1.8.2.linux-amd64.tar.gz"
ansible -i inventory.yml homelab -m ansible.builtin.shell -a "sudo mv node_exporter-1.8.2.linux-amd64/node_exporter /usr/local/bin/"
ansible -i inventory.yml homelab -m ansible.builtin.shell -a "rm -rf node_exporter-1.8.2.linux-amd64*"
ansible -i inventory.yml homelab -m ansible.builtin.shell -a "sudo useradd -rs /bin/false node_exporter"
ansible -b -i inventory.yml homelab -m ansible.builtin.copy -a "src=node_exporter.service dest=/etc/systemd/system/node_exporter.service"
ansible -b -i inventory.yml homelab -m ansible.builtin.systemd_service -a "daemon_reload=true"
ansible -b -i inventory.yml homelab -m ansible.builtin.systemd_service -a "name=node_exporter enabled=true state=started"
#+END_SRC

Node Exporter should now be installed, started, and enabled on each host with the homelab label in the inventory.

To confirm that statistics are being collected on each host, navigate to ~http://host_url:9100~. A page entitled Node Exporter should be displayed containing a link for Metrics. Click the link and confirm that statistics are being collected.
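
Alternatively, a quick check from the command line (~host_url~ is a placeholder):
#+BEGIN_SRC shell
curl -s http://host_url:9100/metrics | head
#+END_SRC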

Note that each node_exporter host must be accessible through the firewall on port 9100. Firewalld can be configured for the ~internal~ zone on each host (the permanent rules take effect after a reload):
#+BEGIN_SRC shell
sudo firewall-cmd --zone=internal --permanent --add-source=<my_ip_addr>
sudo firewall-cmd --zone=internal --permanent --add-port=9100/tcp
sudo firewall-cmd --reload
#+END_SRC

#+BEGIN_QUOTE
Note: I have to configure the ~internal~ zone on Firewalld to allow traffic from my IP address on ports HTTP, HTTPS, SSH, and 1965 in order to access, for example, my web services on the node_exporter host.
#+END_QUOTE

** Configure Prometheus to monitor the client nodes
Edit ~/etc/prometheus/prometheus.yml~. My Prometheus configuration looks like this:
#+BEGIN_SRC yaml
# my global config
global:
  scrape_interval: 15s # Set the scrape interval to every 15 seconds. Default is every 1 minute.
  evaluation_interval: 15s # Evaluate rules every 15 seconds. The default is every 1 minute.
  # scrape_timeout is set to the global default (10s).

# Alertmanager configuration
alerting:
  alertmanagers:
    - static_configs:
        - targets:
          # - alertmanager:9093

# Load rules once and periodically evaluate them according to the global 'evaluation_interval'.
rule_files:
  # - "first_rules.yml"
  # - "second_rules.yml"

# A scrape configuration containing exactly one endpoint to scrape:
# Here it's Prometheus itself.
scrape_configs:
  # The job name is added as a label `job=<job_name>` to any timeseries scraped from this config.
  - job_name: "prometheus"

    # metrics_path defaults to '/metrics'
    # scheme defaults to 'http'.

    static_configs:
      - targets: ["localhost:9090"]

  - job_name: "remote_collector"
    scrape_interval: 10s
    static_configs:
      - targets: ["hyperreal.coffee:9100", "box.moonshadow.dev:9100", "10.0.0.26:9100", "bttracker.nirn.quest:9100"]
#+END_SRC

The job ~remote_collector~ scrapes metrics from each of the hosts running node_exporter. Ensure that port ~9100~ is open in the firewall, and if it is a public-facing node, ensure that port ~9100~ can only be accessed from my IP address.
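
Before restarting Prometheus, the edited file can be validated with promtool:
#+BEGIN_SRC shell
promtool check config /etc/prometheus/prometheus.yml
#+END_SRC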

** Configure Prometheus to monitor qBittorrent client nodes
For each qBittorrent instance you want to monitor, set up a Docker or Podman container with [[https://github.com/caseyscarborough/qbittorrent-exporter]]. The containers will run on the machine running Prometheus so they are accessible at localhost. Let's say I have three qBittorrent instances I want to monitor.
#+BEGIN_SRC shell
podman run \
    --name=qbittorrent-exporter-0 \
    -e QBITTORRENT_USERNAME=username0 \
    -e QBITTORRENT_PASSWORD=password0 \
    -e QBITTORRENT_BASE_URL=http://localhost:8080 \
    -p 17871:17871 \
    --restart=always \
    caseyscarborough/qbittorrent-exporter:latest

podman run \
    --name=qbittorrent-exporter-1 \
    -e QBITTORRENT_USERNAME=username1 \
    -e QBITTORRENT_PASSWORD=password1 \
    -e QBITTORRENT_BASE_URL=https://qbittorrent1.tld \
    -p 17872:17871 \
    --restart=always \
    caseyscarborough/qbittorrent-exporter:latest

podman run \
    --name=qbittorrent-exporter-2 \
    -e QBITTORRENT_USERNAME=username2 \
    -e QBITTORRENT_PASSWORD=password2 \
    -e QBITTORRENT_BASE_URL=https://qbittorrent2.tld \
    -p 17873:17871 \
    --restart=always \
    caseyscarborough/qbittorrent-exporter:latest
#+END_SRC

*** Using systemd quadlets
Alternatively, run the exporter as a Podman quadlet. Save this to, e.g., ~/etc/containers/systemd/qbittorrent-exporter.container~:
#+BEGIN_SRC systemd
[Unit]
Description=qbittorrent-exporter
After=network-online.target

[Container]
Image=docker.io/caseyscarborough/qbittorrent-exporter:latest
ContainerName=qbittorrent-exporter
Environment=QBITTORRENT_USERNAME=username
Environment=QBITTORRENT_PASSWORD=password
Environment=QBITTORRENT_BASE_URL=http://localhost:8080
PublishPort=17871:17871

[Install]
WantedBy=multi-user.target default.target
#+END_SRC
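
Assuming the file is installed under ~/etc/containers/systemd/~ as above, Podman's quadlet generator turns the ~.container~ file into a service unit at daemon-reload time:
#+BEGIN_SRC shell
sudo systemctl daemon-reload
sudo systemctl start qbittorrent-exporter.service
#+END_SRC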

Now add this to the ~scrape_configs~ section of ~/etc/prometheus/prometheus.yml~ to configure Prometheus to scrape these metrics.
#+BEGIN_SRC yaml
  - job_name: "qbittorrent"
    static_configs:
      - targets: ["localhost:17871", "localhost:17872", "localhost:17873"]
#+END_SRC

** Monitor Caddy with Prometheus and Loki
*** Caddy: metrics activation
Add the ~metrics~ global option and ensure the admin endpoint is enabled.
#+BEGIN_SRC caddyfile
{
    admin 0.0.0.0:2019
    servers {
        metrics
    }
}
#+END_SRC

Restart Caddy:
#+BEGIN_SRC shell
sudo systemctl restart caddy
sudo systemctl status caddy
#+END_SRC
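
To verify that metrics are being served, query the admin endpoint:
#+BEGIN_SRC shell
curl -s http://localhost:2019/metrics | head
#+END_SRC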

*** Caddy: logs activation
I have my Caddy configuration modularized, with ~/etc/caddy/Caddyfile~ as the central file. It looks something like this:
#+BEGIN_SRC caddyfile
{
    admin 0.0.0.0:2019
    servers {
        metrics
    }
}

## hyperreal.coffee
import /etc/caddy/anonoverflow.caddy
import /etc/caddy/breezewiki.caddy
import /etc/caddy/cdn.caddy
...
#+END_SRC

Each imported file is a virtual host with its own separate configuration, corresponding to a subdomain of hyperreal.coffee. I have logging disabled on most of them except the ones for which troubleshooting with logs would be convenient, such as the one for my Mastodon instance. For ~/etc/caddy/fedi.caddy~, I've added these lines to enable logging:
#+BEGIN_SRC caddyfile
fedi.hyperreal.coffee {
    log {
        output file /var/log/caddy/fedi.log {
            roll_size 100MiB
            roll_keep 5
            roll_keep_for 100d
        }
        format json
        level INFO
    }
}
#+END_SRC

Restart Caddy:
#+BEGIN_SRC shell
sudo systemctl restart caddy
sudo systemctl status caddy
#+END_SRC

Ensure port ~2019~ can only be accessed by my IP address, using Firewalld's internal zone:
#+BEGIN_SRC shell
sudo firewall-cmd --zone=internal --permanent --add-port=2019/tcp
sudo firewall-cmd --reload
sudo firewall-cmd --info-zone=internal
#+END_SRC

Add the Caddy configuration to the ~scrape_configs~ section of ~/etc/prometheus/prometheus.yml~:
#+BEGIN_SRC yaml
  - job_name: "caddy"
    static_configs:
      - targets: ["hyperreal.coffee:2019"]
#+END_SRC

Restart Prometheus on the monitor host:
#+BEGIN_SRC shell
sudo systemctl restart prometheus.service
#+END_SRC

*** Loki and Promtail setup
On the node running Caddy, install the loki and promtail packages:
#+BEGIN_SRC shell
sudo apt install -y loki promtail
#+END_SRC

Edit the Promtail configuration file at ~/etc/promtail/config.yml~ and add a ~caddy~ job under ~scrape_configs~:
#+BEGIN_SRC yaml
  - job_name: caddy
    static_configs:
      - targets:
          - localhost
        labels:
          job: caddy
          __path__: /var/log/caddy/*.log
          agent: caddy-promtail
    pipeline_stages:
      - json:
          expressions:
            duration: duration
            status: status
      - labels:
          duration:
          status:
#+END_SRC

The entire Promtail configuration should look like this:
#+BEGIN_SRC yaml
# This minimal config scrapes only a single log file.
# Primarily used in rpm/deb packaging where the promtail service can be started during the system init process.
# And too much scraping during the init process can overload the complete system.
# https://github.com/grafana/loki/issues/11398

server:
  http_listen_port: 9080
  grpc_listen_port: 0

positions:
  filename: /tmp/positions.yaml

clients:
  - url: http://localhost:3100/loki/api/v1/push

scrape_configs:
  - job_name: system
    static_configs:
      - targets:
          - localhost
        labels:
          job: varlogs
          #NOTE: Needs to be modified to scrape any additional logs of the system.
          __path__: /var/log/messages

  - job_name: caddy
    static_configs:
      - targets:
          - localhost
        labels:
          job: caddy
          __path__: /var/log/caddy/*log
          agent: caddy-promtail
    pipeline_stages:
      - json:
          expressions:
            duration: duration
            status: status
      - labels:
          duration:
          status:
#+END_SRC

Restart the Promtail and Loki services:
#+BEGIN_SRC shell
sudo systemctl restart promtail
sudo systemctl restart loki
#+END_SRC

Ensure the promtail user has permission to read Caddy's logs:
#+BEGIN_SRC shell
sudo usermod -aG caddy promtail
sudo chmod g+r /var/log/caddy/*.log
#+END_SRC

The [[http://localhost:9090/targets][Prometheus dashboard]] should now show the Caddy target with a state of "UP".

** Monitor TOR node
Edit ~/etc/tor/torrc~ to add Metrics info. ~x.x.x.x~ is the IP address where Prometheus is running.
#+BEGIN_SRC shell
## Prometheus exporter
MetricsPort 0.0.0.0:9035 prometheus
MetricsPortPolicy accept x.x.x.x
#+END_SRC
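
Restart Tor so the new MetricsPort takes effect (the unit name may vary by distro):
#+BEGIN_SRC shell
sudo systemctl restart tor
#+END_SRC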

Configure FirewallD to allow inbound traffic to port ~9035~ on the internal zone. Ensure the internal zone's source is the IP address of the server where Prometheus is running. Ensure port ~443~ is accessible from the Internet on FirewallD's public zone.
#+BEGIN_SRC shell
sudo firewall-cmd --zone=internal --permanent --add-source=x.x.x.x
sudo firewall-cmd --zone=internal --permanent --add-port=9035/tcp
sudo firewall-cmd --zone=public --permanent --add-service=https
sudo firewall-cmd --reload
#+END_SRC

Edit ~/etc/prometheus/prometheus.yml~ to add the TOR config. ~y.y.y.y~ is the IP address where TOR is running.
#+BEGIN_SRC yaml
scrape_configs:
  - job_name: "tor-relay"
    static_configs:
      - targets: ["y.y.y.y:9035"]
#+END_SRC

Restart Prometheus.
#+BEGIN_SRC shell
sudo systemctl restart prometheus.service
#+END_SRC

Go to Grafana and import [[https://files.hyperreal.coffee/grafana/tor_stats.json][tor_stats.json]] as a new dashboard, using the Prometheus datasource.

** Monitor Synapse homeserver
On the server running Synapse, edit ~/etc/matrix-synapse/homeserver.yaml~ to enable metrics.
#+BEGIN_SRC yaml
enable_metrics: true
#+END_SRC

Add a new listener to ~/etc/matrix-synapse/homeserver.yaml~ for Prometheus metrics.
#+BEGIN_SRC yaml
listeners:
  - port: 9400
    type: metrics
    bind_addresses: ['0.0.0.0']
#+END_SRC
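
Restart Synapse, then verify the metrics listener responds (assuming you are on the Synapse host itself):
#+BEGIN_SRC shell
sudo systemctl restart matrix-synapse.service
curl -s http://localhost:9400/_synapse/metrics | head
#+END_SRC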

On the server running Prometheus, add a target for Synapse.
#+BEGIN_SRC yaml
  - job_name: "synapse"
    scrape_interval: 1m
    metrics_path: "/_synapse/metrics"
    static_configs:
      - targets: ["hyperreal:9400"]
#+END_SRC

Also add the Synapse recording rules.
#+BEGIN_SRC yaml
rule_files:
  - /etc/prometheus/synapse-v2.rules
#+END_SRC

On the server running Prometheus, download the Synapse recording rules.
#+BEGIN_SRC shell
sudo wget https://files.hyperreal.coffee/prometheus/synapse-v2.rules -O /etc/prometheus/synapse-v2.rules
#+END_SRC

Restart Prometheus.

Use [[https://files.hyperreal.coffee/grafana/synapse.json][synapse.json]] for the Grafana dashboard.

61
qcow2.org
Normal file
@ -0,0 +1,61 @@
#+title: QCOW2
|
||||
#+setupfile: ../org-templates/page.org
|
||||
|
||||
** Mount qcow2 image
|
||||
Enable NBD on the host:
|
||||
#+begin_src shell
|
||||
sudo modprobe nbd max_part=8
|
||||
#+end_src
|
||||
|
||||
Connect qcow2 image as a network block device:
|
||||
#+begin_src shell
|
||||
sudo qemu-nbd --connect=/dev/nbd0 /path/to/image.qcow2
|
||||
#+end_src
|
||||
|
||||
Find the VM's partitions:
|
||||
#+begin_src shell
|
||||
sudo fdisk /dev/nbd0 -l
|
||||
#+end_src
|
||||
|
||||
Mount the partition from the VM:
|
||||
#+begin_src shell
|
||||
sudo mount /dev/nbd0p3 /mnt/point
|
||||
#+end_src
|
||||
|
||||
To unmount:
|
||||
#+begin_src shell
|
||||
sudo umount /mnt/point
|
||||
sudo qemu-nbd --disconnect /dev/nbd0
|
||||
sudo rmmod nbd
|
||||
#+end_src

** Resize qcow2 image
Install guestfs-tools (required for the virt-resize command):
#+begin_src shell
sudo dnf install -y guestfs-tools
sudo apt install -y guestfs-tools libguestfs-tools
#+end_src

To resize a qcow2 image, you'll have to create a new qcow2 image with the size you want, then use ~virt-resize~ to expand the old qcow2 image into the new one.

You'll need to know the root partition within the old qcow2 image.

Create a new qcow2 image with the size you want:
#+begin_src shell
qemu-img create -f qcow2 -o preallocation=metadata newdisk.qcow2 100G
#+end_src

Now resize the old one into the new one:
#+begin_src shell
virt-resize --expand /dev/vda3 olddisk.qcow2 newdisk.qcow2
#+end_src

Once you boot into the new qcow2 image, you'll probably have to adjust the size of the logical volume if it uses LVM:
#+begin_src shell
sudo lvresize -l +100%FREE /dev/mapper/sysvg-root
#+end_src

Then grow the XFS root filesystem within the logical volume:
#+begin_src shell
sudo xfs_growfs /dev/mapper/sysvg-root
#+end_src

80
qemu.org
Normal file
@ -0,0 +1,80 @@

#+title: QEMU
#+setupfile: ../org-templates/page.org

** Take snapshot of VM
List the VM's block devices:
#+begin_src shell
sudo virsh domblklist vm1

 Target   Source
-----------------------------------------------
 vda      /var/lib/libvirt/images/vm1.img
#+end_src

Create an external, disk-only snapshot:
#+begin_src shell
sudo virsh snapshot-create-as \
    --domain vm1 \
    --name guest-state1 \
    --diskspec vda,file=/var/lib/libvirt/images/overlay1.qcow2 \
    --disk-only \
    --atomic \
    --quiesce
#+end_src

Ensure ~qemu-guest-agent~ is installed inside the VM. Otherwise omit the ~--quiesce~ flag, but then restoring the VM will be as if the system had crashed. Not that big of a deal, since the VM's OS should flush required data and maintain consistency of its filesystems.

Copy the now-frozen base image, then merge the overlay back into it:
#+begin_src shell
sudo rsync -avhW --progress /var/lib/libvirt/images/vm1.img /var/lib/libvirt/images/vm1-copy.img
#+end_src

#+begin_src shell
sudo virsh blockcommit vm1 vda --active --verbose --pivot
#+end_src

** Full disk backup of VM
Start the guest VM:
#+begin_src shell
sudo virsh start vm1
#+end_src

Enumerate the disk(s) in use:
#+begin_src shell
sudo virsh domblklist vm1

 Target   Source
-------------------------------------------------
 vda      /var/lib/libvirt/images/vm1.qcow2
#+end_src

Begin the backup:
#+begin_src shell
sudo virsh backup-begin vm1

Backup started
#+end_src

Check the job status. "None" means the job has likely completed.
#+begin_src shell
sudo virsh domjobinfo vm1

Job type: None
#+end_src

Check the completed job status:
#+begin_src shell
sudo virsh domjobinfo vm1 --completed

Job type: Completed
Operation: Backup
Time elapsed: 182 ms
File processed: 39.250 MiB
File remaining: 0.000 B
File total: 39.250 MiB
#+end_src

Now we see the copy of the backup:
#+begin_src shell
sudo ls -lash /var/lib/libvirt/images/vm1.qcow2*

15M -rw-r--r--. 1 qemu qemu 15M May 10 12:22 vm1.qcow2
21M -rw-------. 1 root root 21M May 10 12:23 vm1.qcow2.1620642185
#+end_src

16
raid.org
Normal file
@ -0,0 +1,16 @@

#+title: RAID
#+setupfile: ../org-templates/page.org

** Mount RAID1 mirror
Given a mirror consisting of:
- ~/dev/sda1~
- ~/dev/sdb1~

Assemble the RAID array:
#+begin_src shell
sudo mdadm --assemble --run /dev/md0 /dev/sda1 /dev/sdb1
#+end_src

Mount the RAID device:
#+begin_src shell
sudo mount /dev/md0 /mnt
#+end_src
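
Check the array status:
#+begin_src shell
cat /proc/mdstat
#+end_src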

47
re-hd.org
Normal file
@ -0,0 +1,47 @@

#+title: Resident Evil HD
#+setupfile: ../org-templates/page.org

** Installation
1. Download [[https://archive.org/details/resident-evi-classicl-triple-pack-pc.-7z][Resident Evil Classic Triple Pack PC]] from archive.org. This contains the Sourcenext versions of all three games.

2. Install all three games using their installers.

3. Download the following files:
   - [[https://classicrebirth.com/index.php/download/biohazard-pc-cd-rom-patch-version-1-01/][Biohazard PC CD-ROM Mediakite patch version 1.01]]
   - [[https://classicrebirth.com/index.php/downloads/resident-evil-classic-rebirth/][Resident Evil Classic REbirth]]
   - [[https://classicrebirth.com/index.php/downloads/resident-evil-2-classic-rebirth/][Resident Evil 2 Classic REbirth]]
   - [[https://classicrebirth.com/index.php/downloads/resident-evil-3-classic-rebirth/][Resident Evil 3 Classic REbirth]]
   - [[https://archive.org/details/biohazard-mediakite][Biohazard Mediakite]]
   - [[https://www.moddb.com/mods/resident-evil-hd-mod/downloads/resident-evil-hd-mod][Resident Evil HD mod by TeamX]]
   - [[https://www.moddb.com/mods/resident-evil-2-hd-mod/downloads/resident-evil-2-hd-mod][Resident Evil 2 HD mod by TeamX]]
   - [[https://www.moddb.com/mods/resident-evil-3-hd-mod/downloads/resident-evil-3-hd-mod][Resident Evil 3 HD mod by TeamX]]
   - [[https://www.moddb.com/mods/resident-evil-seamless-hd-project/downloads/resident-evil-seamless-hd-project-for-pc-mediakite][Resident Evil Seamless HD Project v1.1]]
   - [[https://www.moddb.com/mods/resident-evil-2-seamless-hd-project/downloads/resident-evil-2-seamless-hd-project-for-pc-sourcenext][Resident Evil 2 Seamless HD Project v2.0]]
   - [[https://www.moddb.com/mods/resident-evil-3-nemesis-seamless-hd-project/downloads/resident-evil-3-nemesis-seamless-hd-project-for-pc-sourcenext][Resident Evil 3: Nemesis Seamless HD Project v2.0]]

4. Open the Biohazard Mediakite disc image with 7zip and drag the JPN folder from the disc into ~C:\Program Files (x86)\Games Retro\Resident Evil Classic~

*** Resident Evil Director's Cut
Extract the following files to ~%ProgramFiles(x86)%\Games Retro\Resident Evil Classic~:
- ~Biohazard.exe~ from Mediakite v1.01
- ~ddraw.dll~ from Resident Evil Classic REbirth
- All from Resident Evil HD mod by TeamX
- All from Resident Evil Seamless HD Project v1.1

*** Resident Evil 2
Extract the following files to ~%ProgramFiles(x86)%\Games Retro\BIOHAZARD 2 PC~:
- ~ddraw.dll~ from Resident Evil 2 Classic REbirth
- All from Resident Evil 2 HD mod by TeamX
- All from Resident Evil 2 Seamless HD Project v2.0

*** Resident Evil 3: Nemesis
Extract the following files to ~%ProgramFiles(x86)%\Games Retro\BIOHAZARD 3 PC~:
- ~ddraw.dll~ from Resident Evil 3 Classic REbirth
- All from Resident Evil 3 HD mod by TeamX
- All from Resident Evil 3: Nemesis Seamless HD Project v2.0

** Testing
Test each game by launching it with the following config changes:
- Resolution 1280x960
- RGB88 colors
- Disable texture filtering

20
retropie.org
Normal file
@ -0,0 +1,20 @@

#+title: RetroPie
#+setupfile: ../org-templates/page.org

** Bluetooth: protocol not available
Install the PulseAudio Bluetooth module:
#+begin_src shell
sudo apt install pulseaudio-module-bluetooth
#+end_src

Add to the ~[Service]~ section of ~/lib/systemd/system/bthelper@.service~:
#+begin_src systemd
ExecStartPre=/bin/sleep 4
#+end_src

Then restart the Bluetooth stack:
#+begin_src shell
sudo systemctl start sys-subsystem-bluetooth-devices-hci0.device
sudo hciconfig hci0 down
sudo killall pulseaudio
systemctl --user enable --now pulseaudio.service
sudo systemctl restart bluetooth.service
#+end_src

22
systemd.org
Normal file
@ -0,0 +1,22 @@

#+title: Systemd
#+setupfile: ../org-templates/page.org

** Mount NFS share
Create a unit file at ~/etc/systemd/system/mnt-backup.mount~. The name of the unit file must match the ~Where~ directive. Ex. ~Where=/mnt/backup~ --> ~mnt-backup.mount~.
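
When in doubt, ~systemd-escape~ can derive the correct unit name from the mount point:
#+BEGIN_SRC shell
systemd-escape -p --suffix=mount /mnt/backup
# prints: mnt-backup.mount
#+END_SRC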

#+BEGIN_SRC systemd
[Unit]
Description=borgbackup NFS share from TrueNAS (10.0.0.81)
DefaultDependencies=no
Conflicts=umount.target
After=network-online.target remote-fs.target
Before=umount.target

[Mount]
What=10.0.0.81:/mnt/coffeeNAS/backup
Where=/mnt/backup
Type=nfs
Options=defaults

[Install]
WantedBy=multi-user.target
#+END_SRC
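
Then enable and start the mount:
#+BEGIN_SRC shell
sudo systemctl daemon-reload
sudo systemctl enable --now mnt-backup.mount
#+END_SRC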

237
voidlinux.org
Normal file
@ -0,0 +1,237 @@

#+title: Void Linux
#+setupfile: ../org-templates/page.org

** Install on encrypted Btrfs
Source: [[https://gist.github.com/gbrlsnchs/9c9dc55cd0beb26e141ee3ea59f26e21][Void Linux Installation Guide]]

First, update xbps.
#+begin_src shell
xbps-install -Syu xbps
#+end_src

*** Partition disk
Install ~gptfdisk~.
#+begin_src shell
xbps-install -Sy gptfdisk
#+end_src

Run gdisk.
#+begin_src shell
gdisk /dev/nvme1n1
#+end_src

Create the following partitions:
| Partition Type | Size            |
|----------------+-----------------|
| EFI            | +600M           |
| boot           | +900M           |
| root           | Remaining space |

Create the filesystems.
#+begin_src shell
mkfs.vfat -nBOOT -F32 /dev/nvme1n1p1
mkfs.ext4 -L grub /dev/nvme1n1p2
cryptsetup luksFormat --type=luks -s=512 /dev/nvme1n1p3
cryptsetup open /dev/nvme1n1p3 cryptroot
mkfs.btrfs -L void /dev/mapper/cryptroot
#+end_src

Mount partitions and create Btrfs subvolumes.
#+begin_src shell
mount -o defaults,compress=zstd:1 /dev/mapper/cryptroot /mnt
btrfs subvolume create /mnt/root
btrfs subvolume create /mnt/home
umount /mnt
mount -o defaults,compress=zstd:1,subvol=root /dev/mapper/cryptroot /mnt
mkdir /mnt/home
mount -o defaults,compress=zstd:1,subvol=home /dev/mapper/cryptroot /mnt/home
#+end_src

Create Btrfs subvolumes for parts of the filesystem to exclude from snapshots. Nested subvolumes are not included in snapshots.
#+begin_src shell
mkdir -p /mnt/var/cache
btrfs subvolume create /mnt/var/cache/xbps
btrfs subvolume create /mnt/var/tmp
btrfs subvolume create /mnt/srv
btrfs subvolume create /mnt/var/swap
#+end_src

Mount the EFI and boot partitions.
#+begin_src shell
mkdir /mnt/efi
mount -o rw,noatime /dev/nvme1n1p1 /mnt/efi
mkdir /mnt/boot
mount -o rw,noatime /dev/nvme1n1p2 /mnt/boot
#+end_src

*** Base system installation
If using ~x86_64~:
#+begin_src shell
REPO=https://mirrors.hyperreal.coffee/voidlinux/current
ARCH=x86_64
#+end_src

If using musl:
#+begin_src shell
REPO=https://mirrors.hyperreal.coffee/voidlinux/current/musl
ARCH=x86_64-musl
#+end_src

Install the base system.
#+begin_src shell
XBPS_ARCH=$ARCH xbps-install -S -R "$REPO" -r /mnt base-system base-devel btrfs-progs cryptsetup vim sudo dosfstools mtools void-repo-nonfree
#+end_src

*** chroot
Mount the pseudo filesystems for the chroot.
#+begin_src shell
for dir in dev proc sys run; do mount --rbind /$dir /mnt/$dir; mount --make-rslave /mnt/$dir; done
#+end_src

Copy DNS configuration.
#+begin_src shell
cp -v /etc/resolv.conf /mnt/etc/
#+end_src

Chroot.
#+begin_src shell
PS1='(chroot) # ' chroot /mnt/ /bin/bash
#+end_src

Set the hostname.
#+begin_src shell
echo "hostname" > /etc/hostname
#+end_src

Set the timezone.
#+begin_src shell
ln -sf /usr/share/zoneinfo/America/Chicago /etc/localtime
#+end_src

Synchronize the hardware clock.
#+begin_src shell
hwclock --systohc
#+end_src

If using glibc, uncomment ~en_US.UTF-8~ from ~/etc/default/libc-locales~. Then run:
#+begin_src shell
xbps-reconfigure -f glibc-locales
#+end_src

Set the root password.
#+begin_src shell
passwd root
#+end_src

Configure ~/etc/fstab~.
#+begin_src shell
UEFI_UUID=$(blkid -s UUID -o value /dev/nvme1n1p1)
GRUB_UUID=$(blkid -s UUID -o value /dev/nvme1n1p2)
ROOT_UUID=$(blkid -s UUID -o value /dev/mapper/cryptroot)

cat << EOF > /etc/fstab
UUID=$ROOT_UUID / btrfs defaults,compress=zstd:1,subvol=root 0 1
UUID=$UEFI_UUID /efi vfat defaults,noatime 0 2
UUID=$GRUB_UUID /boot ext4 defaults,noatime 0 2
UUID=$ROOT_UUID /home btrfs defaults,compress=zstd:1,subvol=home 0 2
tmpfs /tmp tmpfs defaults,nosuid,nodev 0 0
EOF
#+end_src

Set up Dracut. A "hostonly" install means that Dracut will generate a lean initramfs with everything you need.
#+begin_src shell
echo "hostonly=yes" >> /etc/dracut.conf
#+end_src

If you have an Intel CPU:
#+begin_src shell
xbps-install -Syu intel-ucode
#+end_src

Install GRUB.
#+begin_src shell
xbps-install -Syu grub-x86_64-efi os-prober
grub-install --target=x86_64-efi --efi-directory=/efi --bootloader-id="Void Linux"
#+end_src

If you are dual-booting with another OS:
#+begin_src shell
echo "GRUB_DISABLE_OS_PROBER=0" >> /etc/default/grub
#+end_src

Set up the encrypted swapfile.
#+begin_src shell
truncate -s 0 /var/swap/swapfile
chattr +C /var/swap/swapfile
chmod 600 /var/swap/swapfile
dd if=/dev/zero of=/var/swap/swapfile bs=1G count=16 status=progress
mkswap /var/swap/swapfile
swapon /var/swap/swapfile

RESUME_OFFSET=$(btrfs inspect-internal map-swapfile -r /var/swap/swapfile)
cat << EOF >> /etc/default/grub
GRUB_CMDLINE_LINUX="resume=UUID=$ROOT_UUID resume_offset=$RESUME_OFFSET"
EOF
#+end_src

Regenerate configurations.
#+begin_src shell
xbps-reconfigure -fa
#+end_src

Install Xorg and Xfce.
#+begin_src shell
xbps-install -Syu xorg xfce4
#+end_src

If you have a recent Nvidia GPU:
#+begin_src shell
xbps-install -Syu nvidia
#+end_src

Add a user.
#+begin_src shell
useradd -c "Jeffrey Serio" -m -s /usr/bin/zsh -U jas
passwd jas
echo "jas ALL=(ALL) NOPASSWD: ALL" | tee -a /etc/sudoers.d/jas
#+end_src

Enable system services.
#+begin_src shell
for svc in "NetworkManager" "crond" "dbus" "lightdm" "ntpd" "snapperd" "sshd"; do
    ln -sf /etc/sv/$svc /var/service;
done
#+end_src

Disable bitmap fonts.
#+begin_src shell
ln -sf /usr/share/fontconfig/conf.avail/70-no-bitmaps.conf /etc/fonts/conf.d/
xbps-reconfigure -f fontconfig
#+end_src

Set up the package repository.
#+begin_src shell
echo "repository=https://mirrors.hyperreal.coffee/voidlinux/current" | tee /etc/xbps.d/00-repository-main.conf

# For musl
echo "repository=https://mirrors.hyperreal.coffee/voidlinux/current/musl" | tee /etc/xbps.d/00-repository-main.conf
#+end_src

Set up Pipewire for audio.
#+begin_src shell
mkdir -p /etc/pipewire/pipewire.conf.d
ln -sf /usr/share/examples/wireplumber/10-wireplumber.conf /etc/pipewire/pipewire.conf.d/
ln -sf /usr/share/applications/pipewire.desktop /etc/xdg/autostart/
#+end_src

Generate configurations.
#+begin_src shell
xbps-reconfigure -fa
#+end_src

Exit the chroot, unmount the disks, and reboot.
#+begin_src shell
exit
umount -lR /mnt
reboot
#+end_src

27
windows.org
Normal file
@ -0,0 +1,27 @@

#+title: Microsoft Windows
#+setupfile: ../org-templates/page.org

** Repair boot files
- Download the Windows 11 ISO from Microsoft and write it to USB.
- Boot into the Windows setup utility.
- Select Repair computer -> Troubleshoot -> Advanced -> Cmd prompt

This procedure assumes the following:
- the main disk is ~disk 0~
- the EFI partition is ~part 1~
- the Windows OS drive letter is ~c:~

The following commands will format the old EFI partition, mount it to ~s:~, and copy the boot files to it:
#+begin_src shell
diskpart
> list disk
> sel disk 0
> list part
> sel part 1
> format fs=fat32 quick label=System
> list vol
> exit
mountvol S: /S
bcdboot c:\windows /s s: /f UEFI /v
exit
#+end_src

29
zfs.org
Normal file
@ -0,0 +1,29 @@

#+title: ZFS
#+setupfile: ../org-templates/page.org

** Difference between scrub and resilver
Lifted from [[https://serverfault.com/users/246427/haravikk][Haravikk]] on [[https://anonoverflow.hyperreal.coffee/exchange/serverfault.com/questions/1007438/zfs-scrub-vs-resilver-are-they-equivalent][ServerFault]].

#+BEGIN_QUOTE
The main scrubbing and resilvering processes in ZFS are essentially identical – in both cases records are being read and verified, and if necessary written out to any disk(s) with invalid (or missing) data.

Since ZFS is aware of which records a disk should have, it won't bother trying to read records that shouldn't exist. This means that during resilvering, new disks will see little or no read activity as there's nothing to read (or at least ZFS doesn't believe there is).

This also means that if a disk becomes unavailable and then available again, ZFS will resilver only the new records created since the disk went unavailable. Resilvering happens automatically in this way, whereas scrubs typically have to be initiated (either manually, or via a scheduled command).

There is also a special "sequential resilver" option for mirrored vdevs that can be triggered using zpool attach -s or zpool replace -s – this performs a faster copy of all data without any checking, and initiates a deferred scrub to verify integrity later. This is good for quickly restoring redundancy, but should only be used if you're confident that the existing data is correct (you run regular scrubs, or scrubbed before adding/replacing).

Finally there are some small differences in settings for scrub and resilver - in general a resilver is given a higher priority than a scrub since it's more urgent (restoring/increasing redundancy), though due to various factors this may not mean a resilver is faster than a scrub depending upon write speed, number of record copies available etc.

For example, when dealing with a mirror a resilver can be faster since it doesn't need to read from all disks, but only if the new disk is fast enough (can be written to at least as quickly as the other disk(s) are read from). A scrub meanwhile always reads from all disks, so for a mirror vdev it can be more intensive. For a raidz1 both processes will read from all (existing) disks, so the resilver will be slower as it also requires writing to one, a raidz2 doesn't need to read all disks so might gain a little speed and so-on.

Basically there's no concrete answer to cover every setup. 😉

Specifically with regards to the original question:

If you know a disk has failed and want to replace it, and are using a mirrored vdev, then a sequential resilver + scrub (zpool replace -s) will be faster in terms of restoring redundancy and performance, but it'll take longer overall before you know for sure that the data was fully restored without any errors since you need to wait for the deferred scrub. A regular resilver will take longer to finish copying the data, but is verified the moment it finishes.

However, if you're talking about repairing data on a disk you still believe to be okay then a scrub is the fastest option, as it will only copy data which fails verification, otherwise the process is entirely reading and checking so it's almost always going to be faster.

In theory a resilver can be just as fast as a scrub, or even faster (since it's higher priority), assuming you are copying onto a suitably fast new drive that's optimised for continuous writing. In practice though that's usually not going to be the case.
#+END_QUOTE