This program archives the content of fandom wikis.
It is largely feature-complete, though I still need to add detailed comments describing what each significant piece of code does.
This program doesn't scrape the fandom.com wiki sites directly; instead, it fetches pages through my [[https://wiki.hyperreal.coffee][BreezeWiki]] instance, which avoids downloading ads, unnecessary images, and other junk.
Each resulting archive is self-contained, meaning one can extract its contents and browse the wiki snapshot locally (offline). The URLs for CSS, images, and links in each page are replaced with ~file:///~ URLs pointing to the corresponding files on the local filesystem.
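As an illustration of the link-localization step, here is a minimal sketch (not the program's actual code) of rewriting page links that point at a BreezeWiki instance into ~file:///~ URLs. It assumes each page is saved as ~<slug>.html~ under an archive directory and that BreezeWiki URLs follow the ~<instance>/<wiki-name>/wiki/<slug>~ pattern; the function and parameter names are hypothetical.

```python
import re
from pathlib import Path


def localize_links(html: str, instance: str, archive_dir: str) -> str:
    """Rewrite href attributes that point at the BreezeWiki instance so
    they reference the locally saved copy via file:/// URLs instead.

    Assumes each wiki page was saved as "<slug>.html" in archive_dir.
    """
    base = Path(archive_dir)

    def repl(match: re.Match) -> str:
        # Map the remote page slug to its local saved file.
        local = base / f"{match.group('slug')}.html"
        return f'href="file://{local}"'

    # Match links of the form <instance>/<wiki-name>/wiki/<slug>.
    pattern = rf'href="{re.escape(instance)}/[^/"]+/wiki/(?P<slug>[^"#?]+)"'
    return re.sub(pattern, repl, html)


page = '<a href="https://wiki.hyperreal.coffee/minecraft/wiki/Creeper">Creeper</a>'
print(localize_links(page, "https://wiki.hyperreal.coffee", "/tmp/archive"))
```

A similar pass over ~src~ attributes would handle CSS and image references; anchors and query strings can be stripped or preserved depending on how faithful the offline snapshot needs to be.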