"I want to download the entire collection. How do I do that?"

Good question! While looking into how to download a site myself I came across a method that you, yes you, can use.

NOTICE: I HAVE ONLY TESTED THIS ON LINUX (MINT). I HAVE BEEN TOLD THAT IT WORKS ON MAC AS WELL, BUT I HAVE NO IDEA IF THAT IS TRUE. AS FOR WINDOWS, I'M SORRY I HAVE NO IDEA.

This tutorial is written for Linux Mint. I assume it will work on other distros, but please do your own research first if you're unsure.

I have done this myself with another site and believe this will work on any web page.

What you will need

A computer or laptop with Linux as its operating system (again, I've heard this works on macOS too, but I've never even touched a Mac, so don't quote me on this)
An internet connection
A basic understanding of the terminal
I did this on my laptop 'IAN', running Linux Mint 6 "Faye" (Cinnamon 6.4.8) with barely 3GB of RAM, that I use for storage. So yes, even your shit brick can do this.

Step 1: Software

Wget is a terminal program (meaning it has no graphical interface), and it's what this whole tutorial is built around. There are other tools that can do the same job, but what I tell you here won't apply to those. Linux Mint comes preinstalled with wget, so if you are on that OS you should already have it. If not, type the following command: "sudo apt install wget". If you've installed packages before, you'll know this is just standard. If you already have it, your computer will tell you so. Install commands on other distros are different, so look up a tutorial online. I'm only covering Mint as that's what I'm familiar with and feel comfortable writing this tutorial about.
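If you're not sure whether wget is already there, a quick check like this will tell you before you try installing anything (just a sketch; the suggested install line assumes a Debian-based distro like Mint):

```shell
# Print whether wget is installed; if not, suggest the install command.
if command -v wget >/dev/null 2>&1; then
    echo "wget found: $(command -v wget)"
else
    echo "wget missing - run: sudo apt install wget"
fi
```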

Step 2: Open the site

The browser you use does not matter. I prefer Firefox, as again, it came with my install of Linux Mint. Go to the site and copy the URL. MAKE SURE YOU GET THE PROTOCOL PART (https://) AS WELL. SOME BROWSERS (LIKE CHROME) LIKE TO HIDE IT, BUT YOU NEED TO COPY IT FOR THIS TO WORK.

Step 3: Terminal work

Open your terminal; on most distros it's pinned to the taskbar. If you're on Linux, I assume I don't have to walk you through how to use the terminal, but I've included this just in case.
The command you want to type in is "wget --recursive --no-clobber --page-requisites --convert-links --domains neocities.org --no-parent *LINK TO THE PAGE YOU WANT*". In the place that says *LINK TO THE PAGE YOU WANT*, you paste the URL of the page you want. As you know, my site is divided into sections based on kind, so you can download just one kind of graphic. For instance, you'd paste "https://adriansblinkiecollection.neocities.org/stamps" if you just wanted stamps. Downloading the main page will give you everything. What happens next is your computer will connect to Neocities and start pulling every image. If you haven't used the terminal before, all that text scrolling by can look scary, but I can assure you it's fine. Depending on your internet connection and Neocities' servers, this could take a while. Just let it do its thing and go outside or something.
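Putting it together, here's a small sketch that builds the full command as a string and prints it so you can eyeball it before running it. It uses the stamps URL as the example; swap in whatever page you actually want:

```shell
# Build the mirror command with your chosen page substituted in.
# The stamps URL is just the example from this tutorial - use any page.
URL="https://adriansblinkiecollection.neocities.org/stamps"
CMD="wget --recursive --no-clobber --page-requisites --convert-links --domains neocities.org --no-parent $URL"
echo "$CMD"   # looks right? then paste it into your terminal and run it
```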

Step 4: Accessing the files

So the terminal's done. How do you get to the files? There should be a new folder in whatever directory you ran wget from (for most people that's your home folder, since that's where the terminal opens). If you're on Mint, like me, open Files and press Home if it doesn't automatically open there. You'll see a folder labelled "neocities.org"; click it, then click through the subfolders until you reach the graphics. And congrats! That's all there is to it. Have fun. Though I apologise if I update this place in the future and your copy ends up outdated.
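If you'd rather check from the terminal you already have open, something like this will confirm where the download landed (assuming you ran wget from your home folder; adjust the path if you ran it elsewhere):

```shell
# wget drops the mirrored site into a folder named after the domain,
# inside whatever directory you ran it from (here assumed to be ~).
ls -d ~/neocities.org 2>/dev/null || echo "not there - check the directory you ran wget from"
```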

Clarification

What you are doing here is essentially downloading my site as it is in its current state. This gets you the same result as if you individually clicked on every blinkie, stamp and button on the site and saved them to your computer, minus the headache of doing that. You can do the same by hand if you have an hour or so; this is just a faster method. I've received emails asking if I could send over the entire collection (which I've done, since it's publicly available), which took a lot of hassling with Gmail's size limits and having to split the categories into 10 zip folders.
I obtained this method from the r/DataHoarder subreddit.