* httpd.service no longer uses the apachectl script and now uses the httpd executable directly (see the sketch after this list).
* the netplan 00-installer-config.yaml file had a forgotten change for one of the network adapters.
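As a sketch of what the first change amounts to: a unit that execs the binary directly instead of going through apachectl. The paths, options, and service type below are assumptions, not the actual unit on this server:

```ini
# /etc/systemd/system/httpd.service -- minimal sketch; paths and options are assumptions
[Unit]
Description=Apache HTTP Server
After=network.target

[Service]
Type=forking
# Previously something like: ExecStart=/usr/local/apache2/bin/apachectl start
ExecStart=/usr/local/apache2/bin/httpd -k start
ExecReload=/usr/local/apache2/bin/httpd -k graceful
ExecStop=/usr/local/apache2/bin/httpd -k graceful-stop

[Install]
WantedBy=multi-user.target
```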
For the network setup, it's quite jank: IPv4 is handled by netplan and IPv6 is handled by systemd-networkd. This is due to a historical (2017) setup. An initial IPv4 migration to systemd-networkd actually broke things a few years ago (around 2019), and I haven't attempted to fix it since. It should be possible now, though. I also learned not to uninstall netplan at that time...
Ubuntu Server, like a lot of distros, used to store network configuration in a single file: /etc/network/interfaces. Starting and stopping interfaces was handled differently back then, too (ifup/ifdown).
With version 17.10, they changed it to netplan. Both IPv4 and IPv6 were handled there, with configs stored under the /etc/netplan directory. Starting and stopping network interfaces changed as well.
With version 19.04, they changed it again to systemd-networkd. They helpfully migrated the configs to the appropriate /etc/systemd/network directory, with 'netplan' in the filenames. (Uninstalling netplan would be a bad idea here; you need renderer: networkd in a netplan .yaml file instead.)
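To illustrate, a minimal netplan file that delegates an interface to networkd looks something like this (the interface name and addresses are made up):

```yaml
# /etc/netplan/00-installer-config.yaml -- minimal sketch; names and addresses are made up
network:
  version: 2
  renderer: networkd
  ethernets:
    enp3s0:
      addresses:
        - 192.0.2.10/24
      routes:
        - to: default
          via: 192.0.2.1
      nameservers:
        addresses: [192.0.2.1]
```

Running netplan apply regenerates the backend config and brings the interface up.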
--
That same year, I spent over 30 minutes attempting to migrate IPv4 over (IPv6 was successfully migrated and disabled in netplan) but eventually gave up, mainly because I didn't know the systemd-networkd file format back then. So this pretty jank network setup that stayed for six years is the result of the double migration of interfaces.
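The format turns out to be plain INI. A minimal .network file carrying a static IPv4 setup would look something like this (filename, interface, and addresses are made up):

```ini
# /etc/systemd/network/10-static-ipv4.network -- minimal sketch; values are made up
[Match]
Name=enp3s0

[Network]
Address=192.0.2.10/24
Gateway=192.0.2.1
DNS=192.0.2.1
```

On newer systemd versions, networkctl reload picks the file up; otherwise, restarting systemd-networkd does.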
This actually delays the migration by about a week. The drive may also need a firmware update - specifically, for a 'may occasionally not boot' scenario.
I also had an idea: relocating the database to another drive... again. I did this once before and it was a success, but that was with 2x 80GB IDE drives as a dm-integrity RAID 1 test.
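For reference, a dm-integrity-backed RAID 1 along those lines can be built with integritysetup plus mdadm. A minimal sketch, with the device names made up:

```sh
# Minimal sketch of a dm-integrity RAID 1; /dev/sdb, /dev/sdc, and names are made up.
# Add a dm-integrity layer to each member first (this WIPES the devices):
integritysetup format /dev/sdb
integritysetup format /dev/sdc
integritysetup open /dev/sdb int0
integritysetup open /dev/sdc int1

# Then assemble a RAID 1 on top of the integrity mappings:
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/mapper/int0 /dev/mapper/int1
mkfs.ext4 /dev/md0
```

The integrity layer turns silent corruption on a member into a read error, which the mirror then repairs from the other copy.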
The database is on the same drive setup as the rest of the webserver data... which is not the system drive.
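When the relocation actually happens, the move itself is mostly a datadir change. A minimal sketch, assuming a systemd-managed MariaDB and a made-up mount point /mnt/dbssd:

```sh
# Minimal sketch; /mnt/dbssd and the config path are assumptions.
systemctl stop mariadb
rsync -a /var/lib/mysql/ /mnt/dbssd/mysql/
# Then point MariaDB at the new location, e.g. in a server .cnf:
#   [mysqld]
#   datadir = /mnt/dbssd/mysql
systemctl start mariadb
```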
The drive arrived today, and it's pretty abused: ~53% life left. This is fine by me, since this drive will not get a lot of writes from me anyway.
My current Silicon Power MLC SSD's life left is 32%. 😏
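Checking wear is a one-liner with smartmontools; note that the exact attribute name (SSD_Life_Left, Media_Wearout_Indicator, etc.) varies by vendor:

```sh
# Minimal sketch; the device node and attribute names vary by drive.
smartctl -A /dev/sda | grep -Ei 'wear|life|percent'
```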
Anyway, the new drive also sorely needed a firmware update, which fixes the following:
- issue where the solid-state drive (SSD) does not resume from low-power mode
- issue where the system fails to boot when it is not turned on for a long period
I also updated the firmware of a Toshiba MQ01ACF050 500GB 7200RPM drive due to the following:
- Improved the shock handling capability of the hard drive in power-on state.
This server will be offline May 2, 2026 for the system migration.
I touched up the diodes in the battery circuit on the Supermicro board, thoroughly cleaned it up, and powered it on. The display happened to glitch up at that time, so instead of rebooting the system, I turned the display off and back on again... and the display was normal again.
I have two of these 20+ year old displays, and this panel is the first of the two... and it happens to have rare display issues, but the panel has no dying pixels, and the bezel is fine. It also has a matching back. The second one has (barely noticeable) dying pixels, but it functions perfectly otherwise... but it has a dent in the top right corner. It also has no matching back.
I'm going to swap the power board and T-con from the second one into the first and retire the second display. Dad has the second one (which had replaced the first), so the repaired display goes back to him after this.
I'll have to retrieve another display for the retro setup, it seems.
And... it also seems that I finally managed to get the board to keep its BIOS settings.
That literally took over an hour to solve.
It started with the board not responding to the power button; eventually I walked off to retrieve something (over two minutes), tried again, and realized it had powered on...
Set the time and date, exit, power off the board and then the PSU... wait ~2 minutes, power on the PSU, then the board. It forgot its settings.
Repeat this ad nauseam between testing the RAM for errors and checking CPU temps. Oh, and the display would glitch up at times. I may have to retire that particular old Sony SDM-HS94P monitor.
Earlier, I disabled the BMC by jumper; the BMC was still zombie-enabled, but now the BIOS would no longer acknowledge it. Odd.
Eventually jumpered the Chassis Intrusion setting and then used the Set User Default settings in BIOS. From there, BMC was no longer running...
The board is in standby; it needs a SATA drive before the system migration.
Anyway, I found some missing BMC firmware and one newer BMC firmware version on the Internet. I am also obtaining the rest of the driver CDs, since an earlier attempt was made back when I didn't even have enough storage space in general.
Forgot to mention: found one questionable file (possibly slightly corrupt). It was a driver CD image. Moved that one to 'questionable'.
One BMC/IPMI firmware was repacked, and a few third-party source files were loose, so I repacked them into RAR files.
There was actually one official BMC/IPMI .zip file that had been packed a little over a week before the one with the known-good checksum, so it was moved to 'old'. The files inside matched the known-good archive, too.
Found out the other old driver CD image is questionable as well. Going to have to re-download it. So both 2021-era driver CD image downloads have mismatched checksums...
I was just made aware that the Rev. C2 version of the AMD Phenom II X4 945 can be a 125W TDP part (most are 95W TDP), so that part is correct. It still does not account for the 95W Rev. C2 or C3 versions of the CPU, though...
I went through the trouble of setting up a test server on a MiniPC (Intel NUC NUC6AYH), and it does run there. Though I forgot to run lscpu to check the instruction flags... so I looked the CPU up online, and it does indeed have the flag.
I then actually compiled the source on an even older Dell Latitude D830 and executed it. Same illegal instruction (SIGILL) as on the aging server. It does not have the flag.
It does run on my main laptop that has a high-end Sandy Bridge CPU, which has the flag.
--
From the AMD server and main laptop logs:
MariaDB 11.4: InnoDB: Using generic crc32 instructions
MariaDB 11.8: InnoDB: Using crc32 + pclmulqdq instructions
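Judging by those InnoDB lines, the flag in question is presumably pclmulqdq (my assumption; the accelerated path may also want sse4.2). Checking for it is quick:

```sh
# Check for the carry-less multiply flag (assumed to be the one that matters here).
grep -om1 pclmulqdq /proc/cpuinfo
```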
However, I found an old .tar image of the server 'www' data from September 6, 2023, taking up ~548 GiB on the www-data RAID setup. (The final one is .xz'ed, taking up ~449 GiB on the NAS.) Thought I purged it years ago, lol.
There's also an old HP laptop disk image on there from 2014 that I never removed, taking up ~109 GiB; it was copied to the NAS years later.
Another ~800 GiB is taken up by a bunch of TeknoParrot ROMs just sitting there as well.
Very old versions of MySQL sit there too; I can no longer use them since the move to MariaDB.
There's more decade-plus-old cruft on that setup, dating back to when the server used a 4x 250GB RAID 0+1 array in the early 2010s. Removing a lot of it would probably free up ~2 TiB of space.
Somehow, one of the PHP-FPM configs wouldn't stick (I changed the listen user/group entries to match what httpd uses, but it stayed stuck on the default nobody...). I had to edit it locally rather than from my laptop. 🤨
PHP 7.4.33: FPM now consistently runs on Unix sockets instead of TCP/IP host:port, meaning connections to PHP are even faster. This was mainly a matter of merging out-of-date configurations that I had thought were up to date. [It was originally CGI for the longest time -> FastCGI for testing -> FPM over TCP/IP -> FPM over Unix sockets]
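For illustration, the relevant pool settings look something like this (the socket path and the apache user/group are assumptions; they just need to match what httpd runs as):

```ini
; PHP-FPM pool config -- minimal sketch; paths and user/group are assumptions
[www]
listen = /run/php-fpm/www.sock
; Without these, the socket is created with the default user/group (nobody).
listen.owner = apache
listen.group = apache
listen.mode = 0660
```

On the httpd side, mod_proxy_fcgi can then target the socket with SetHandler "proxy:unix:/run/php-fpm/www.sock|fcgi://localhost".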
Other notes: MySQL/MariaDB did run over TCP/IP host:port because Unix sockets historically broke at random, but that has been stable for at least a year.
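For reference, both connection modes boil down to a few lines of server config (paths are assumptions):

```ini
# MariaDB server config -- minimal sketch; paths are assumptions
[mysqld]
# Unix socket for local connections:
socket = /run/mysqld/mysqld.sock
# TCP/IP, which this server historically relied on:
bind-address = 127.0.0.1
port = 3306
```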
As a result, I have finally upped the bandwidth limits from 786KB/1MB (rest of the main site) and 393KB/512KB (certain download sections) to ~5MB and ~2.5MB respectively.
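As a sketch of what such a limit looks like, assuming it's done with Apache's mod_ratelimit (the actual mechanism here may differ; rate-limit is in KiB/s):

```apacheconf
# Minimal sketch, assuming mod_ratelimit; the path and value are illustrative.
<Location "/downloads">
    SetOutputFilter RATE_LIMIT
    SetEnv rate-limit 2560
</Location>
```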
The other thing was an Intel Core i3 swap: 12100 -> 13100... but two pins located at the bottom-center of the socket on the board in question were quite bent. I managed to fix them perfectly, though.