Dead Memory Cards and Using Docker

More often than it feels like it should, something in technology breaks or fails. I find this frustrating, but often ultimately good, because it pushes me to learn something new or to finally clean up something I’ve been meaning to clean up. I have a Raspberry Pi that I’ve been using for a while as a little web server for several things. It’s been running for years, but something gave out on it. Honestly, I’m not entirely sure whether it’s the SD card or the Pi itself, because I’ve had trouble recovering through both. It’s pushed me to try a somewhat different approach.

But first I needed a new SD card. I have quite a few, most of them “in use”. I say “in use” because many are less in use and more underused. This resulted in a bit of a rebuild of some other projects to make better use of my Micro SD cards. The starting point was an 8 GB card with just a basic Raspbian setup on it.

So, for starters, I found that the one in my recently set up music station Raspberry Pi is a whopping 128 GB. Contrary to what one might think, I don’t need a 128 GB card in my music station; the music is stored on the NAS and accessed over the network. It also has some old residual projects on it that should really be cleaned out.

So I stuck the 8 GB card in that device and did the minor setup needed for the music station. Specifically, configuring VLC for remote control over the network, then adding the network share. Once I plugged it back into my little mixer and verified I could play music remotely, I moved on.

This ended up feeding an unrelated side project though, because I had been planning on getting a large, speedy Micro SD card to stick in my Retroid Pocket. So I stuck that 128 GB card in the Retroid and formatted it. This freed up a smaller, 32 GB card.

I also have a 64 GB card that was basically unused in my PiGrrl project, which I decided to reclaim. The project was fun, but the Retroid does the same thing a thousand times better. So now the PiGrrl is mostly just a display piece on a shelf; literally an overpriced paperweight. I don’t want to lose the PiGrrl configuration though, because it’s been programmed to work with the small display and I/O control inputs. So I imaged that card off.
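Imaging a card off is a quick dd job; a minimal sketch below, assuming the card shows up as /dev/sdX (check with lsblk first, the device name here is a placeholder).

# Find the card's device name first
lsblk
# Image the whole card to a file (device name is a placeholder)
sudo dd if=/dev/sdX of=pigrrl-backup.img bs=4M status=progress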

In the end, I didn’t actually need those Micro SD cards though; I opted for an alternative way to replace the Pi, with Docker on my secondary PC. I’ve been meaning to learn Docker better, though I still find it to be a weird and obtuse bit of software. There are a handful of things I used the Pi for that I care about restoring.

  • Youtube DL – There seem to be quite a few nice web interfaces for this that will work much better than my old custom system.
  • WordPress Blog Archives – I have exported data files from this, but I would like to have it running as a WordPress instance again.
  • FreshRSS – My RSS reader. I already miss my daily news feeds.

YoutubeDL was simple; the web interface project I picked provided a nice, basic command sequence to get things working.
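For reference, the general shape of that sequence looks like the following; the image name, port, and download path are placeholders here rather than the exact project I used.

# Run a youtube-dl web interface container (image name, port, and path are placeholders)
docker run -d --name ytdl-web --restart unless-stopped -p 8080:8080 -v /home/user/downloads:/downloads example/youtube-dl-webui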

The others were a bit trickier. Because the old setup died unexpectedly, the data isn’t easily exported for import, which means digging out and recovering from the raw database files. This isn’t the first time this has happened, but it’s a lot bigger pain this time, which isn’t helped by not being entirely confident in how to manipulate Docker.

I still have not gotten the WordPress archive working, actually. I was getting “Connection Reset” errors and now I am getting “Cannot establish Database connection” issues. It may all be for nothing after the troubles I have had recovering FreshRSS.

I have gotten FreshRSS fixed though. Getting it running in Docker was easy peasy. Getting my data back was… considerably less so. It’s been plaguing me for a few weeks now, but I have a solution. It’s not the BEST solution, but it’s… a solution. The core thing I needed was the feeds themselves. Lesson learned, I suppose, and I’m going to find a way to automate a regular dump of the feeds once everything is reloaded. I don’t need or care about favorited articles or the article contents. These were all stored in a MySQL database, and MySQL specifically seems to be what was corrupted and crashed out on the old Pi/instance, because I get a failed message on boot and I can’t get it to reinstall or load anymore.

Well, more accurately, I am pretty sure the root cause is that the SD card died, but it affected the DB files.
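As for automating that regular dump of the feeds, the plan is probably just a cron entry wrapping mysqldump, roughly like the sketch below; the database name, user, and backup path are assumptions based on my setup.

# Nightly dump of the FreshRSS database at 3 AM, added via crontab -e
# (percent signs have to be escaped inside a crontab entry)
0 3 * * * mysqldump -u freshrss -p'PASSWORD' freshrss > /home/user/backups/freshrss_$(date +\%F).sql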

My struggle now is recovering data from these raw files. I’ve actually done this before after a server crash years ago, but this round has led to many, many hurdles. One, 90% of the results when looking up how to do it are littered with unhelpful replies about using a proper SQL dump instead. If I could open MySQL, I sure as hell would do that. Another issue seems to be that the SQL server running on the Pi was woefully out of date, so there have been file compatibility problems.

There is also the issue that the data may just flat out BE CORRUPTED.

So I’ve spun up and tried to manually move the data to probably a dozen instances of MySQL and MariaDB of various versions, on Pis, in Docker, on WSL, in a Linux install. Nothing, and I mean NOTHING, has worked.
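Each Docker attempt looked roughly like the sketch below: point a containerized MariaDB at a copy of the recovered data directory and see how far it gets. The version tag, password, and paths are placeholders.

# Point a containerized MariaDB at a copy of the recovered data directory
docker run -d --name recovery-test \
  -e MARIADB_ROOT_PASSWORD=changeme \
  -v /home/user/recovered-datadir:/var/lib/mysql \
  mariadb:10.3
# Watch the startup logs to see whether it comes up or crash-loops
docker logs -f recovery-test
# If it crash-loops, appending --innodb-force-recovery=1 (or higher) to the run command is worth a try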

I did get the raw data pulled out though.

So I’ve been brute forcing a fix. Opening the .ibd file in a text editor gives a really ugly chunk of funny characters. But strewn throughout this is a bunch of URLs for feeds and websites and, well, mostly that. I did an open “Replace” in Notepad++ that stripped out a lot of the characters. Then I opened up PyCharm and did a find and replace with blanks on a ton of other ugly characters. Then I wrote up this quick and dirty Python script:

# Control F in Notepad++, replace, extended mode "\x00"
# Replace "   " with " "
# replace "https:" with " https:"
# rename to fresh.txt

## Debug and skip asking each time
file = "fresh.txt"
## Open and read the log file supplied
with open(file, encoding="UTF-8") as logfile:
    log = logfile.read()

datasplit = log.split(" ")
links = []

for each in datasplit:
    if "http" in each:
        links.append(each)

with open("output.txt", mode="w", encoding="UTF-8") as writefile:
    for i in links:
        writefile.write(i+"\n")

Which splits everything up into an array, then skims through the array for anything with “http” in it, to pull out anything that is a URL. This has left me with a text file that is full of duplicates and has regular URLs next to feed URLs, though not in EVERY case, because that would be too damn easy. I could probably add a bunch of conditionals to the script to sort out anything with the word “feed”, “rss”, “atom” or “xml” and get a lot of the cruft removed, but FreshRSS does not seem to have a way to bulk import a plain text list, so I still get to manually cut and paste each URL in and re-sort everything into categories.

It’s tedious, but it’s mindless, and it will get done.

Afterwards I will need to set my WordPress autoposter script back up for those little news digests I’ve been sharing that no one cares about.

Slight update: I added some filtering and sorting to the code:

# Control F in Notepad++, replace, extended mode "\x00"
# Replace "   " with " "
# replace "https:" with " https:"
# rename to fresh.txt


## Debug and skip asking each time
file = "fresh.txt"
## Open and read the log file supplied
with open(file, encoding="UTF-8") as logfile:
    log = logfile.read()

datasplit = log.split(" ")
links = []

for each in datasplit:
    if "http" in each:
        if "feed" in each or "rss" in each or "default" in each or "atom" in each or "xml" in each:
            # Trim any leading junk before "http" so duplicates compare cleanly
            link = each[each.find("http"):]
            if link not in links:
                links.append(link)

links.sort()

with open("output.txt", mode="w", encoding="UTF-8") as writefile:
    for i in links:
        writefile.write(i+"\n")

SQL Woes

For the most part, managing my web server is pretty straightforward, especially because I don’t really get a ton of traffic. It’s mostly just keeping things up to date through standard channels.

Occasionally I have a bit of a brain fart moment. I was recently doing regular Linux updates on the server and noticed a message I had seen before about some packages being held back. Occasionally I will go through and update these, because I am not really sure why they are being held back and don’t really see any reason they should be.
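For the record, the held-back dance is roughly the following; a plain upgrade leaves held packages alone, and a full-upgrade is what actually moves them (which is usually where my trouble starts).

sudo apt update
# Shows everything with a newer version available, including held back packages
apt list --upgradable
# A plain upgrade leaves held back packages alone
sudo apt upgrade
# full-upgrade will install them, pulling in new dependencies as needed
sudo apt full-upgrade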

Then MySQL broke.

So I went digging in some logs and searching for solutions, and decided I needed to roll back the version. Following a guide I found, I discovered… I had done this before, which I now vaguely remembered, because the old .deb file was still there from the last time I broke it.

Anyway, this didn’t fix it; MySQL still was not launching.

I decided that maybe it was time to just switch to MariaDB, which I believe is the spiritual successor to MySQL. The process was simple enough; I would not even need to dump my databases. So I uninstalled MySQL, installed MariaDB and… It worked!
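For reference, the swap itself was just a couple of package operations, roughly the following (package names are the stock Ubuntu ones):

sudo apt remove mysql-server
sudo apt install mariadb-server
# Confirm the service is up
sudo systemctl status mariadb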

Then it stopped working.

I restarted the SQL service and it worked!

Then it…. Stopped working… Again…

So I checked the logs again and corrected some issues there, and again it worked; then a half hour or so later it stopped working.

One thing I had come across while troubleshooting the original MySQL issue was that there was a command, mysql_upgrade, that needed to be run to change how some tables are configured. I couldn’t do that before because I couldn’t even get MySQL to run. But I could get MariaDB to run, at least for a bit, and had successfully gotten this upgrade command to run.
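For reference, the upgrade step itself is just a single command run against the live server, followed by a restart, roughly:

sudo mysql_upgrade -u root -p
sudo systemctl restart mariadb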

So I decided to try MySQL once again, so I uninstalled MariaDB and purged everything out, rebooting a few times to be sure. And MySQL would not even install anymore, so more purging, this time including the databases.

One thing I was glad I had decided to do, “just in case”, while MariaDB was “working”, was dump the databases out as backups. I was very glad I did at this point. So with absolutely everything purged, MySQL installed and was working.
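The dumps themselves were nothing fancy, just mysqldump output to files, along these lines (the database name here is an example):

# Dump everything in one go
mysqldump -u root -p --all-databases > all_databases.sql
# Or dump a single database, e.g. the WordPress one
mysqldump -u root -p wordpress > wordpress.sql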

I set about recreating the databases from the dumps, and while I was at it I updated all the passwords, since I had to recreate the user accounts used by WordPress anyway.
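Recreating things from the dumps boils down to making the database, loading the dump, and recreating the application user with its new password. A rough sketch, with example database and user names:

# Recreate the database and load the dump (names are examples)
mysql -u root -p -e "CREATE DATABASE wordpress;"
mysql -u root -p wordpress < wordpress.sql
# Recreate the user WordPress connects with, using the new password
mysql -u root -p -e "CREATE USER 'wpuser'@'localhost' IDENTIFIED BY 'new-password-here';"
mysql -u root -p -e "GRANT ALL PRIVILEGES ON wordpress.* TO 'wpuser'@'localhost'; FLUSH PRIVILEGES;"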

And now everything is working smoothly again.

A couple of links that were actually helpful in solving my problem:

https://stackoverflow.com/questions/67564215/problems-installing-mysql-on-ubuntu-20-04

https://learnubuntu.com/install-mysql/

Fixing Cron Not Executing

Recently I encountered an issue I hadn’t run into before; specifically, my cron jobs were not running. Everything seemed correct and I could manually run the commands at the CLI. I’ve had some issues before with getting things to run because I wasn’t using the complete path to programs, but this seemed to be something different.

The problem I found was that the root password needed to be changed. Running the following:

sudo grep CRON /var/log/syslog

Would output a long list of the same issue repeating over and over.

May 27 10:30:01 Webserver CRON[12943]: Authentication token is no longer valid; new one required
May 27 10:39:01 Webserver CRON[12978]: Authentication token is no longer valid; new one required
May 27 10:39:01 Webserver CRON[12977]: Authentication token is no longer valid; new one required
May 27 10:40:01 Webserver CRON[13049]: Authentication token is no longer valid; new one required

Running the following command:

sudo chage -l root

Would output something like:

Password expires               : never
Password inactive              : never
Account expires                : never

Which suggests the root password was never set to expire. So I ran the following command:

sudo passwd root

And set a new root password (which was the same as the old root password) and suddenly everything started working again. It felt like a really odd issue, especially considering I didn’t actually change the password, and as far as I could tell I had a root password. Plus the password wasn’t set to expire at all.

Anyway, I wrapped things up by doing an (optional) truncation of the system log, since the file had become unwieldy and huge, with the following:

sudo truncate -s 0 /var/log/syslog

Migrating Mail-In-A-Box to a New VPS

A few years ago, I started running my own mail server using Mail-In-A-Box. Four years or so actually, if the age of my old server is accurate. I have several different email addresses, mostly to better segment out content. I have done this with Reddit, and Twitter, and TT-RSS, and probably other things. In my Mail-In-A-Box I run email for three domains: two of mine, one for my wife. Over time I may eventually migrate all of my email to it; at this point, I am a little worried about being blacklisted, so I mostly use it for secondary, receive-only email aggregation.

For a while I’ve been putting off migrating the system to a new VPS. It’s been running on Ubuntu 14.04 since it was created. Newer MiaB won’t run on 14.04, and I can’t distro-upgrade the machine. The only choice is to roll a new VPS and migrate the mail.

I use Digital Ocean for my online services; feel free to sign up with the link in the sidebar if you want, I get a little kickback if you do. It’s easy to use and affordable. Plus, in cases like this, I can spin up an extra VPS, then easily destroy it and spin up a new one when I discover that MiaB only works up through 18.04, so the 20.04 image I used initially won’t work. Also, having the extra server just means a temporary bump in my billing for the month.

The basic process for migrating Mail-In-A-Box is here, in the official documentation. I had a few hiccups along the way but I got them ironed out.

First step was creating the new machine. As I mentioned above, I first made a 20.04 machine, but found that doesn’t work, so I killed that and made a new 18.04 machine. Before anything else, I did a few security-based housecleaning tasks. The server was created with shared-key login set up, but it only had a root account. So I created a new user and made them a sudoer. I also copied the SSH keys from root to the user.

adduser Username
usermod -aG sudo Username
cp -r ~/.ssh /home/Username/
chown -R Username:Username /home/Username/.ssh

Next step was to add the new user to the SSH users and secure that access.

sudo pico /etc/ssh/sshd_config

Then edit:

#Port 22

To a custom port and change:

PermitRootLogin no

Finally add:

AllowUsers Username

Lastly, restart the SSH server with sudo service sshd restart. Then test the connection using the regular user. If that works, disconnect from the root session and continue as the regular user.

I was doing an upgrade, but the fresh install guide is here. All I needed was the setup line really, which takes a minute to run but does an initial setup of Mail-in-a-Box.

curl -s https://mailinabox.email/setup.sh | sudo -E bash

The next part was the trickiest bit. I linked the migration article above, but I ended up trying to simplify things a bit. On the old machine, I stopped the mailinabox service so no new mail would come in, then ran the backup Python script as described in the article above. I found it was easiest to just connect to the server using FileZilla over SSH FTP, which meant importing my keys to FileZilla. It’s in the settings under SFTP. Something to keep in mind if you set a custom port: you’ll need to add sftp:// before the IP address.

Things are a little tricky here, since root owns the backup folder. I ended up doing a sudo copy into my user home directory, then a chown on the folder to give my user account access to it. This meant FileZilla could see the folder and download it to my local machine. There are ways to directly transfer between the new and old server, but between custom ports and SSH keys and permissions, I found it was easiest just to download to my local laptop. Afterwards, I connected over SFTP to the NEW server and pushed the backup folder to it. You need the whole folder with the “secret_key” text file and the encrypted folder and files. Basically, this is all the settings and emails.
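For what it’s worth, a direct server-to-server copy is doable with rsync over SSH; a sketch below, where the port, key, and paths are assumptions matching my setup.

# Push the backup folder straight from the old server to the new one (port, key, and paths are assumptions)
rsync -avz -e "ssh -p 2222 -i ~/.ssh/id_rsa" /home/Username/backup/ Username@NEW.SERVER.IP:/home/Username/backup/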

Next step was to SSH into the new server, go to the freshly uploaded backup directory, and import the old files, as described in the link. These are two commands, run separately.

export PASSPHRASE=$(cat secret_key.txt)

sudo -E duplicity restore --force file:///home/Username/backup/encrypted /home/user-data/

This takes a minute to run. The next step listed is to rerun the mailinabox set up with “sudo mailinabox”.

I had trouble here. Nginx would not restart. After some troubleshooting, I found it was an issue with SSL. Basically, what seemed to happen was the restore pulled in the old SSL certs. Or maybe it was looking for the old SSL certs. Whatever the case, the fix was this process.

rm -rf /home/user-data/ssl/*

That deletes the old SSL certificates; then run “sudo mailinabox” and everything started up. I verified I could log into the control panel and the mailbox using the IP address of the new server. I verified that all my custom DNS records existed. These are needed since the glue records point to the Mail-In-A-Box machine, but because I host my websites on a separate machine, I have to have DNS records set up appropriately.

One thing I noticed was the SSL certificates seemed to be wrong, which meant things worked but would cause annoying security messages. I am not sure if this was related to deleting the certs above, or just that it was still looking for the old IP address. Whatever the case, I did a manual update with certbot for my MiaB subdomain using

sudo certbot certonly --force-renewal -d Subdomain.Domain.com

Another minor issue I ran into: doing this needs to drop a file either in the webroot folder, or spin up a temporary web server to host its own file. I couldn’t find the webroot for the custom MiaB setup (it was not /var/www/html), so I temporarily ran “sudo service nginx stop”, then ran the above certbot command using the temporary webserver option, then “sudo service nginx start” to restart Nginx. Nginx had to be stopped since otherwise it is using port 80, and the temporary server can’t start to run the certificate verification process.
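In other words, the sequence was roughly this (certbot’s standalone mode is the “temporary webserver” option):

sudo service nginx stop
# --standalone spins up certbot's own temporary webserver on port 80
sudo certbot certonly --standalone --force-renewal -d Subdomain.Domain.com
sudo service nginx start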

Another note: I am not sure if the --force-renewal option is needed above. It didn’t throw out any errors and it fixed the issue, so I left it.

The final step was to go to my Domain Registrar and update the name servers and Glue Records to point to the new Server IP. After a short bit of waiting, eventually the mail server URL connected to the admin and web consoles. I did some test send and receive of emails between my server and gmail to verify everything was working properly. One nice bit, the newer MiaB has a different interface for Roundcube webmail, so I could easily tell if I was going to the new or old server.

Once everything was satisfactory, I went back to Digital Ocean and powered down the old server. If everything is still working in a few days, I will destroy the old server so I don’t have to keep paying upkeep on it. One thing to keep in mind: both the old and new servers require a specific hostname, so they will be named the same, so double check that you are powering down and deleting the correct server. Some easy ways to verify are IP address or server age. The old server is several years old, but the new server is several days old.

Tools I Use: Netscan and Fing

I wanted to do some occasional posts on some tools I use for various technical tasks.  Partially just to suggest some useful stuff, partially so I have some posts to reference anytime I reference said stuff.

I wanted to start off with Netscan and Fing, which serve the same basic purpose on two different platforms.  Both of these tools will scan the local IP range and return a list of every device connected to the network.  Netscan is what I use on Windows; Fing is what I use on Android.

I use these tools very frequently, several times a week on average.  So what use is scanning the local network anyway?  I have two main uses, though both come down to Device Discovery.

Firstly, basic device discovery.  I’ve hooked something new to the network and I need to access it.  A lot of what I connect is headless, with no easy way of discovering the IP aside from a scan.  An Arduino, a Raspberry Pi, a networked webcam, all of these things need to be found once connected.  The scan is also useful for getting the MAC address of devices on the network.  The IP is dynamic on a network-by-network basis; a MAC address is a unique identifier.  Knowing the MAC address is useful for building firewall rules and setting up static IPs assigned by the router for devices like phones or laptops, where assigning IPs on the device can get hairy.
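As an aside, the same sort of sweep can be done from a command line with nmap, which isn’t one of the two tools here but is handy when I’m already on a shell; the subnet is an example.

# Ping-scan the local subnet and list responding hosts (run with sudo to also see MAC addresses)
sudo nmap -sn 192.168.1.0/24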

The other reason for doing a network scan is intrusion detection.  Generally speaking, I don’t expect to see hackers or anything on my home network.  This is more for checking things like whether my kids’ devices are connected, or occasionally whether one of my kids has a new device on the network, borrowed or whatever, that I am not aware of.

Ultimately I want to set up a little network monitoring system on a server to do these sorts of checks in real time, but both of these tools have served me well for years at doing the job quickly and simply.

Both are also useful for poking around foreign networks.  You can see what machines are on an open WiFi hotspot and see if they have any open shared files.  Though some open hotspots are smart enough to block such scans.