My Computing Journey – Part 2 – Franklin PC-8000

Welcome to part 2 of my history with computers.

Let’s get into a much more robust part of my computer-using history. The Commodore 64 certainly was the seed, but this machine was what really catapulted my interest. It’s the first “real PC” I used growing up. Technically, like the Commodore and the machine in the next chapter, this computer belonged to my parents, but my friends and I used it a lot, for quite a few things beyond just gaming.

Though we did use it for gaming.

Gaming

Visually, it was actually kind of a downgrade from the Commodore. The C64 connected to a small color TV with a big chunky pair of knobs on the front (which would eventually become my bedroom TV). The Franklin had a monochrome monitor with two colors: black and green. It had two whole 5.25″ drives in it, no 3.5″ disk drive, and certainly no hard drive. It did have a cool dot matrix printer though, which I’ll touch on a bit more in a bit.

It’s worth sidetracking a bit during this time period to mention that my out-of-state but fairly frequently visited grandpa had a Tandy 1000 machine, with a color monitor, and that was totally amazing. It pretty much overlaps with this same era of my home PC use, and my grandpa was the source of essentially all the programs and games I had at home. I have no idea where he got them, but I made copies of most everything he had, and his disks were all copied from somewhere. Mostly I remember playing two titles on my grandpa’s PC, along with my brother and cousins: King’s Quest 1, which we never could figure out, but it was fun, and Leisure Suit Larry 1.

Now, Leisure Suit Larry, for the uninformed, is an old “adult” game series. We did not know this, and we never did reach any of the adult content, because, like King’s Quest, we made some progress, and the game was funny, but we never got past a certain point. Specifically, we never figured out the door password (it’s Ken Sent Me), so we never could progress the plot beyond drinking at the bar, gambling at the casino, and buying booze and “lubbers” at the convenience store. The age gate on this game was a series of questions that “only adults would know.” So my cousin and I would load the game, then go to the kitchen where our parents were hanging out, and ask them the questions to get the answers.

Anyway, I don’t believe either of these games worked on the Franklin PC we had at home, because it didn’t have EGA graphics.

There were others, but the two most notable games we played at home were The Ancient Art of War and SimCity. One was an early sort of RTS game; the other was, well, SimCity. Both are notable here because they had user-created content. SimCity is all about user-created cities. Even with the limits of the game, I remember sometimes building out mirrored cities, then using the disasters to pretend they were at war. SimCity also had some DRM: you had to enter the population of a city from a sheet of paper based on some hieroglyphs. My friend actually owned the game, so I would just call him up and get the numbers. I also had a selection of them written down in a notebook, and would just close and reload the game until the random city was one I had marked down.

I also would use that sweet printer to print maps, because that was a feature of the first SimCity. You could print out your city, and it would spit out, I think, 16 sheets of paper that you could tape together into a 4×4 block and have a huge, cool map poster. I may have one buried somewhere, too. I would then color in all the zones with marker, in the correct colors, to make it look cool. You might wonder how I knew what the colors were; well, my grandpa had that Tandy 1000 and my friend had a color PC, so I was aware the game had colors, I just didn’t get them.

Ancient Art of War let you make custom maps and missions, which was so awesome, and I spent a lot of time making maps. Assuming the data hasn’t been corrupted, I have copies of those maps somewhere; maybe I’ll post them. There were several other games I played a lot that also had user-generated content. There was a golf game where you could make courses with dinosaurs and play as Jack Nicklaus, and back then I had no idea what the Joker guy (Jack Nicholson) had to do with golf, but ok, whatever. There was a baseball game called Earl Weaver Baseball where you could make custom teams, and I would make teams themed around video games, like a River City Ransom team, and a Mega Man team where every player had maxed-out stats (because robots are perfect).

I think my point is that this was part of the birth of my interest in digital creation. But not just for games.

Programming

During this time period, my dad was going to college through his job, getting a degree in Computer Science. I have no idea what a Computer Science degree in the 80s involved, but I vaguely remember him graduating. At some point, presumably because he was learning it as part of the curriculum, he taught me a bit of BASIC programming.

I would have been like, 8 or 9 at this point. I showed my friends how to do it as well, and we would make silly little useless programs that would print out funny patterns scrolling on the screen. Or “super secure password” systems, along the lines of:

10 INPUT "PASSWORD"; P$
20 IF P$ = "SECRETPASSWORD" THEN PRINT "SECRET DATA" ELSE PRINT "NO WAY LOSER"

I have no idea if that’s what we actually wrote back then. I pulled up a guide to IF/ELSE in BASIC and cobbled this together; the single-line IF…THEN…ELSE form should at least run in the GW-BASIC of that era.

The point is, it was fun. And it was my first experience with actual programming.

Newsletters

Then there are the newsletters my friends and I would produce using a program called Newsmaster. I have this vague idea that this was the “start” of my writing desire, and the newsletters we made were the precursor to The Chaos Xone, my first website, which evolved into Lameazoid.com. These were simple one-page, video-game-themed “newsletters.” You can actually read these here (Issue 2, Issue 3, Issue 4), translated into HTML, if you want. They are as basic as you would expect for something produced by 10-12 year old kids.

But that was yet another growing seed of interest. So much started with this machine, a real actual PC with actual useful programs. This doesn’t even touch on the part where it was an 80s IBM-compatible PC, which meant booting to DOS from a floppy, because there wasn’t any hard drive in it at all. There was no Windows; it was all a command-line interface. Yet another skill and seed learned from this machine.

Weekly Wrap-Up (07.30.2023 to 08.05.2023)

Another week, another weekly wrap-up. This is technically the trickiest post I’m doing right now, since it’s the only one I can’t really pre-write. If I were not trying to do 31 days of posts for Blaugust, it wouldn’t matter; I’d probably just skip it sometimes.

Not a lot new or exciting anyway. I had some craziness at work but I don’t blog about work.

Probably the most exciting thing this week was Guardians of the Galaxy 3 coming to Disney+, so I finally got to watch it. It’s not exactly glowing praise, but at the very least I didn’t come away thinking it was very “meh,” like most of the newer MCU projects. Like Secret Invasion, which I think I finished last week; it was ok, but not super impressive. A few things on GotG3: I kind of wish they had toned down some of the ending a bit. Mild spoilers, kept as generic as possible, but it felt like they needed to save that “large group of people” entirely because, for some reason, every Marvel climax has to have huge stakes where the heroes save large groups of people. Like, maybe they could have just gone there, faced off with the bad guys, and had a little fight, without all the extra.

The Activity Log

This part is partially just for my future reference, and it maybe deserves its own name. So we’ll call it the Activity Log.

No new toy stuff this week. Well, not directly toys: I did buy another set of acrylic riser shelves from a daily deal off Amazon. I kind of needed one more set, and now I’m pretty good on little shelves for a while. Being able to lift some of the stuff in the back of a deeper shelf really helps with the aesthetics of a display.

I did pick up a bundle of games on Humble Bundle. Because Baldur’s Gate III is the recently released hotness, Humble Bundle has a bundle (affiliate link) of older titles, including Baldur’s Gate I and II, Planescape: Torment, Icewind Dale, and Neverwinter Nights. I think I may have a couple of these on CDs somewhere, but I don’t seem to have them on any modern platform, and the bundle was cheap (a lot of HBs have become overpriced), so I figured why not.

Another thing that is probably worth tracking a bit, if only for my own sanity, is books, because I often impulse-buy books that seem interesting from Kindle Daily Deals (and elsewhere). This week’s books:

  • The Elderon Chronicles (Books 1-3): A Space Colonization Science Fiction Collection by Tarah Benner – I usually steer clear of these “bundles of Sci-Fi Books” but this one seemed appealing so I figured why not.
  • Bilbo’s Last Song by J.R.R. Tolkien – I have most of Tolkien’s works already, and I didn’t have this one.
  • This Is Why We Can’t Have Nice Things: Mapping the Relationship between Online Trolling and Mainstream Culture by Whitney Phillips – I have a bad habit of randomly picking up these sorts of “cultural exploration with a witty title” books.
  • Nordic Tales: Folktales from Norway, Sweden, Finland, Iceland, and Denmark (Tales of Book 5) – I have a variety of these folktale books from different cultures because one of my many interests, in general, is various world cultures and history.

Lastly, there’s music. I didn’t listen to anything new, really, just a lot of stuff I already listen to. A little Nirvana, a little Orla Gartland, a little Alanis Morissette, a little Sigrid. Not trying anything out of the ordinary this week. It’s just been kind of a boring week overall, I suppose.

Alanis Morissette – Jagged Little Pill

Here’s another album for the “this is already so popular” list of albums, Alanis Morissette’s Jagged Little Pill. Per Wikipedia, it’s the 13th biggest-selling album ever, and the 3rd biggest put out by a woman. There is a good chance that you’ve at least heard a song from this album somewhere. It’s an album that really embodied a lot of the 90s feel at the time. It’s an album I listened to a lot in high school and beyond, and it’s a strong, strong contender for my “most listened-to album.” I like to track music as much as possible these days with Last.fm, but there are a lot of gaps in that record from the before times, and this is one of them. Others include Pink Floyd’s The Wall, Tom Petty and the Heartbreakers’ Greatest Hits, and probably a few Aerosmith albums.

Why cover this album now? Because in a few days, I’m going to see Alanis in concert at the Illinois State Fair. I don’t really have a “bucket list,” but if I did, going to an Alanis Morissette concert would be on it, even if it’s 25 years late. I have not really picked up on a lot of Alanis’ later music, though I want to. Supposed Former Infatuation Junkie is the only other album of hers I have really listened to, and it’s ok, but not quite as good as Jagged Little Pill. Something I wasn’t aware of until recently, after watching the documentary Jagged, is that Alanis actually had a few albums before Jagged Little Pill that were essentially just regular, boring pop music.

Which was part of what made this album blow up and become a huge hit. There was plenty of angry alternative rock by dudes out there, but not a lot by women at the time. The whole album is a crazy ball of angry rage for a lot of its tracks. The first single from the album, You Oughta Know, has long been rumored to be about her former boyfriend Dave Coulier (Joey from Full House, the goofy guy), but it’s never been confirmed. With such lovely lyrics as:

Cause the joke that you laid in the bed that was me
And I’m not gonna fade as soon as you close your eyes
And you know it
And every time I scratch my nails
Down someone else’s back
I hope you feel it
Well, can you feel it?

– You Oughta Know – Alanis Morissette

The lyrics in general are part of what really makes the album appealing. It’s all so poetically blunt at times, full of anger and trauma. It also becomes self-reflective and vulnerable in other places. It starts out very in-your-face with All I Really Want, You Oughta Know, and Right Through You. Even the slightly more subdued of the early tracks, Perfect, has a build to how it’s all just too much, trying to be perfect. As the album goes on it becomes a lot more subdued, but it still tells a string of stories about broken history and broken relationships.

Probably the most well-known track on the album is Ironic, which is an extremely popular and enjoyable song, but it’s also the subject of ridicule and jokes, since most of the scenarios in the song are more straight tragic than actually ironic. Rain on your wedding day, ten thousand spoons when all you need is a knife, that sort of thing. The real irony, I suppose, is a song called Ironic without any irony in it. I doubt it runs that deep, though.

Probably my favorite tracks on the album are Hand in My Pocket and Mary Jane. I really like the whole building optimism of the former, and how it almost feels like it travels through the stages of a life with its slightly evolving chorus lyrics. Mary Jane is a nice slow ballad where Alanis really throws out those vocals.

This is also the other reason I think this album became so popular. It’s not just the lyrics, but the way they are delivered. No one thinks twice about scream-singing with male bands, but Alanis helped bring this concept to her music. She has a very distinctive, almost yodeling screech at times in her voice which feels like it should be off-putting, but instead it just drives the whole energy of the album. It pushes the rage when needed. It pushes the 90s alternative “who gives a shit really?” vibe when needed. There is also an interesting, almost folksy feeling to a lot of her tracks.

There’s probably a reason Alanis Morissette never really ended up with a ton of staying power on her later works: Jagged Little Pill just really embodied the times and left an influential legacy on music, but released at any other time, it probably wouldn’t have taken off at all. I’m definitely not saying it’s a bad album, just that it probably doesn’t resonate with people who weren’t there, so to speak.

Code Project – Goodreads RSS to HTML (Python)

I already have my Letterboxd watches set up to syndicate here, to this blog, mostly for archival purposes. When I log a movie in Letterboxd, a plug-in called “RSS Importer” catches it from the RSS feed and makes a post. They aren’t the prettiest posts, and I may look into adjusting the formatting with some CSS, but they are there. I really want to do the same for my Goodreads reading. Goodreads lists all have an RSS feed, so reason would have it that I could simply put that feed into RSS Importer and have the same syndication happen.

For some reason, it throws out an error.

The feed shows as valid and even gives me a preview post, but for whatever reason, it won’t create the actual posts. This is probably actually ok, since the Goodreads RSS feed is weird and ugly. I’ll get more into that in a bit.

The feed URL is here, at the bottom of each list.

I decided that I could simply do it myself, with Python. One thing Python is excellent for is data retrieval and manipulation. I’m already doing something similar with my FreshRSS syndication posts. I wanted to run through a bit of the process flow I used for creating this script, partially because it might help people who are trying to learn programming understand a bit more about how creating a program, at least a simple one, actually works.

There were some basic maintenance tasks needing to be done. Firstly, I made sure I had a category on my WordPress site to accept the posts; I had this already because I needed it while trying to get RSS Importer to work. Secondly, I created a new project in PyCharm. Visual Studio Code works as well, any number of IDEs work, I just prefer PyCharm for Python. In my main.py file, I also added some commented-out bits at the header with URLs cut and pasted from Goodreads. I also verified these feeds actually worked in an RSS reader.

For the actual code, there are basically three steps to this process:

  • Retrieve the RSS feed
  • Process the RSS Feed
  • Post the processed data.

Part three here is essentially already done. I can easily lift the code from my FreshRSS poster, replace the actual post data payload, and let it go. I can’t process data at all without data to process, so step one is to get the RSS data. I could probably work it out from my FreshRSS script as well, but instead I decided to just refresh my memory by searching for “Python Get RSS Feed”. Which brings up the first of the two core points I want to make in this post.

Programming is not about knowing all the code.

Programming is more often about knowing what process needs to be done, and knowing where and how to use the code needed. I don’t remember the exact libraries and syntax to get an RSS feed and feed it through Beautiful Soup. I know that I need to get an RSS feed, and I know I need Beautiful Soup.

My search returned this link, which I cribbed some code from, modifying the variables as needed. I basically skimmed through to just before “Outputting to a file”. I don’t need to output to a file; I can just do some print statements during debugging, and later it will all output to WordPress through a constructed string.
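For the curious, that eventual posting step (step three in the list above) boils down to a single authenticated call against WordPress’s standard REST API posts endpoint. This is only a hedged sketch, not my actual FreshRSS poster code; the site URL, credentials, and category ID are placeholders:

```python
def build_post_payload(title, content, category_id):
    # Minimal body for the WordPress REST API posts endpoint.
    # "status" could be "draft" if you want to eyeball posts before they go live.
    return {
        "title": title,
        "content": content,
        "status": "publish",
        "categories": [category_id],
    }

payload = build_post_payload("Goodreads Books", "<div>post body</div>", 42)

# The actual send is then one authenticated request with the requests library,
# along the lines of (placeholder site and application password):
#
# requests.post(
#     "https://example.com/wp-json/wp/v2/posts",
#     json=payload,
#     auth=("username", "application-password"),
# )
```

WordPress application passwords make the authentication side painless for small scripts like this.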

I did several runs along the way, finding that I needed to use lxml instead of xml for the features argument in the Beautiful Soup call. I also opted to put the feed URL in a variable instead of directly in the code as the original post had it; it’s easier to swap out. I also did some testing by simply printing the output of “books” to make sure I was actually getting useful data, which I was.

At this point, my code looked something like this (not exactly, but something like it):

import requests
from bs4 import BeautifulSoup

feed_url = "Goodreads URL HERE"

def goodreads_rss(feed_url):
    book_list = []
    try:
        r = requests.get(feed_url)
        soup = BeautifulSoup(r.content, features='lxml')
        books = soup.findAll('item')
        for a in books:
            title = a.find('title').text
            link = a.find('link').text
            published = a.find('pubDate').text
            book = {
                'title': title,
                'link': link,
                'published': published,
            }
            book_list.append(book)
        return book_list
    except Exception as e:
        print('The scraping job failed. See exception: ')
        print(e)

print('Starting scraping')
print(goodreads_rss(feed_url))
print('Finished scraping')

I was getting good data, so Step 1 (above) was done. The real meat here is processing the data. As I mentioned before, Goodreads gives a really ugly RSS feed. It has several tags for data in it, but for some reason they aren’t actually used. Here is a sample of what a single book looks like:

<item>
<guid></guid>
<pubdate></pubdate>
<title></title>
<link/>
<book_id>5907</book_id>
<book_image_url></book_image_url>
<book_small_image_url></book_small_image_url>
<book_medium_image_url></book_medium_image_url>
<book_large_image_url></book_large_image_url>
<book_description>Written for J.R.R. Tolkien’s own children, The Hobbit met with instant critical acclaim when it was first published in 1937. Now recognized as a timeless classic, this introduction to the hobbit Bilbo Baggins, the wizard Gandalf, Gollum, and the spectacular world of Middle-earth recounts of the adventures of a reluctant hero, a powerful and dangerous ring, and the cruel dragon Smaug the Magnificent. The text in this 372-page paperback edition is based on that first published in Great Britain by Collins Modern Classics (1998), and includes a note on the text by Douglas A. Anderson (2001).]]&gt;</book_description>
<book id="5907">
<num_pages>366</num_pages>
</book>
<author_name>J.R.R. Tolkien</author_name>
<isbn></isbn>
<user_name>Josh</user_name>
<user_rating>4</user_rating>
<user_read_at></user_read_at>
<user_date_added></user_date_added>
<user_date_created></user_date_created>
<user_shelves>2008-reads</user_shelves>
<user_review></user_review>
<average_rating>4.28</average_rating>
<book_published>1937</book_published>
<description>
<img alt="The Hobbit (The Lord of the Rings, #0)" src="https://i.gr-assets.com/images/S/compressed.photo.goodreads.com/books/1546071216l/5907._SY75_.jpg"/><br/>
                                    author: J.R.R. Tolkien<br/>
                                    name: Josh<br/>
                                    average rating: 4.28<br/>
                                    book published: 1937<br/>
                                    rating: 4<br/>
                                    read at: <br/>
                                    date added: 2011/02/22<br/>
                                    shelves: 2008-reads<br/>
                                    review: <br/><br/>
                                    ]]&gt;
  </description>
</item>

Half the data isn’t within the useful tags; instead, it’s just dumped down below the image tag inside the description. Not all of it, though. It’s ugly and weird. The other thing that REALLY sticks out, if you skim through it: there is NO title. The book title isn’t (quite) even in the feed; the title tag is empty. Instead, it just has a book ID, which is a number that presumably relates to something on Goodreads.

In the above code, there is a line “for a in books”, which starts a loop and builds an array of book objects. This is where all the data I’ll need later will go for each book, in a format similar to what is shown in “title = a.find('title').text”. First I pulled out the easy ones that I might want later when constructing the actual post:

  • num_pages
  • book_description
  • author_name
  • user_rating
  • isbn (Not every book has one, but some do)
  • book_published
  • img

Lastly, I also pulled out the “description” and set to work parsing it. It’s just a big string, and it’s regularly formatted across all books, so I split it on the br tags. This gave me a list with each line as an entry. I counted out the index of each element, then split those lines again on “: ”, assigning the value at index [1] (the second value) to various variables.
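As a toy example of that split logic, here it is run against a trimmed-down stand-in for the description payload shown above (the real feed also pads each line with a pile of leading spaces, which is why the final script does an extra split on a long whitespace string):

```python
# A simplified stand-in for the <description> blob; the real one starts
# with an <img> tag and pads each line with heavy indentation.
description = (
    '<img alt="The Hobbit" src="cover.jpg"/><br/>'
    "author: J.R.R. Tolkien<br/>"
    "name: Josh<br/>"
    "average rating: 4.28<br/>"
    "book published: 1937<br/>"
    "rating: 4<br/>"
)

lines = description.split("<br/>")   # one list entry per "line" of the blob
author = lines[1].split(": ")[1]     # 'J.R.R. Tolkien'
published = lines[4].split(": ")[1]  # '1937'
rating = lines[5].split(": ")[1]     # '4'
```

Since every book's description follows the same layout, the same indices work across the whole feed.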

The end result is an array of book objects with usable data that I can later build into a string that will be delivered to WordPress as a post. The code at this point looks like this:

import requests
from bs4 import BeautifulSoup

url = "GOODREADS URL"
book_list = []

def goodreads_rss(feed_url):
    try:
        r = requests.get(feed_url)
        soup = BeautifulSoup(r.content, features='lxml')
        books = soup.findAll('item')
        for a in books:
            print(a)
            book_blob = a.find('description').text.split('<br/>')
            book_data = book_blob[0].split('\n                                     ')
            author = a.find('author_name').text
            isbn = a.find('isbn').text
            desc = a.find('book_description').text
            image = str(a.find('img'))
            title = str(image).split('"')[1]
            article = {
                'author': author,
                'isbn': isbn,
                'desc': desc,
                'title': title,
                'image': image,
                'published': book_data[4].split(": ")[1],
                'my_rating': book_data[5].split(": ")[1],
                'date_read': book_data[7].split(": ")[1],
                # Uncomment for debugging
                # 'payload': book_data,
            }
            book_list.append(article)
        return book_list
    except Exception as e:
        print('The scraping job failed. See exception: ')
        print(e)

print('Starting scraping')
for_feed = goodreads_rss(url)
for each in for_feed:
    print(each)

And a sample of the output looks something like this (3 books):

{'author': 'George Orwell', 'isbn': '', 'desc': ' When Animal Farm was first published, Stalinist Russia was seen as its target. Today it is devastatingly clear that wherever and whenever freedom is attacked, under whatever banner, the cutting clarity and savage comedy of George Orwell’s masterpiece have a meaning and message still ferociously fresh.]]>', 'title': 'Animal Farm', 'image': '<img alt="Animal Farm" src="https://i.gr-assets.com/images/S/compressed.photo.goodreads.com/books/1424037542l/7613._SY75_.jpg"/>', 'published': '1945', 'my_rating': '4', 'date_read': '2011/02/22', 'my_review': ''}
{'author': 'Philip Pullman', 'isbn': '0679879242', 'desc': "Can one small girl make a difference in such great and terrible endeavors? This is Lyra: a savage, a schemer, a liar, and as fierce and true a champion as Roger or Asriel could want--but what Lyra doesn't know is that to help one of them will be to betray the other.]]>", 'title': 'The Golden Compass (His Dark Materials, #1)', 'image': '<img alt="The Golden Compass (His Dark Materials, #1)" src="https://i.gr-assets.com/images/S/compressed.photo.goodreads.com/books/1505766203l/119322._SX50_.jpg"/>', 'published': '1995', 'my_rating': '4', 'date_read': '2011/02/22', 'my_review': ''}
{'author': 'J.R.R. Tolkien', 'isbn': '', 'desc': 'Written for J.R.R. Tolkien’s own children, The Hobbit met with instant critical acclaim when it was first published in 1937. Now recognized as a timeless classic, this introduction to the hobbit Bilbo Baggins, the wizard Gandalf, Gollum, and the spectacular world of Middle-earth recounts of the adventures of a reluctant hero, a powerful and dangerous ring, and the cruel dragon Smaug the Magnificent. The text in this 372-page paperback edition is based on that first published in Great Britain by Collins Modern Classics (1998), and includes a note on the text by Douglas A. Anderson (2001).]]>', 'title': 'The Hobbit (The Lord of the Rings, #0)', 'image': '<img alt="The Hobbit (The Lord of the Rings, #0)" src="https://i.gr-assets.com/images/S/compressed.photo.goodreads.com/books/1546071216l/5907._SY75_.jpg"/>', 'published': '1937', 'my_rating': '4', 'date_read': '2011/02/22', 'my_review': ''}

I still would like to get the title, which isn’t its own entry, but each image uses the book title as its alt text, so I can use the previously pulled-out “image” string to get it. The image result is a complete HTML image tag and link. It’s regularly structured, so I can split it on the quote marks, then take the second entry (the title) and assign it to a variable. I shouldn’t have to worry about titles with quotes being an issue, since the way Goodreads sends the payload, those quotes would already have to be escaped or dealt with in some way, or the image tag itself wouldn’t work.

title = str(image).split('"')[1]
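
Run against the sample image tag from the feed dump above, that one-liner works out like this:

```python
# The sample image tag pulled from the feed earlier.
image = ('<img alt="The Hobbit (The Lord of the Rings, #0)" '
         'src="https://i.gr-assets.com/images/S/compressed.photo.goodreads.com/'
         'books/1546071216l/5907._SY75_.jpg"/>')

# Splitting on double quotes alternates tag text and attribute values,
# so index [1] is the alt text, i.e. the book title.
title = image.split('"')[1]
# → 'The Hobbit (The Lord of the Rings, #0)'
```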

I’m not going to go super deep into the formatting process, for conciseness, but it’s not really that hard, and the code will appear in my final code chunk. Basically, I want the entries to look like little cards, with a thumbnail image and most of the data pulled into my array formatted out. I’ll mock up something using basic HTML independently, then use that code to build the structure of my post string. It will look something like this when finished, with the variables stuck in place at the relevant points, so the code will loop through and insert all the values:

post_array = []
for each in for_feed:
    post = f'<div class="book-card"> <div> <div class="book-image">' \
           f'{each["image"]}' \
           f'</div> <div class="book-info"> <h3 class="book-title">' \
           f'{each["title"]}' \
           f'</h3> <h4 class="book-author">' \
           f'{each["author"]}' \
           f'</h4> <p class="book-details">' \
           f'Published: {each["published"]} | Pages: {each["pages"]}' \
           f'</p> <p class="book-review">'
    if each["my_rating"] != "0":
        post += f'My Rating: {each["my_rating"]}/5<br>'
    post += f'{each["my_review"]}' \
            f'</div> </div> <div class="book-description"> <p class="book-summary">' \
            f'Description: {each["desc"]}' \
            f'</p> </div> </div>'

    print(post)
    post_array.append(post)

I don’t use all of the classes I added, but I did add custom classes to everything, so I don’t have to go back and modify my code later if I want to add more formatting. I did make a bit of simple CSS that can be added to the WordPress custom CSS (or any CSS, actually, if you just wanted to stick this in a webpage) to make some simple cards. They should center in whatever container they get stuck inside; in my case, it’s the WordPress column.

.book-card {
    background-color: #DDDDDD;
    width: 90%;
    margin: 20px auto;
    padding: 10px;
    border: solid 1px;
    min-height: 200px;
}

.book-image {
    float: left;
    margin-bottom: 10px;
    margin-right: 20px;
    width: 100px;
}

.book-image img {
    width: 100%;
    object-fit: cover;
}

.book-info {
    margin: 10px;
}

The end result looks something like this. Unfortunately, the images in the feed are tiny, but that’s OK; they don’t need to be huge.

Something I noticed along the way: I had initially been using the “all books” RSS feed, which meant it was giving every book on my profile, not JUST read books. I switched the RSS feed to “read” and things still worked, but “read” only returns a maximum of 200 books. Fortunately, I use shelves based on year for my books, so I can go through each shelf and pull out ALL the books I have read over the years.
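Working through each yearly shelf is then just a matter of swapping the shelf parameter on the feed URL. The base URL below is a placeholder (a real Goodreads list feed URL embeds a user ID and key); the “YYYY-reads” naming matches the user_shelves value visible in the feed sample above:

```python
# Placeholder base URL; a real Goodreads list feed includes a user ID and key.
BASE_FEED = "https://www.goodreads.com/review/list_rss/USER_ID?key=KEY"

def shelf_feed_url(year):
    # My shelves are named like "2008-reads", one per year.
    return f"{BASE_FEED}&shelf={year}-reads"

# One feed URL per yearly shelf, each of which can be fed to goodreads_rss().
feeds = [shelf_feed_url(y) for y in range(2008, 2024)]
```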

Which leads me to a bit of a split in the process.

At some point, I’ll want to run this code, on a schedule somewhere, and have it check for newly read books (probably based on date), and post those as they are read.

But I also want to pull and post ALL the old reads, by date. These two paths will MOSTLY use the same code. For the new books, I’ll attach it to the “read” list, have it check the feed, then compare the date added on the latest entry, entry [0], to the current date. If it’s new, say within 24 hours, it’ll post the book as a new post.

Change of plans. Rather than make individual posts, I’m going to just generate a pile of HTML code and make backdated posts for each previous year. Much simpler and cleaner. I can then run the code once a year and make a new post on December 31st. Goodreads already serves the basic purpose of “book tracking”; I mostly just want an archive version. It’s also cleaner looking on the blog, and it means I don’t need to run the script all the time or have it make the posts itself.

For the archive, I’ll pull all entries for each of my yearly shelves, then make a post for all of them, replacing the “published date” on each with the “date added” date, because I want the entries on my blog to match the (approximate) finished date.

I think, we’ll see.

I’ve decided to just strike out these changes of plans. After making the post, I noticed the date added is not the date read. I know the yearly shelves are accurate, but the date added is when I added the book, probably from some other notes at a later date. Unfortunately, the RSS feed doesn’t have any usable entry for “date read,” even though it’s a field you can set as a user, so I just removed it. It’s probably for the best; Goodreads only allows one “date read,” so any books I’ve read twice wouldn’t be accurate anyway.

This whole new plan of yearly digests also means I can skip step 3 above in the end. I’m not making the script create the posts; I can cut and paste and make them manually, which lets me double-check things. One little bit I found: there was an artifact of some brackets in the description, so I just added a string slice to chop it off.

I guess it’s a good idea to, at some point, mention the second of the two points I wanted to make here: reusing code. Programming is all about reusing code. Your own code, someone else’s code, it doesn’t matter; code is code. There are only so many ways to do the same thing in code, so it all ends up looking generically the same. I picked out bits from that linked article and made them work for what I was doing, and I’ll pick bits from my FreshRSS poster code, cleaning them up as needed to work here. I’ll reuse 90% of the code to make two nearly identical scripts: one to run on a schedule, and one to be run several times manually. This also feeds back into point one, knowing what code you need and how to use it. Find the code you need, massage it together into one new block of code, and debug out the kinks. Wash, rinse, repeat.

The output is located here, under the Goodreads category.

Here is the finished complete script:

import requests
from bs4 import BeautifulSoup

url = "GOODREADS URL HERE"

book_list = []

def goodreads_rss(feed_url):
    try:
        r = requests.get(feed_url)
        soup = BeautifulSoup(r.content, features='lxml')
        books = soup.findAll('item')
        for a in books:
            # print(a)
            book_blob = a.find('description').text.split('<br/>')
            book_data = book_blob[0].split('\n                                     ')
            author = a.find('author_name').text
            isbn = a.find('isbn').text
            pages = a.find('num_pages').text
            # Slice off the trailing bracket artifact in the description
            desc = a.find('book_description').text[:-3]
            image = str(a.find('img'))
            title = str(image).split('"')[1]
            article = {
                'author': author,
                'isbn': isbn,
                'desc': desc,
                'title': title,
                'image': image,
                'pages': pages,
                'published': book_data[4].split(": ")[1],
                'my_rating': book_data[5].split(": ")[1],
                'date_read': book_data[7].split(": ")[1],
                'my_review': book_data[9].split(": ")[1],
                # Uncomment for debugging
                # 'payload': book_data,
            }
            book_list.append(article)
        return book_list
    except Exception as e:
        print('The scraping job failed. See exception: ')
        print(e)
        return []  # return an empty list so the loop below doesn't crash

print('Starting scraping')
for_feed = goodreads_rss(url)

post_array = []
for each in for_feed:
    post = f'<div class="book-card"> <div> <div class="book-image">' \
           f'{each["image"]}' \
           f'</div> <div class="book-info"> <h3 class="book-title">' \
           f'{each["title"]}' \
           f'</h3> <h4 class="book-author">' \
           f'{each["author"]}' \
           f'</h4> <p class="book-details">' \
           f'Published: {each["published"]} | Pages: {each["pages"]}' \
           f'</p> <p class="book-review">'
    if each["my_rating"] != "0":
        post += f'My Rating: {each["my_rating"]}/5<br>'
    post += f'{each["my_review"]}' \
            f'</div> </div> <div class="book-description"> <p class="book-summary">' \
            f'Description: {each["desc"]}' \
            f'</p> </div> </div>'

    print(post)
    post_array.append(post)

Next Door is Something Else

Social networking is so bizarre in all the little niches that get built up. I’ve been involved with so many social networks over the years in some way, and it’s interesting to watch them rise and fall and evolve, though sometimes it’s incredibly frustrating too. I could definitely do without the TikTokification of LITERALLY EVERY social website. From LiveJournal to Myspace to Facebook, we all hop along chasing easy connections.

What bugs me is just how much they all try to be the same. The real obvious one I already mentioned is that little row of circles at the top of the screen that leads to an endless path of random 15-second video clips. There is also the incredibly annoying “algorithmic feed” that everyone has. People have given up complaining about it these days, but I heard lots of “normal people” complaining about that one. Everything used to just be “everyone you actually follow, in reverse chronological order”. You could scroll down to the last thing you saw and know you were done.

Anyway, it seems weird to have all these social networks, but when they all stay more in their lane, they each serve a good, different purpose. Part of it is about mindset. If I want to see photos, I used to go to Flickr, then I started going to Instagram. Now there isn’t really anywhere, because Instagram is all TikTok videos. Threads is kind of more photos, but frankly, I am already tired of and done with Threads. I don’t need another Twitter replacement; I have Mastodon. Mastodon serves its purpose well: follow interesting nerdy types and make slightly shitposty posts.

I still keep up with Facebook, sort of. I’ve mostly used Facebook to follow family members, but in the last few years I’ve started branching out a bit into groups. I don’t really post much there at all though. I had ideas of posting more on Facebook via pages, but nobody ever gets shown pages unless the admin pays for placement as far as I can tell.

I used to be a pretty regular Reddit user, but the API change killed access from 3rd party apps, and the default interface is shit, so I just stopped visiting and posting completely. I’m actually surprised how easy it was. I use a lot of Discord, but that has a whole host of issues of its own, like how homogenous every server is and how notifications are impossible to use.

There are also more niche sort of social websites, and forums on specific topics, like the old days before Facebook. I don’t use them much but there are also tracking websites like Letterboxd, Last.fm, Goodreads, etc that all have their own little communities.

One that feels like it doesn’t come up in conversation much is Next Door. I didn’t even know this was a thing until I moved and got a random postcard. It’s essentially Facebook, but geographically focused. You are, by default, part of a “neighborhood”, but almost every post goes to “nearby neighbors”, which as far as I can tell encompasses my entire city. I actually do find it pretty useful, to a point, but I also never ever just browse it; I just occasionally check notifications that come to my email.

Mostly because the actual interface is absolutely terrible. It’s like, an ad every other post, maybe even more frequently. Also, the email notifications tend to be the useful posts, like information from the City Offices. The posts you “miss” are very low quality, at least in my area. They fall into a weird mix of categories.

  • I found someone’s dog/cat, with a photo.
  • Someone broke into my/a car.
  • Someone “suspicious” was walking through the neighborhood.
  • Someone posting a reply to another post as a main post, for some reason, how do you even fucking do that????
  • People advertising local services, usually handyman services or maid services or transport services.

The really interesting ones are the “suspicious person” posts. They almost always have some night vision camera footage attached. I am convinced these are, in fact, stealth advertisements. I became even more convinced after I looked up the address from one on Google Maps street view and roamed up and down the street for like an hour, and could not find any of the houses in the background of the video. It was a short street too, like two blocks long.

The “broke into my car” posts are more often “I left my car unlocked.” Which is tragic, but sorry, lock your car. The service advertisement ones always feel a bit shady, because you know these people probably only take cash and aren’t running things as any sort of business.

Then there is that “Replies as posts.” A lot of Next Door really feels like “Old People Facebook” on steroids. Maybe it’s more just “Localized Facebook.” I will admit, I tried to follow a bunch of local news and pages once, and the comment sections were a complete cesspool of idiocy and arguments. Next Door is almost as bad, but not quite, because it’s used less, and the app seems to algorithmically dump posts that start to turn into shitty comment blackholes. I imagine people “follow up” less on Next Door as well.

There are still plenty of crazy nutballs, like this guy here, screeching about people being “woke”. I should add for context, this reply was mad at the OP of a post concerned about traffic safety in an area where they are doing construction. It was a post that could have been made by anyone, regardless of political alignment, expressing actual, legitimate concerns.

Apparently, traffic safety is “woke” now? Hell if I know.

The real thing that this all confirms to me is, I don’t really care to know these random local neighbor folks at least 50% of the time. It’s much easier to make friends on other Social Media where there is a lot more ability to filter for interests and personality.