Blaugust 2023

Alanis Morissette – Jagged Little Pill

Here’s another album for the “This is already so popular” list of albums, Alanis Morissette’s Jagged Little Pill. Per Wikipedia, it’s the 13th biggest-selling album ever, and the 3rd biggest put out by a woman. There is a good chance that you’ve at least heard a song from this album somewhere. It’s an album that really sort of embodied a lot of the 90s feel at the time. It’s an album that I listened to a lot in high school and beyond, and it’s a strong, strong contender for “Most listened to album”. I like to track music as much as possible these days with Last.fm, but there are a lot of gaps in that record from the before times, and this is one of them. Others include Pink Floyd’s The Wall, Tom Petty and the Heartbreakers’ Greatest Hits, and probably a few Aerosmith albums.

Why cover this album now? Because in a few days, I’m going to see Alanis in concert at the Illinois State Fair. I don’t really have a “bucket list”, but if I did, going to an Alanis Morissette concert would be on it, even if it’s 25 years late. I have not really picked up on a lot of Alanis’ later music, though I want to. Supposed Former Infatuation Junkie is the only other album I have really listened to by her, and it’s ok, but not quite as good as Jagged Little Pill. Something I wasn’t aware of until recently, after watching the documentary Jagged, is that Alanis actually had a few albums before Jagged Little Pill that were essentially just regular, boring pop music.

Which was part of what made this album blow up and become a huge hit. There was plenty of angry alternative rock music by dudes out there, but not a lot by women at the time. The whole album is this crazy ball of angry rage for a lot of its tracks. The first single from the album, You Oughta Know, has long been rumored to be about her former boyfriend Dave Coulier (Joey from Full House, the goofy guy), but it’s never been confirmed. With such lovely lyrics as

Cause the joke that you laid in the bed that was me
And I’m not gonna fade as soon as you close your eyes
And you know it
And every time I scratch my nails
Down someone else’s back
I hope you feel it
Well, can you feel it?

– You Oughta Know – Alanis Morissette

The lyrics in general are part of what really makes the album appealing. It’s all so poetically blunt at times, full of anger and trauma. It also becomes self-reflective and vulnerable in other places. It starts out very in-your-face with All I Really Want, You Oughta Know, and Right Through You. Even the slightly more subdued of the early tracks, Perfect, has a build to how it’s all just too much, trying to be perfect. As the album goes on it becomes a lot more subdued, but it still tells a string of stories about broken history and broken relationships.

Probably the most well-known track on the album is Ironic, which is an extremely popular and enjoyable song, but it’s also the subject of ridicule and jokes, as most of the scenarios in the song are more straight tragic than actually ironic. Rain on your wedding day, ten thousand spoons when all you need is a knife, that sort of thing. The real irony, I suppose, is a song called Ironic without any irony in it. I doubt it runs that deep, though.

Probably my favorite tracks on the album are Hand in My Pocket and Mary Jane. I really like the whole building optimism of the former, and how it almost feels like it travels through stages of a life with its slightly evolving chorus lyrics. Mary Jane is a nice slow ballad where Alanis really throws out those vocals.

This is also the other reason this album became so popular, I think. It’s not just the lyrics, but the way they are delivered. No one thinks twice about scream-singing with male bands, but Alanis helped bring this concept to her music. She has a very distinctive, almost yodeling screech at times in her voice which feels like it should be off-putting, but instead it just drives the whole energy of the album. It pushes the rage when needed. It pushes the 90s alternative “who gives a shit really?” vibe when needed. There is also a lot of interesting, almost folksy feeling to her tracks.

There’s probably a reason Alanis Morissette never really ended up with a ton of staying power on her later works: Jagged Little Pill just really embodied the times and left an influential legacy on music, but released at any other time, it probably wouldn’t have taken off at all. I’m definitely not saying it’s a bad album, I am just saying that it probably doesn’t resonate with people who weren’t there, so to speak.

Code Project – Goodreads RSS to HTML (Python)

I already have my Letterboxd watches set up to syndicate here, to this blog, mostly for archival purposes. When I log a movie in Letterboxd, a plug-in catches it from the RSS feed and makes a post. This is done using a plug-in called “RSS Importer”. The posts aren’t the prettiest, and I may look into adjusting the formatting with some CSS, but they are there. I really want to do the same for my Goodreads reading. Goodreads lists all have an RSS feed, so reason would have it that I could simply put that feed into RSS Importer and have the same syndication happen.

For some reason, it throws out an error.

The feed shows as valid and even gives me a preview post, but for whatever reason, it won’t create the actual posts. This is probably actually ok, since the Goodreads RSS feed is weird and ugly. I’ll get more into that in a bit.

The feed URL is here, at the bottom of each list.

I decided that I could simply do it myself, with Python. One thing Python is excellent for is data retrieval and manipulation. I’m already doing something similar with my FreshRSS syndication posts. I wanted to run through a bit of the process flow I used for creating this script, partially because it might help people who are trying to learn programming understand a bit more about how creating a program, at least a simple one, actually sort of works.

There were some basic maintenance tasks needing to be done. Firstly, I made sure I had a category on my WordPress site to accept the posts; I had this already because I needed it while trying to get RSS Importer to work. Secondly, I created a new project in PyCharm. Visual Studio Code works as well; any number of IDEs would work, I just prefer PyCharm for Python. In my main.py file, I also added some commented-out bits at the header with URLs cut and pasted from Goodreads. I also verified these feeds actually worked with an RSS reader.

For the actual code, there are basically three steps to this process:

  • Retrieve the RSS feed
  • Process the RSS Feed
  • Post the processed data.

Part three here is essentially already done. I can easily lift the code from my FreshRSS poster, replace the actual post data payload, and let it go. I can’t process data at all without data to process, so step one is to get the RSS data. I could probably work it out from my FreshRSS script as well, but instead, I decided to just refresh my memory by searching for “Python Get RSS Feed”. Which brings up the first of the two core points I want to make here in this post.
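For what it’s worth, posting to WordPress from a script generally goes through the WordPress REST API. My FreshRSS poster code isn’t shown here, so this is just a minimal sketch of what step three could look like; the endpoint URL, user, and application password are placeholders, not my real setup:

```python
import requests

# Placeholder endpoint; a real site would be https://yoursite.com/wp-json/wp/v2/posts
WP_ENDPOINT = "https://example.com/wp-json/wp/v2/posts"

def build_post(title, html_body, category_id):
    # Field names the WordPress REST API expects for a post payload.
    return {
        "title": title,
        "content": html_body,
        "status": "publish",
        "categories": [category_id],
    }

def send_post(payload, user, app_password):
    # WordPress Application Passwords work with plain HTTP Basic auth.
    r = requests.post(WP_ENDPOINT, json=payload, auth=(user, app_password))
    r.raise_for_status()
    return r.json()
```

With the payload built from processed feed data, a call like `send_post(build_post("My Books", html_string, 7), "user", "app-password")` would create the post.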

Programming is not about knowing all the code.

Programming is more often about knowing what process needs to be done, and knowing where and how to use the code needed. I don’t remember the exact libraries and syntax to get an RSS feed and feed it through Beautiful Soup. I know that I need to get an RSS feed, and I know I need Beautiful Soup.

My search returned this link, which I cribbed some code from, modifying the variables as needed. I basically skimmed through to just before “Outputting to a file”. I don’t need to output to a file, I can just do some print statements during debugging and then later it will all output to WordPress through a constructed string.

I did several runs along the way, finding that I needed to use lxml instead of xml in the features on the Beautiful Soup Call. I also opted to put the feed URL in a variable instead of directly in the code as the original post had it. It’s easy to swap out. I also did some testing by simply printing the output of “books” to make sure I was actually getting useful data, which I was.

At this point, my code looks something like this (not exactly, but something like it):

import requests
from bs4 import BeautifulSoup

feedurl = "Goodreads URL HERE"

def goodreads_rss(feed_url):
    book_list = []
    try:
        r = requests.get(feed_url)
        # Note: the lxml parser lowercases tag names, so 'pubDate' becomes 'pubdate'
        soup = BeautifulSoup(r.content, features='lxml')
        books = soup.findAll('item')
        for a in books:
            title = a.find('title').text
            link = a.find('link').text
            published = a.find('pubdate').text
            book = {
                'title': title,
                'link': link,
                'published': published
            }
            book_list.append(book)
        return book_list
    except Exception as e:
        print('The scraping job failed. See exception: ')
        print(e)

print('Starting scraping')
print(goodreads_rss(feedurl))
print('Finished scraping')

I was getting good data, and so Step 1 (above) was done. The real meat here is processing the data. As I mentioned before, Goodreads gives a really ugly RSS feed. It has several tags for data in it, but they aren’t actually used for some reason. Here is a sample of what a single book looks like:

<item>
<guid></guid>
<pubdate></pubdate>
<title></title>
<link/>
<book_id>5907</book_id>
<book_image_url></book_image_url>
<book_small_image_url></book_small_image_url>
<book_medium_image_url></book_medium_image_url>
<book_large_image_url></book_large_image_url>
<book_description>Written for J.R.R. Tolkien’s own children, The Hobbit met with instant critical acclaim when it was first published in 1937. Now recognized as a timeless classic, this introduction to the hobbit Bilbo Baggins, the wizard Gandalf, Gollum, and the spectacular world of Middle-earth recounts of the adventures of a reluctant hero, a powerful and dangerous ring, and the cruel dragon Smaug the Magnificent. The text in this 372-page paperback edition is based on that first published in Great Britain by Collins Modern Classics (1998), and includes a note on the text by Douglas A. Anderson (2001).]]&gt;</book_description>
<book id="5907">
<num_pages>366</num_pages>
</book>
<author_name>J.R.R. Tolkien</author_name>
<isbn></isbn>
<user_name>Josh</user_name>
<user_rating>4</user_rating>
<user_read_at></user_read_at>
<user_date_added></user_date_added>
<user_date_created></user_date_created>
<user_shelves>2008-reads</user_shelves>
<user_review></user_review>
<average_rating>4.28</average_rating>
<book_published>1937</book_published>
<description>
<img alt="The Hobbit (The Lord of the Rings, #0)" src="https://i.gr-assets.com/images/S/compressed.photo.goodreads.com/books/1546071216l/5907._SY75_.jpg"/><br/>
                                    author: J.R.R. Tolkien<br/>
                                    name: Josh<br/>
                                    average rating: 4.28<br/>
                                    book published: 1937<br/>
                                    rating: 4<br/>
                                    read at: <br/>
                                    date added: 2011/02/22<br/>
                                    shelves: 2008-reads<br/>
                                    review: <br/><br/>
                                    ]]&gt;
  </description>
</item>

Half the data isn’t within the useful tags; instead, it’s just down below the image tag inside the Description. Not all of it, though. It’s ugly and weird. The other thing that REALLY sticks out here, if you skim through it: there is NO usable “title” entry. The book title isn’t (quite) even in the feed. Instead, it just has a Book ID, which is a number that, presumably, relates to something on Goodreads.

In the above code, there is a line “for a in books”, which starts a loop and builds an array of book objects. This is where all the data I’ll need later will go, for each book, in a format similar to what is shown in “title = a.find(‘title’).text”. First I pulled out the easy ones that I might want when later constructing the actual post.

  • num_pages
  • book_description
  • author_name
  • user_rating
  • isbn (Not every book has one, but some do)
  • book_published
  • img

Lastly, I also pulled out the “description” and set to work parsing it out. It’s just a big string, and it’s regularly formatted across all books, so I split it on the br tags. This gave me a list with each line as an entry in the list. I counted out the index for each list element and then split them again on “: “, assigning the value at index [1] (the second value) to various variables.
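As a simplified illustration of that split-and-index approach (the real feed pads each line with a newline and a long run of spaces, and my script grabs fixed index positions rather than building a dictionary, but the idea is the same):

```python
# Trimmed-down sample of the description payload described above; the real
# one is padded with newlines and long runs of spaces between lines.
sample = ("author: J.R.R. Tolkien<br/>"
          "name: Josh<br/>"
          "average rating: 4.28<br/>"
          "book published: 1937<br/>"
          "rating: 4")

# Split on the br tags, then split each line once on ": " for key/value pairs.
fields = {}
for line in sample.split("<br/>"):
    key, value = line.strip().split(": ", 1)
    fields[key] = value

print(fields["book published"])  # 1937
```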

The end result is an array of book objects with usable data that I can later build into a string that will be delivered to WordPress as a post. The code at this point looks like this:

import requests
from bs4 import BeautifulSoup

url = "GOODREADS URL"
book_list = []

def goodreads_rss(feed_url):
    try:
        r = requests.get(feed_url)
        soup = BeautifulSoup(r.content, features='lxml')
        books = soup.findAll('item')
        for a in books:
            print(a)  # Debug output
            book_blob = a.find('description').text.split('<br/>')
            book_data = book_blob[0].split('\n                                     ')
            author = a.find('author_name').text
            isbn = a.find('isbn').text
            desc = a.find('book_description').text
            image = str(a.find('img'))
            title = image.split('"')[1]
            article = {
                'author': author,
                'isbn': isbn,
                'desc': desc,
                'title': title,
                'image': image,
                'published': book_data[4].split(": ")[1],
                'my_rating': book_data[5].split(": ")[1],
                'date_read': book_data[7].split(": ")[1],
                'my_review': book_data[9].split(": ")[1],
                # Uncomment for debugging
                #'payload': book_data,
            }
            book_list.append(article)
        return book_list
    except Exception as e:
        print('The scraping job failed. See exception: ')
        print(e)

print('Starting scraping')
for_feed = goodreads_rss(url)
for each in for_feed:
    print(each)

And a sample of the output looks something like this (3 books):

{'author': 'George Orwell', 'isbn': '', 'desc': ' When Animal Farm was first published, Stalinist Russia was seen as its target. Today it is devastatingly clear that wherever and whenever freedom is attacked, under whatever banner, the cutting clarity and savage comedy of George Orwell’s masterpiece have a meaning and message still ferociously fresh.]]>', 'title': 'Animal Farm', 'image': '<img alt="Animal Farm" src="https://i.gr-assets.com/images/S/compressed.photo.goodreads.com/books/1424037542l/7613._SY75_.jpg"/>', 'published': '1945', 'my_rating': '4', 'date_read': '2011/02/22', 'my_review': ''}
{'author': 'Philip Pullman', 'isbn': '0679879242', 'desc': "Can one small girl make a difference in such great and terrible endeavors? This is Lyra: a savage, a schemer, a liar, and as fierce and true a champion as Roger or Asriel could want--but what Lyra doesn't know is that to help one of them will be to betray the other.]]>", 'title': 'The Golden Compass (His Dark Materials, #1)', 'image': '<img alt="The Golden Compass (His Dark Materials, #1)" src="https://i.gr-assets.com/images/S/compressed.photo.goodreads.com/books/1505766203l/119322._SX50_.jpg"/>', 'published': '1995', 'my_rating': '4', 'date_read': '2011/02/22', 'my_review': ''}
{'author': 'J.R.R. Tolkien', 'isbn': '', 'desc': 'Written for J.R.R. Tolkien’s own children, The Hobbit met with instant critical acclaim when it was first published in 1937. Now recognized as a timeless classic, this introduction to the hobbit Bilbo Baggins, the wizard Gandalf, Gollum, and the spectacular world of Middle-earth recounts of the adventures of a reluctant hero, a powerful and dangerous ring, and the cruel dragon Smaug the Magnificent. The text in this 372-page paperback edition is based on that first published in Great Britain by Collins Modern Classics (1998), and includes a note on the text by Douglas A. Anderson (2001).]]>', 'title': 'The Hobbit (The Lord of the Rings, #0)', 'image': '<img alt="The Hobbit (The Lord of the Rings, #0)" src="https://i.gr-assets.com/images/S/compressed.photo.goodreads.com/books/1546071216l/5907._SY75_.jpg"/>', 'published': '1937', 'my_rating': '4', 'date_read': '2011/02/22', 'my_review': ''}

I still would like to get the title, which isn’t its own entry, but each image uses the book title as its alt text. I can use the previously pulled-out “image” string to get it. The image result is a complete HTML image tag and link. It’s regularly structured, so I can split it, then take the second entry (the title) and assign it to a variable. I should not have to worry about titles with quotes being an issue, since the way Goodreads sends the payload, these quotes should already be removed or dealt with in some way, or the image tag itself wouldn’t work.

title = str(image).split('"')[1]
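A quick worked example of that split (the src URL here is made up):

```python
# Everything before the first quote lands at index [0]; the alt text, which
# Goodreads fills with the book title, lands at index [1].
image = '<img alt="The Hobbit (The Lord of the Rings, #0)" src="https://example.com/5907.jpg"/>'
title = image.split('"')[1]
print(title)  # The Hobbit (The Lord of the Rings, #0)
```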

I’m not going to go super deep into the formatting process, for conciseness, but it’s not really that hard, and the code will appear in my final code chunk. Basically, I want the entries to look like little cards, with a thumbnail image, and most of the data pulled into my array formatted out. I’ll mock up something using basic HTML independently, then use that code to build the structure of my post string. It will look something like this when finished, with the variables stuck in place at the relevant points, so the code will loop through and insert all the values:

post_array = []
for each in for_feed:
    post = f'<div class="book-card"> <div> <div class="book-image">' \
           f'{each["image"]}' \
           f'</div> <div class="book-info"> <h3 class="book-title">' \
           f'{each["title"]}' \
           f'</h3> <h4 class="book-author">' \
           f'{each["author"]}' \
           f'</h4> <p class="book-details">' \
           f'Published: {each["published"]} | Pages:{each["pages"]}' \
           f'</p> <p class="book-review">'
    if each["my_rating"] != "0":
        post += f'My Rating: {each["my_rating"]}/5<br>'
    post += f'{each["my_review"]}' \
            f'</div> </div> <div class="book-description"> <p class="book-summary">' \
            f'Description: {each["desc"]}' \
            f'</p> </div> </div>'

    print(post)
    post_array.append(post)

I don’t use all of the classes added, but I did add custom classes to everything; I don’t want to have to go back and modify my code later if I want to add more formatting. I did make a bit of simple CSS that can be added to the WordPress custom CSS (or any CSS actually, if you just wanted to stick this in a webpage) to make some simple cards. They should center in whatever container they get stuck inside; in my case, it’s the WordPress column.

.book-card {
    background-color: #DDDDDD;
    width: 90%;
    margin: 20px auto;
    padding: 10px;
    border: solid 1px;
    min-height: 200px;
}

.book-image {
    float: left;
    margin-bottom: 10px;
    margin-right: 20px;
    width: 100px;
}

.book-image img {
    width: 100%;
    object-fit: cover;
}

.book-info {
    margin: 10px;
}

The end result looks something like this. Unfortunately, the images in the feed are tiny, but that’s ok, it doesn’t need to be huge.

Something I noticed along the way, I had initially been using the “all books” RSS feed, which meant it was giving all books on my profile, not JUST read books. I switched the RSS feed to “read” and things still worked, but “read” only returns a maximum of 200 books. Fortunately, I use shelves based on year for my books, so I can go through each shelf and pull out ALL the books I have read over the years.
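Sketched out, walking the yearly shelves could look something like this. The URL pattern here is a placeholder; the real feed URL carries a user ID and key that I’m leaving out:

```python
# Build one feed URL per yearly shelf; goodreads_rss() from the script above
# could then be called on each of them in turn.
BASE = "https://www.goodreads.com/review/list_rss/USER_ID?shelf={shelf}"

shelf_urls = [BASE.format(shelf=f"{year}-reads") for year in range(2008, 2024)]

# for shelf_url in shelf_urls:
#     book_list.extend(goodreads_rss(shelf_url) or [])
print(shelf_urls[0])
```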

Which leads me to a bit of a split in the process.

At some point, I’ll want to run this code, on a schedule somewhere, and have it check for newly read books (probably based on date), and post those as they are read.

But I also want to pull and post ALL the old reads, by date. These two paths will MOSTLY use the same code. For the new books, I’ll attach it to the “read” list, have it check the feed, then compare the date added in the latest entry, entry [0], to the current date. If it’s new, say, within 24 hours, it’ll post the book as a new post.

Change of plans. Rather than make individual posts, I’m going to just generate a pile of HTML code, and make backdated posts for each previous year. Much simpler and cleaner. I can then run the code once a year and make a new post on December 31st. Goodreads already serves the basic purpose of “book tracking”, I mostly just want an archive version. It’s also cleaner looking in the blog and means I don’t need to run the script all the time or have it even make the posts itself.

For the archive, I’ll pull all entries for each of my yearly shelves, then make a post for all of them, replacing the “published date” on each with the “date added” date. Because I want the entries on my Blog to match the (approximate) finished date.

I think, we’ll see.

I’ve decided to just strike out these changes of plans. After making the post, I noticed the date added is not the date read. I know the yearly shelves are accurate, but the date added is when I added it, probably from some other notes at a later date. Unfortunately, the RSS feed doesn’t have any sort of entry for “Date Read” even though it’s a field you can set as a user, so I just removed it. It’s probably for the best; Goodreads only allows one “Date Read,” so any books I’ve read twice would not be accurate anyway.

This whole new plan of yearly digests also means in the end I can skip step 3 above. I’m not making the script make the posts; I can cut and paste and make them manually. This lets me double-check things. One little bit I found: there was an artifact of some stray brackets in the description. I just added a string slice to chop it off.
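The stray brackets are the tail of the feed’s mangled CDATA wrapper, and it’s always the same three characters, so a slice takes care of it:

```python
# The description text ends in a leftover "]]>" from the feed's CDATA
# wrapper; slicing off the last three characters removes it.
desc = "Written for J.R.R. Tolkien's own children...]]>"
clean = desc[:-3]
print(clean)  # Written for J.R.R. Tolkien's own children...
```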

I guess it’s a good idea to, at some point, mention the second of the two points I wanted to make here: reusing code. Programming is all about reusing code. Your own code, someone else’s code, it doesn’t matter; code is code. There are only so many ways to do the same thing in code, so it’s all going to look generically the same. I picked out bits from that linked article and made them work for what I was doing, and I’ll pick bits from my FreshRSS poster code and clean them up as needed to work here. I’ll reuse 90% of the code here to make two nearly identical scripts: one to run on a schedule, and one to be run several times manually. This also feeds back into point one, knowing what code you need and how to use it. Find the code you need, massage it together into one new block of code, and debug out the kinks. Wash, rinse, repeat.

The output is located here, under the Goodreads category.

Here is the finished complete script:

import requests
from bs4 import BeautifulSoup

url = "GOODREADS URL HERE"
book_list = []

def goodreads_rss(feed_url):
    try:
        r = requests.get(feed_url)
        soup = BeautifulSoup(r.content, features='lxml')
        books = soup.findAll('item')
        for a in books:
            # print(a)
            book_blob = a.find('description').text.split('<br/>')
            book_data = book_blob[0].split('\n                                     ')
            author = a.find('author_name').text
            isbn = a.find('isbn').text
            pages = a.find('num_pages').text
            desc = a.find('book_description').text[:-3]
            image = str(a.find('img'))
            title = image.split('"')[1]
            article = {
                'author': author,
                'isbn': isbn,
                'desc': desc,
                'title': title,
                'image': image,
                'pages': pages,
                'published': book_data[4].split(": ")[1],
                'my_rating': book_data[5].split(": ")[1],
                'date_read': book_data[7].split(": ")[1],
                'my_review': book_data[9].split(": ")[1],
                # Uncomment for debugging
                #'payload': book_data,
            }
            book_list.append(article)
        return book_list
    except Exception as e:
        print('The scraping job failed. See exception: ')
        print(e)

print('Starting scraping')
for_feed = goodreads_rss(url)

post_array = []
for each in for_feed:
    post = f'<div class="book-card"> <div> <div class="book-image">' \
           f'{each["image"]}' \
           f'</div> <div class="book-info"> <h3 class="book-title">' \
           f'{each["title"]}' \
           f'</h3> <h4 class="book-author">' \
           f'{each["author"]}' \
           f'</h4> <p class="book-details">' \
           f'Published: {each["published"]} | Pages:{each["pages"]}' \
           f'</p> <p class="book-review">'
    if each["my_rating"] != "0":
        post += f'My Rating: {each["my_rating"]}/5<br>'
    post += f'{each["my_review"]}' \
            f'</div> </div> <div class="book-description"> <p class="book-summary">' \
            f'Description: {each["desc"]}' \
            f'</p> </div> </div>'

    print(post)
    post_array.append(post)

Next Door is Something Else

Social networking is so bizarre in all the little niches that get built up. I’ve been involved in so many social networks over the years in some way; it’s interesting to watch them rise and fall and evolve, and sometimes incredibly frustrating too. I could definitely do without the TikTokification of LITERALLY EVERY social website. From LiveJournal to Myspace to Facebook, we all hop along chasing easy connections.

What bugs me is just how much they all try to be the same. The real obvious one I already mentioned is that little row of circles at the top of the screen that leads to an endless path of random 15-second video clips. There is also the incredibly annoying “algorithmic feed” that everyone has. People have given up complaining about it these days, but I heard lots of “normal people” complaining about that one. Everything used to just be “everyone you actually follow, in reverse chronological order”. You could scroll down to the last thing you saw and know you were done.

Anyway, it seems weird to have all these social networks, but when they all stay more in their lane, they all serve good, different purposes. Part of it is about mindset. If I want to see photos, I used to go to Flickr, then I started going to Instagram. Now there isn’t really anywhere, because Instagram is all TikTok videos. Threads is kind of more photos, but frankly, I am already tired of and done with Threads. I don’t need another Twitter replacement; I have Mastodon. Mastodon serves its purpose well: follow interesting nerdy types and make slightly shitposty posts.

I still keep up with Facebook, sort of. I’ve mostly used Facebook to follow family members, but in the last few years I’ve started branching out a bit into groups. I don’t really post much there at all though. I had ideas of posting more on Facebook via pages, but nobody ever gets shown pages unless the admin pays for placement as far as I can tell.

I used to be a pretty regular Reddit user, but the API change killed access from 3rd party apps and the default interface is shit, so I just stopped visiting and posting completely. I’m actually surprised how easy it was. I use a lot of Discord, but that has a whole host of issues of its own, like how homogenous every server is and how notifications are impossible to use, ever.

There are also more niche sort of social websites, and forums on specific topics, like the old days before Facebook. I don’t use them much but there are also tracking websites like Letterboxd, Last.fm, Goodreads, etc that all have their own little communities.

One that feels like it doesn’t come up in conversation much is Next Door. I didn’t even know this was a thing until I moved and got a random postcard. It’s essentially Facebook, but geographically focused. You are, by default, part of a “neighborhood”, but almost every post goes to “nearby neighbors”, which as far as I can tell encompasses my entire city. I actually do find it pretty useful, to a point, but I also never just browse it; I just occasionally check notifications that come to my email.

Mostly because the actual interface is absolutely terrible. It’s like an ad every other post, maybe even more frequently. Also, the email notifications tend to be the useful posts, like information from the City Offices; the posts you “miss” are very low quality, at least in my area. They fall into a weird mix of categories.

  • I found someone’s dog/cat, with a photo.
  • Someone broke into my/a car.
  • Someone “suspicious” was walking through the neighborhood.
  • Someone posting a reply to another post as a main post, for some reason, how do you even fucking do that????
  • People advertising local services, usually handyman services or main services or transport services.

The really interesting ones are the “suspicious person” posts. They almost always have some night vision camera footage attached. I am convinced these are, in fact, stealth advertisements. I became even more convinced because I looked up the address for one on Google Maps street view and roamed up and down the street, for like an hour, and could not find any of the houses in the background of the video. It was a short street too, like two blocks long.

The “broke into my car” posts are more often “I left my car unlocked.” Which is tragic, but sorry, lock your car. The service advertisement ones always feel a bit shady, because you know these people probably only take cash and aren’t running things as any sort of business.

Then there are the “replies as posts.” A lot of Next Door really feels like “Old People Facebook” on steroids. Maybe it’s more just “Localized Facebook.” I will admit, I tried to follow a bunch of local news and pages once, and the comment sections were a complete cesspool of idiocy and arguments. Next Door is almost as bad, but not quite, because it’s used less, and the app seems to algorithmically bury posts that start to turn into shitty comment black holes. I imagine people “follow up” less on Next Door as well.

There are still plenty of crazy nutballs, like this guy here, screeching about people being “woke”. I should add for context, this reply, was mad at the OP of a post concerned about traffic safety in an area where they are doing construction. It was a post that could have been made by anyone, regardless of political alignment, expressing actual, legitimate concerns.

Apparently, traffic safety is “woke” now? Hell if I know.

The real thing that this all confirms to me is, I don’t really care to know these random local neighbor folks at least 50% of the time. It’s much easier to make friends on other Social Media where there is a lot more ability to filter for interests and personality.

Blaugust 2023 – Introduction Week

There isn’t a hard schedule or any real requirements for Blaugust, it’s literally just an excuse that people can use to write blog posts. The timing feels really great though considering social media seems to be starting to collapse on itself and maybe people are more primed for “personal blogging”. Week two is “introduction week”. Since the month started last week on Tuesday and my post then pretty much met the “Welcome to Blaugust” theme, I figure today I could go ahead and keep up with that with an introduction post.

I’ve written plenty of introductions in the past, and a lot of this may already be covered on my About Page, if it’s up to date. I tend to get a bit rambling on these sorts of things, so I’ll do my best to keep it down; it just feels like there is a lot to cover, and a lot of it could easily be its own blog post or blog post series.

I am, Josh Miller, aka Ramen Junkie. I like to push the “fun fact” that I have been “Ramen Junkie” for more of my life than I have not been Ramen Junkie. People often shorten it to simply “Ramen”. I’ve been called “Ramen” in real life by real people in person. I originally started using the name online back in the late 90s on Usenet, most often in alt.games.final-fantasy or alt.toys.transformers, but elsewhere on Usenet as well, I was really big on Usenet back in the day. I am also the duly elected “Supreme Dictator for Life” of alt.games.final-fantasy.

I have run into a few other Ramen Junkies over the years, though only two are notable. One is on Something Awful’s forums, and that isn’t me. The more notable one is Ivan Orkin, who runs several actual ramen shops and is RamenJunkie on Instagram, and I think Xbox. I say Xbox because when I tried to reset the password, the domain attached was clearly “ramenjunkie.com”, and dude has that domain. If I ever get out to New York, I hope to go to his place there and maybe meet him. Just for fun.

I was going to move on, but I wanted to also add that the name has several origin stories, like a Marvel hero. One: ramen is “cheap hacker food”, so being a ramen junkie seemed cool at the time. I also do like ramen, and regularly make it for meals, both instant (of a wide variety; I like trying new ones) and more homemade, with custom broth and noodles and ingredients. The other (true) origin is that I wanted a slightly more “trollish” name on Usenet. At the time I had been posting as Lord Chaos, and part of why I picked “Ramen” was that many people who were not Japanese would post using Japanese names from anime. Ramen was an intentionally shitty take on usernames like “Shinji” and “Tenchi” and “Usagi”.

Anyway, it’s just sort of stuck.

So now, moving on to the blog itself. I’ve always enjoyed just, writing. My first real website was on Geocities, and though it didn’t have a proper “blog engine”, it had a “blog format” made with manually coded and edited HTML pages. For a while I was using SHTML, which had a mechanism to embed a header and footer page so things could be universal across the site. That first site was The Chaos Xone (The X is for Xtreem!), because in the 90s, Xs were cool. Unlike some folks, I eventually moved on from the cool X. For a while I also had a second, fairly popular website called “The Geocities Pokemon Center”, with all you needed to know about Pokemon Red and Blue.

From Geocities I went to Livejournal, and the name “Lameazoid“, a name chosen because it sounded a bit like “Freakazoid” but also had a sort of retro and self-deprecating undertone to it. For a while I also had a site on my college’s web space, since they provided hosting to students; this was sort of the start of a split between having a personal blog and a fun blog. Eventually I landed on WordPress.com, and later on self-hosting WordPress on a server of my own with its own domain name, at Lameazoid.com.

Lameazoid kind of stalled out for me, partially because I kept trying to make it more structured. I wanted someplace to put “personal blogs” that didn’t quite fit the “theme” of Lameazoid. At one point I had all my blogging there, but basically, I wanted a personal blog again, where nothing mattered and I could just, post what I wanted. That ended up becoming this blog, [Blogging Intensifies], which has stuck pretty well. It’s a play on the meme [XXXXX Intensifies], which is why it gets stylized with the brackets. I sometimes abbreviate it as [BI].

I’ve always had side blogs, and I have a bad habit of starting up new blogs and then just, folding the content elsewhere and deleting the blog. Sometimes I used WordPress, and sometimes I would try Blogger. Here is a list, of what I can remember, with the name, general content, and where the posts ended up, if anywhere.

  • Livejournal – General posting, Games, Reviews, etc – Ended up all over, though mostly archived due to low quality
  • Snapshot of the Mind – Personal blog on WordPress – Archived or maybe on [BI]
  • Pen to Paper – A writing-focused blog on WordPress – Archived mostly.
  • My BLARG – Shitposting blog on WordPress – Archived
  • OSAF (Opinion Stated as Fact) – A political commentary blog – Most of this has been archived because those opinions were shitty

I know there are more. I probably have a list somewhere.

I suppose more important for an introduction post is interests. It would almost be easier to list what I am NOT interested in (sportsball) than what I am interested in. Lameazoid has always primarily been about my interest in video games and toys, with a side of comics and movies sprinkled in. My main hobbies are playing video games, of all sorts, and collecting toys, of all sorts. I also like to just create, in general, which is where a lot of the other content comes from, and these days, gets thrown here, on [BI]. Mostly these days I create code, but I also used to write stories and draw some, and I actually kind of want to get back to that at some point. I imagine I’m a bit rusty; it’s been probably 20 years since I did any of that seriously. I also like taking photos, and like to think I am ok-ish at it.

I also have always enjoyed music, though more recently, I’ve actually been more openly EXPLORING music, which is why currently this blog gets a fair amount of music-related content. Because it’s one of my current passion interests.

I also enjoy learning and trying new things. I learn to code (constantly). I learn languages (Spanish and Norwegian currently). I am trying to learn the piano (and absolutely 1000% failing at actually trying). I want to get back into drawing and learn to get better at it. This is all content that sometimes I will talk about here, because mostly, as I mentioned, it’s a place where there are no stakes and I can just write about whatever.

It occurs to me that I have written an awful lot about myself, but not a lot about ME. I don’t often share these details, but for the sake of completionism: I’m (as of this post) 43 years old. I live in a small city in the middle of Illinois. I’m married, going on 16 years now, and I have 3 stepkids who are all adults now, though they still live with us. They all have some sort of (varying) health issues, and my wife used to blog about it herself on her own site. I work in technology for a large company, and have worked on the back-end technical side of television for around 18 years now. I have a degree in Mechanical Engineering, though I had considered Journalism.

I am generally a bit of a “jack of all trades” type: pretty good at a lot, master of little, that sort of thing.

Music Monday – Featuring Aurora Edition

So I’ve been doing album write-ups on Fridays, but sometimes I do still listen to singles and one-offs and songs not on albums. So here’s a new series, starting here in Blaugust, where instead of doing albums, I just talk about videos and songs I enjoy. I’m calling it “Music Monday” because, while it’s not very original, I am a sucker for alliteration.

The Chemical Brothers – Eve of Destruction

Look at me, cheating already by just doing Aurora videos (well, not all of them). I once made a comment on the Aurora Discord that my “favorite Aurora song isn’t even an Aurora song.” That’s not really true, though this track is pretty awesome. This is definitely my favorite Aurora video. It’s possibly my favorite video in general, actually; there is just so much appeal here.

There’s Aurora, so that’s a bonus. But it also has this amazing cheesy Super Sentai thing going on. What is Sentai? Well, it’s… what’s in this video. It’s a genre that is popular in and originated in Japan. Though the easiest example for most people is Power Rangers, which is a remixed and Americanized version of the series Kyōryū Sentai Zyuranger.

The Chemical Brothers – Wide Open

This is such an interesting and fascinating video to watch, though I also do like the song. It’s very simple on the surface: a long take of a young woman dancing around a room. But the vanishing body parts make for a really neat effect. The effect itself is actually pretty complex, and the behind-the-scenes footage mentions they created special software to achieve it. Essentially, they scanned her body to create a matching 3D model. Then she does the long-take dance. The software extrapolates the entire scene out using LIDAR so they can sort of recreate everything as a 3D model, which allows for the body replacement effect that matches her movements.

It’s also kind of spooky when the video passes by the mirror and she’s watching herself, though I’m not really sure why.

Qing Feng Wu, AURORA – Storm (English Version)

I actually keep forgetting this collaboration exists, which is a shame, because it’s really good. Qing Feng Wu fits well with Aurora, too, better than her previous collab with Sub Urban. I have no idea if Qing Feng Wu’s music is normally this stylistic (a brief sampling suggests it is), but the ethereal nature of his voice and the natural themes and feel of the video are definitely right up Aurora’s alley.

There is also a version with some lyrics in Chinese.