Ramen Junkie

Amazon Music Now Sucks Completely

Amazon Echo, probably the easiest and simplest way I have to listen to music, has just completely shat itself and it’s so incredibly frustrating.  I don’t understand why companies are so dead set on ruining every product they make (oh wait, I do, something something endless growth bullshit, but that’s a rant for elsewhere).  I don’t care for à la carte, subscription-based music at all.  Artists generally don’t get a big enough cut, and I like being able to listen to my music years to decades later, without paying monthly, endlessly.  I still own every CD I have ever bought.  Granted, I do buy some of my CDs used, so the artist still gets nothing from those, which I guess is mildly hypocritical.  Here’s an article about the change.

https://variety.com/2022/digital/news/amazon-music-prime-100-million-songs-shuffle-mode-podcasts-ad-free-1235416844/

My point is, I buy music digitally.  Hell, I bought a couple of albums just last Friday for #BandCampFriday.  I have literally hundreds of albums and thousands of tracks PURCHASED through Amazon Music.  I’ve been buying music through Amazon for over a decade.  More recently I have shifted away a bit because I wanted FLAC options, but when I can’t find FLAC I still buy from Amazon, because the quality offered is still a pretty good MP3.  This made the Echo and Alexa super great, because I could just tell it, “Alexa, play Aurora from my library,” and it would play my tracks, or whatever artist or album I wanted.

I even have an echo hooked to my sound system down in the basement so it sounds better.

But with this recent change, that isn’t even an option for music I have purchased.  Right now, as I type this, I played my album “A Different Kind of Human,” and it just randomly inserted some other track.  It’s also playing the tracks out of order.  I even tried playing it from the Alexa app on my phone.  Maybe it works better in the Amazon Music app, but I had to dump that because it asked EVERY SINGLE TIME I opened it to subscribe to Music Unlimited.

Which is the goal of this new Shuffle Play idiocy.  If you pay up for a subscription, you get all the old features back.  I have no interest in that.  I own my music, and I already pay too much for Amazon Prime, which is going up AGAIN next year.  I don’t even have it on auto-renew anymore, because it’s like $150/year now, twice as much as when I started subscribing.  I really just want to pay $60/year or whatever for easier shipping.  I don’t care about all this extra nonsense.

I wouldn’t even mind this annoying change as much if it just worked with music I’ve purchased, but it doesn’t.

Searching around the web for answers or suggestions on fixing this shows that a lot of people are upset about it.  Not just because of situations like mine, where you can’t listen to your own music, but because things like custom routines and playlists are completely broken as well.  A couple that stood out to me were people who had routines set up for their kids to go to sleep listening to particular songs using playlists and timers.  Another was how it completely breaks soundtracks to musicals, and I imagine it breaks other content that needs linear tracks that run together, like stand-up comedy and dance mixes.

It all feels incredibly poorly thought out and frankly, it doesn’t drive me to upgrade my subscription (which is the point) but makes me want to find a different smart speaker solution that works better with my local network shares. If I were going to subscribe to music, it would be Spotify or Tidal over Amazon.

FreshRSS and RSS Feed Posts

Keen observers (ha ha ha, no one reads this) might have noticed that a few posts of links showed up in the feed.  These are basically stories I read in my RSS reader that I found interesting and wanted to share, or at least keep track of.  The posts as of now are a little ugly, and I’ll probably clean up the formatting over time, but I wanted to go ahead and write a bit about the process.  I’ll have the code on Github at some point.

As for the motivation: firstly, this is something I’ve wanted to have on my blog for a while.  Like, a long while.  I might even see if there are ways to better split up the links by topic later.  A fair number of blogs I subscribe to have these sorts of link digest posts, and I’ve always liked the idea.  It’s also good for personal reference as to when I may have read something.  It is limited, since it only comes from my RSS reader.

Speaking of my RSS reader: I’ve moved on from TinyTinyRSS, for a few reasons.  One, the interface is a little meh, honestly.  Maybe the newer version is better, but it’s only available in Docker, and Docker is such a PITA to use.  Also, while looking for alternatives, it sounds like the folks who make TT-RSS are kind of a bunch of gatekeeping jerk types, and I’d rather not support that.  I also found the need to keep the update daemon running with Screen to be a pain.  So I’ve moved over to FreshRSS, which I just run locally on a Raspberry Pi.  I may move it to a publicly accessible machine at some point, but I am not entirely convinced that TT-RSS wasn’t the entry point for my previous server malware woes.

So, like TT-RSS, Fresh RSS has a way to get an RSS feed out of your Favorited posts.  In the past I’ve used tools like IFTTT to automate posting these links around, but I don’t use IFTTT anymore for reasons I’m not going into.  Fortunately, I’ve been working to become a pretty good Python coder for the last month or so.  So instead I wrote a script.  

It’s not even a particularly complicated script.  There are only two things it really needs to do: get new articles, then post them to WordPress.  Since the script runs locally, on the same Raspberry Pi even, it can easily reach and pull the RSS feed.  One nice thing I noticed with FreshRSS: the feed includes a time interval, so getting just the new posts was super simple, because the interval is just “24” for “24 hours”.  The script will eventually run as a cron job at the exact same time daily.  Anyway, after pulling the RSS, the entries are already in an easily usable dictionary, which gets fed into the construction of the WordPress post.

import feedparser

def get_feed(feed_url):
    # Pull and parse the FreshRSS favorites feed.
    NewsFeed = feedparser.parse(feed_url)
    return NewsFeed

The posting part was pretty easy as well.  WordPress has an API, and Python has a library that can use that API.  It just needs some login information and a post payload to send.

from wordpress_xmlrpc import Client, WordPressPost
from wordpress_xmlrpc.methods.posts import NewPost

def make_post(NewsFeed):
    # wp_url, wp_user, wp_pass, and cur_date are defined elsewhere in the script.
    wp = Client(f'https://{wp_url}/xmlrpc.php', wp_user, wp_pass)
    post = WordPressPost()
    post.title = f"{cur_date} - Link List"
    post.terms_names = {'category': ['Link List'], 'post_tag': ['links', 'FreshRSS']}
    post.content = f"<p>Blogging Intensifies Link List for {cur_date}</p>"
    for each in NewsFeed.entries:
        # Trim the published date down and link each favorited entry.
        post.content += f'<p>{each.published[5:-15].replace(" ", "-")} - <a href="{each.links[0].href}">{each.title}</a></p>'
    post.post_status = 'publish'
    wp.call(NewPost(post))

The trickiest part was formatting the date a bit prettier.  I mentioned cleaning up the formatting; I’m thinking maybe a simple invisible table, so the date and the links don’t wrap oddly like they do now.  I also added a check so that if there are no new favorited posts, it skips making a post.  Otherwise I’d end up with empty posts on days I forget to check my feed reader.
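That check is only a couple of lines; a minimal sketch, assuming the two functions above and a feed_url variable:

feed = get_feed(feed_url)
if feed.entries:
    make_post(feed)
else:
    print("No new favorites today, skipping the post.")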

While writing the script, at first I was just outputting a text copy of the post to the console until I was satisfied.  Eventually, I pushed out a real post, then verified that things worked.  The next day was a straight test: open the project, run it again.  The third day, I copied the files, installed the libraries needed, and posted from the Pi.  Phase 4 of this will be to set up cron to run it automatically.  If that works, then it will certainly “just run” for the foreseeable future.

100 Days of Python, Projects 54-57 #100DaysofCode

Back to web development again, but with a different twist this time.  Instead of scraping things, we’re learning Flask, to produce little Python-based websites.  In doing these exercises, I find I am kind of wondering why one would use Python over, say, Apache or NGINX or even IIS.  I can sort of see where it’s useful, and maybe later we will get to more of its usefulness.  My primary issue is that the HTML part of it ends up being VERY specifically Flask-based.  Flask looks for images and CSS in specific folders, and if you use any sort of variables, they all get passed to the HTML in a very particular way.
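For reference, a minimal sketch of those conventions (the file names are just examples): static files like CSS and images have to live in ./static, templates in ./templates, and variables get handed to the HTML through render_template().

from flask import Flask, render_template

app = Flask(__name__)

@app.route("/")
def home():
    # "name" becomes {{ name }} inside templates/index.html
    return render_template("index.html", name="Josh")

if __name__ == "__main__":
    app.run(debug=True)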

I had considered that it might be useful for sharing some of the code I have written through my web server, but in my research, things like Tkinter and Turtle don’t work at all through Flask.  I was kind of hoping it was smart enough to produce little Browser pop ups or something to render the graphics out.

This section isn’t super complex so far, but it wraps up the Intermediate+ section, with a little interlude for Bootstrap in between, so I figure it’s a good little chunk to keep in its own write-up.

As usual, the code is all on Github.

Day 54 – Intro to Flask

There was literally no project today.  We created a basic “Hello World” Flask server, then created some decorator functions.  It was interesting, but not really that exciting to write up.  I do somewhat question the usefulness of a decorator versus just having a function that takes an input and modifies it directly.
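For illustration, a quick toy sketch of the pattern (my own example, not the course code):

# A decorator wraps a function to add behavior without editing the
# function itself.
def shout(func):
    def wrapper(*args, **kwargs):
        return func(*args, **kwargs).upper()
    return wrapper

@shout
def greet(name):
    return f"hello, {name}"

print(greet("world"))  # prints "HELLO, WORLD"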

Day 55 – Higher Lower Game Returns

The Day 55 Lessons were a bit better.  We covered Decorators a bit more and how to handle URLs in Flask, which brings me back to the “Is this better” I mentioned in the opening, since once again, the code will get weird to use outside of Flask.

I had a lot of fun with the project though.  It’s a web version of the “Higher-Lower” game from way back on Day 14.  You pick a number, it tells you if it’s higher or lower, only with web pages.  It was essentially a way to learn about using dynamic URLs in Flask, but spiced up for fun.  I added a nav bar to mine so the user didn’t have to type a URL and could just click the next number to guess.  I also used a bunch of silly GIFs from my favorite musicians instead of cat GIFs on each page.
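The dynamic URL part looks roughly like this (a simplified sketch, not my exact project code):

from flask import Flask
import random

app = Flask(__name__)
answer = random.randint(0, 9)

@app.route("/<int:guess>")
def check(guess):
    # The number typed (or clicked) in the URL arrives as "guess".
    if guess > answer:
        return "<h1>Too high, try again!</h1>"
    elif guess < answer:
        return "<h1>Too low, try again!</h1>"
    return "<h1>You found me!</h1>"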

It’s kind of useless, but it was fun to build.

Day 56 – Personal Website

This day was mostly about how to quickly import existing code to Flask.  It involved a couple of practice projects and a “real” project.  The first Practice was taking the Lesson 41-44 website and importing it to Flask.

The second practice was to use someone else’s template and import it to Flask, as well as modifying and simplifying that code.

The final project was to build a simple “Name Card” website with some social links.  Essentially, it was a repeat of the second practice, but actually replacing images and information.  I kind of prefer the previously made CV website, and it’s easier to host on the web, so I’m going to stick with that for now.

Day 57 – Blog Capstone Project Part 1

This project picks up in Day 59 with the start of the Advanced Section of the course.  The basic idea here was to build a simple blog interface that would read some generic JSON Posts and display them, and then let users click into each blog post to read more.

I’m particularly proud of my result, which only uses one HTML file that varies depending on whether or not the user clicked on a blog post.  I feel like it was a pretty slick solution.  The starter files also included a file to make a “Post class”.  Using this class was not part of the assignment, but I suspect it will come up later, so I went ahead and built it, though I didn’t use it to read the blog posts.
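The gist of the one-template trick, as a rough sketch (the post data and names here are stand-ins, not the course’s JSON):

from flask import Flask, render_template

app = Flask(__name__)
# Stand-in for the generic JSON posts the course provides.
posts = [
    {"id": 1, "title": "First Post", "body": "Hello."},
    {"id": 2, "title": "Second Post", "body": "More words."},
]

@app.route("/")
@app.route("/post/<int:post_id>")
def blog(post_id=None):
    # Same index.html either way: the full list, or just the clicked post.
    shown = [p for p in posts if p["id"] == post_id] if post_id else posts
    return render_template("index.html", posts=shown, single=bool(post_id))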

If this comes out alright, I may actually use it somewhere, I’ve been looking for something to put on Joshmiller.net.  Though I also don’t really NEED another Blog outlet.  I barely maintain the one I regularly use now.

Sorting Out all My Writing

Coding Python isn’t the only project I’ve been working on recently, though it IS the major one.  Another project I’ve been working on, at least tangential to “modernizing how I code,” is organizing all of my writing.  I write a LOT.  I sometimes list “writing” as a hobby, but I almost never list it as a “primary hobby,” even though it’s arguably the hobby I have done the longest, even longer than collecting toys, and one I would like to think I do pretty well.  Ok, no, scratch that, I’ve been a “gamer” since before I could really write.  Actually, it seems like all of my “major hobbies” started when I was like 5-10, so I guess those “formative years” really do matter.  My first programming was on the family’s old Franklin PC with two 5.25″ floppy drives, writing BASIC that my dad had taught me.  He had been going to college for Computer Science at the time.

Anyway, writing.

I write, a lot.  I write about all sorts of topics.  Sometimes I write technical write-ups, sometimes I write (purposely) shitty Final Fantasy VII fan fiction.  I write casual blog posts about music and movies and toys, and I write detailed instructions for work or FAQs for video games.  They aren’t all “winners,” but I have gotten a lot of compliments over the years for my writing style and methods.  I also save everything.  I mean, literally EVERYTHING I create.  There are a few things I no longer have, and I still think about them sometimes and wish I had copies.  A few years ago I even started transcribing some of my old paper journals and stories into digital text.

The end result is that I have a lot of files in a lot of formats.  Some are text files, some are Word files, some are exported XML archive files.  A few are PDF-based exports, as well as some old “Windows Live Writer” files.

As part of my personal journey to “level up” a bit on my computer skills (which are already pretty great), I have been getting more accustomed to using Markdown.  Markdown is essentially “fancy text files”: plain text files with special symbols inserted occasionally to make things look prettier in a Markdown reader.  This means they are very compact in size and can still be read by even the most basic reader (albeit with the occasional odd symbol inserted).

Most of this effort involves a LOT of copying and pasting.  I’ve converted a bunch of Word docs I had over to Markdown files.  Text documents aren’t generally huge to start with, but the Markdown versions are sometimes a quarter of the file size.  When we are talking hundreds to thousands of files, that is significant savings.  So far, I’ve been skipping reviews if they have embedded images, but I already have those images saved elsewhere, so I may revisit that.

This also means finally sorting through some other “to sort” boxes.  For example, for a while I was posting blog posts with Microsoft’s now-discontinued “Windows Live Writer”.  The shitty part is, it used a proprietary format that even Word can’t open.  Fortunately, there is an open source alternative, “Open Live Writer”.  I don’t use it to post, but I can open those old Live Writer files and convert them to useful Markdown files.

One fun thing I did was export all of my Reddit Posts, and pull out anything over 500 characters as a “Journal Entry”.

Another source is old WordPress exports.  I have used my newfound l33t Pythonista skills to build a sweet little script that takes a WordPress XML export and parses through it for dates, titles, and content.  Next, it cleans up the post content a bit (it’s not perfect, sadly) and spits it all out as a series of files in the format I want.  This script could easily be modified to work with other similar data exports, like Reddit’s.

That code can be found over on Github. It’s probably buggy, but it works for the most part.
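The core of it looks roughly like this, as a sketch rather than the actual script; the filename is made up, and the exact namespaces are assumptions since WordPress export versions vary:

from xml.etree import ElementTree as ET

# WordPress exports are RSS-style XML where every post is an <item>.
NS = {
    "content": "http://purl.org/rss/1.0/modules/content/",
    "wp": "http://wordpress.org/export/1.2/",
}

tree = ET.parse("wordpress-export.xml")
for item in tree.getroot().iter("item"):
    title = item.findtext("title", default="Untitled")
    date = item.findtext("wp:post_date", default="", namespaces=NS)[:10]
    body = item.findtext("content:encoded", default="", namespaces=NS)
    # Write each post out as "YYYY.MM.DD - TITLE.md".
    with open(f"{date.replace('-', '.')} - {title}.md", "w", encoding="utf-8") as f:
        f.write(body)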

Which brings up sorting.  I have posted a few times about digital organization, and I’ve gotten the text down to a science as well.  There’s a folder called “Journal” in my OneDrive, which syncs to several PCs and my NAS.  Inside, it’s sorted by year, and inside each year are files named YYYY.MM.DD – TOPIC.md.  I’ve also incorporated this into my blogging workflow, so partially written posts in the current year get an X_ added to the front; they all sort to the bottom, but I still have an idea of when I had the idea.

This whole new system also allows me an easy way to just Journal occasionally.  One thing I’ve been trying to work on is that “not everything has to be a blog post”.  Sometimes it’s good to just, write, for myself, date it, and spit it out.

It’s healthy to get those thoughts out sometimes. For example, would you like to know how many times I’ve randomly bitched about the show Glee over the past 10-15 years?  Because it’s more than is probably healthy.

Anyway, this project is still a work in progress, but I’ve made a LOT of progress and I’m pretty happy with how it’s been going.

100 Days of Python, Projects 51-53 #100DaysofCode

Here we are now with a few more automated bot tasks.  It’s been a fun series of lessons, though I enjoyed using Beautiful Soup more than Selenium.  Selenium runs into too many anti-bot measures on the web to be truly effective.  I mean, it’s definitely a useful tool, but in my experience, it’s not reliable enough.  BS seems to be much more effective, though it can’t really interact with pages.

In the long run, I think I am more just irritated by “clever bullshit” on web pages that makes both pieces of software a pain to work with.  Take Instagram: none of the classes or IDs are anything but jumbled characters.  The code feels like it was written by a machine, and it probably was.

Also, this round is a bit shorter than before because the course is veering off into a new direction with Flask Apps, so it seemed appropriate to wrap things up on the Automation Section of the Projects.

Day 51 – Twitter Speed Complainer

This project is great, because this is something I have tried to run from other people’s code but it never seems to actually work.  Now, I just have my own code to run.

EZ Mode.

It will need something with a desktop to run on, but I have a whole Windows PC for running random shit and a mess of Raspberry Pis.  I don’t even care about the complaining part; in fact, I would rather not, I just want to track Internet speed.  I may even change this to push to a spreadsheet or database or something later.

But for now, it Tweets.

So, the Speed test part was easy, though I used SpeedOf.me instead of SpeedTest.net, because SpeedTest.net supposedly will give dodgy numbers by partnering with ISPs and putting servers in ISP data centers.  I just prefer SpeedOf.me mostly, it’s cleaner.

The Twitter part was tricky… ish…  So, a common problem I keep running into with Selenium is that sites think my Bot Programs are Bots.

I’m so offended for my Bots, accusing them of being Bots.  They run into captchas and email verifications and just flat out fail to log in or load half the time.  It makes sense; captchas and email verification exist 100% to stop people from abusing things like Selenium.  Fortunately, Twitter Bots are one thing I do have a fair amount of experience with.  I wrote one ages ago that just tweeted the uptime of the server.  I wrote one script that would pull lines from a text file and tweet them out at an interval.  I have another Python-based bot that tweets images.  What do these Bots do differently?  They are 100% Bots, running against the proper Twitter Bot API, and labeled as such.

So, since Selenium was being a pain to deal with using Twitter, I pulled out my Image Posting Bot code and scavenged out the pieces I needed, which was about 4 or 5 lines of code.  It uses a Python Library called Tweepy.  In order to use Tweepy, you have to use the Twitter Developer console to get API Keys, which I already had.
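The guts of it look something like this; a minimal sketch, where the keys, tokens, and speed numbers are placeholders for the real values from the Developer console and the speed test:

import tweepy

# Placeholders; the real values come from the Twitter Developer console.
consumer_key = "YOUR_CONSUMER_KEY"
consumer_secret = "YOUR_CONSUMER_SECRET"
access_token = "YOUR_ACCESS_TOKEN"
access_token_secret = "YOUR_ACCESS_TOKEN_SECRET"

auth = tweepy.OAuthHandler(consumer_key, consumer_secret)
auth.set_access_token(access_token, access_token_secret)
api = tweepy.API(auth)

# Stand-in numbers; these would come from the speed test step.
download, upload = 100.5, 10.2
api.update_status(f"Current speeds -- down: {download} Mbps, up: {upload} Mbps")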

Day 52 – Instagram Follower

Another almost useful project.  For this project, you open up Instagram and log in, then it opens an account of your choosing and follows anyone following that account.

Now, while I have a love/hate relationship with Instagram, I am not super interested in cluttering up my feed with thousands of accounts.  So, while I did complete the task, I set it up to ONLY follow the first 10 accounts.   I also added a check to make sure I wasn’t already following said account.
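In sketch form, the loop with both tweaks looks something like this (assuming a logged-in Selenium driver with the followers dialog open; Instagram’s real selectors are obfuscated, so checking the button text is the simple route):

from selenium.webdriver.common.by import By

followed = 0
for button in driver.find_elements(By.TAG_NAME, "button"):
    if button.text == "Follow":      # "Following" would mean I already follow them
        button.click()
        followed += 1
    if followed >= 10:               # ONLY follow the first 10 accounts
        break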

I may revisit this again later with other, more useful ways to interact with IG.  Maybe instead of following random people from another account, it auto follows back.  Or maybe it goes through “suggested” and looks for keywords in a person’s profile and follows them.

Day 53 – Zillow Data Aggregator Capstone

The final project for this section combines Selenium and Beautiful Soup to aggregate real estate listings from Zillow into a Google Spreadsheet doc.  I quite liked this one actually; it’s straightforward and relatively harmless.  I did run into an issue where Zillow started thinking I was a Bot, but by that point, I knew I could successfully scrape what I needed, so I commented out the Zillow call and replaced it with a file load using an HTML snapshot of the Zillow page.

This was very easy to slide in as a fix because I was already pulling the source code using Selenium into a variable, then passing that variable to Beautiful Soup.  It was simply a matter of passing the file read instead.
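The swap is basically this (a sketch; the snapshot filename is made up):

from bs4 import BeautifulSoup

# driver.get(ZILLOW_URL)            # the live call, now commented out
# page_source = driver.page_source
with open("zillow_snapshot.html", encoding="utf-8") as f:
    page_source = f.read()

soup = BeautifulSoup(page_source, "html.parser")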

Scraping the data itself was a bit tricky.  Zillow seems to do some funny dynamic loading, so my numbers of listings, addresses, and prices didn’t always match.  To solve this, I added a line that just uses whichever count is the smallest.  They seem to capture in order, but eventually some fell off, so if I got 8 prices and 10 addresses, I just took the first 8 of each.
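That fix boils down to a couple of lines (the list names here are my guesses at the script’s variables):

# Trim all the lists to the shortest one so the entries line up.
count = min(len(prices), len(addresses), len(links))
prices, addresses, links = prices[:count], addresses[:count], links[:count]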

Another issue I came across: the URLs for each listing aren’t always full URLs.  Sometimes you have to add “https://www.zillow.com” to the front.  It wasn’t a hard fix:

if "zillow" not in link:
    link = "https://www.zillow.com" + link

There was also an issue with the links, because each link shows up twice in the scrape I was using.  A quick search gave a clever solution to remove duplicates: convert the list to a dictionary and back to a list.  A dictionary can’t have duplicate keys, so the duplicates get discarded on the way in, and then the result just gets flattened back out into a list.
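In code, it’s the classic one-liner (assuming the links live in a list called links):

# dict keys must be unique (and keep insertion order), so this drops
# the duplicate links in one pass.
links = list(dict.fromkeys(links))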

Lastly was the form entry itself.  The data entry uses a method I’ve used before for entering data into Google remotely: Google Forms.  Essentially, Selenium fills out and submits the form over and over for each result.  I had a bit of an issue here because the input boxes use funny tags and are hard to target directly, and then my XPATHs were not working properly.  I fixed this by adding two things: one, I had Selenium open the browser maximized, to make sure everything loaded; second, I added more sleep() delays here and there, to make sure things loaded all the way.
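Put together, the submission loop looks roughly like this; the form URL and XPATH constants are placeholders for the real values you get by inspecting the Google Form, and the driver and lists come from the scraping step above:

import time
from selenium.webdriver.common.by import By

FORM_URL = "YOUR_GOOGLE_FORM_URL"        # placeholder
ADDRESS_XPATH = "YOUR_ADDRESS_XPATH"     # placeholders from inspecting the form
PRICE_XPATH = "YOUR_PRICE_XPATH"
LINK_XPATH = "YOUR_LINK_XPATH"
SUBMIT_XPATH = "YOUR_SUBMIT_XPATH"

for address, price, link in zip(addresses, prices, links):
    driver.get(FORM_URL)
    time.sleep(2)  # give the form time to load all the way
    driver.find_element(By.XPATH, ADDRESS_XPATH).send_keys(address)
    driver.find_element(By.XPATH, PRICE_XPATH).send_keys(price)
    driver.find_element(By.XPATH, LINK_XPATH).send_keys(link)
    driver.find_element(By.XPATH, SUBMIT_XPATH).click()
    time.sleep(2)  # and time for the submit before the next pass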

One thing I have found working with Selenium, you can never have too many sleep()s.  The web can be a slow place.