Coding

Code Project: Fresh RSS to WordPress Digest

I briefly mentioned this project when I wrote about moving from TinyTinyRSS to FreshRSS. It has become a bit of an evolving, ongoing project, however, so I’ve decided to catalogue it on its own page. This little script worked out much better than I expected; I’ve modified it a bit over time and have ideas to modify it even more going forward. To start off, the code can be found here in this GitHub Gist.

I’ve left in a bit of commented-out code that I might use later for troubleshooting or adding additional features. The general gist of the code: it pulls the last 24 hours’ worth of news stories I have favorited in my FreshRSS install, formats them into a digest, and posts it here on this blog. The digests get sorted into their own category, and you can find them here.
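The actual code lives in the Gist above, but the general flow looks something like this minimal sketch. The favourites feed URL, WordPress credentials, and category ID here are all placeholders rather than my real setup, and the real script differs in the details; the idea is just to read the feed with feedparser and publish the digest through the WordPress REST API.

import time
import feedparser
import requests

# Placeholder settings -- swap in a real FreshRSS favourites feed and WordPress credentials
FEED_URL = "https://freshrss.example.com/favourites.rss"  # shared favourites feed (placeholder URL)
WP_API = "https://blog.example.com/wp-json/wp/v2/posts"
WP_USER = "username"
WP_APP_PASSWORD = "xxxx xxxx xxxx xxxx"
DIGEST_CATEGORY_ID = 42  # ID of the digest category on the blog

feed = feedparser.parse(FEED_URL)
cutoff = time.time() - 24 * 60 * 60  # only keep the last 24 hours

items = []
for entry in feed.entries:
    published = getattr(entry, "published_parsed", None)
    if published and time.mktime(published) >= cutoff:
        items.append(f'<li><a href="{entry.link}">{entry.title}</a></li>')

if items:
    post = {
        "title": time.strftime("Link Digest for %Y-%m-%d"),
        "content": "<ul>\n" + "\n".join(items) + "\n</ul>",
        "status": "publish",
        "categories": [DIGEST_CATEGORY_ID],
    }
    requests.post(WP_API, json=post, auth=(WP_USER, WP_APP_PASSWORD))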

This is basically something I’ve seen others do that I’ve wanted to do for a while. It’s also partially just for my own reference: a log of everything I found interesting on a particular day. Others may or may not find it interesting, which is why I also filter that category out of the home page feed.

Originally, it was just a list of URLs and titles. I realized that it might be useful to have SOME idea what a link is about before clicking it, so I have been playing with the summary as well. My first attempt was a bit dodgy because it posted the entire article as the summary. Currently, it just arbitrarily chops the text off at a few hundred characters. I want to improve it even further at some point by pushing it through some summarizing AI to get an actual proper summary, but I have not gotten there yet.
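The truncation itself is nothing fancy; it amounts to something like the sketch below. The 300-character cutoff and the strip_tags helper are just illustrative here, not the exact code from the Gist.

import re

def strip_tags(html):
    # Crude tag removal, good enough for a short preview snippet
    return re.sub(r"<[^>]+>", "", html)

def make_summary(html, limit=300):
    text = strip_tags(html).strip()
    if len(text) <= limit:
        return text
    # Chop at the last space before the limit so a word isn't cut in half
    return text[:limit].rsplit(" ", 1)[0] + "..."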

There are a few other things I want to add, but I’m not sure they’re easily possible. Firstly, I would love to be able to parse some sort of categories into the digest, so that, say, all the “Video Game” links are together and all the Music links are together. FreshRSS has categories, but they don’t seem to show up in the feed anywhere.

This would also allow me to split these posts between this blog and my other blog, Lameazoid. I do share interesting video game news from FreshRSS, but I mostly don’t share toy-related articles, because they feel a little TOO FAR out there for what I want to post to this blog. If there were a way to get the categories, I could easily have the script split the feed by category and post a digest to each blog.

I also wish there were a way to occasionally add my own notes and commentary. TinyTinyRSS had a notes feature, though I don’t think it showed up in the feed either, and I am not sure if FreshRSS has one as well. I should probably at least suggest these features to the creators on GitHub, or maybe get really adventurous and create my own plug-ins for FreshRSS to accomplish these tasks.

Code Project: VLC Portable Playlist to Text Dump

It’s kind of funny how one post can lead to another sometimes. This one is pretty basic, but it also shows a bit of how useful I find knowing my way around computer systems to be. Yesterday I posted about my little annual music playlists, and as part of that, I wanted to actually post the playlist. I am pretty sure there is a fairly universal “playlist file type” out there, and since VLC is open source, I had assumed that VLC on my phone stored the playlists somewhere in playlist files.

That assumption was wrong; it uses a .db file, a little portable database. There is an option to dump this file to the root of the phone, presumably for backup purposes, but it’s also useful for just browsing it like I am doing here. The file itself can be opened and browsed with an SQLite database browser (e.g., DB Browser for SQLite). Inside are standard database tables for tracks, artists, and playlists.

Fortunately, I have had some experience dealing with database queries, so I set about building what was needed to get the data I wanted. Pull the playlist I want, in this case “2023 Best”, though I could change that to any available playlist. This gives the tracks by ID, but the tracks themselves are stored in a separate Media table, so that needs to be joined in. The Media table stores track names but not artist names, so an additional join is needed to get the artist names. This complicated things a bit, because both the Playlist table and the Artist table have a “name” column, so more clarity needed to be added.

The result was this little query that dumps out a basic table of Artist and Song title.

SELECT Artist.name, Media.title
FROM Playlist
INNER JOIN playlistmediarelation ON Playlist.id_playlist = playlistmediarelation.playlist_id
INNER JOIN Media ON playlistmediarelation.media_id = Media.id_media
INNER JOIN Artist ON Media.artist_id = Artist.id_artist
WHERE Playlist.name = '2023 Best'
ORDER BY Artist.name

Now, I could have done some cute, clever trick to merge the two into a new column with a ” – ” in between, but it was easier to drop it all into a Notepad file and do a find/replace on the weird space character that gets stuck in between the artist and track title.

The added bonus here is I can easily use this query again anytime I want to dump a Playlist to text.
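In fact, the whole dump could be scripted with Python’s built-in sqlite3 module, skipping the Notepad step entirely. A minimal sketch, assuming the exported database has been copied off the phone (the file and output names here are just placeholders):

import sqlite3

QUERY = """
SELECT Artist.name, Media.title
FROM Playlist
INNER JOIN playlistmediarelation ON Playlist.id_playlist = playlistmediarelation.playlist_id
INNER JOIN Media ON playlistmediarelation.media_id = Media.id_media
INNER JOIN Artist ON Media.artist_id = Artist.id_artist
WHERE Playlist.name = ?
ORDER BY Artist.name
"""

conn = sqlite3.connect("vlc_media.db")  # the .db file dumped from the phone
with open("playlist.txt", mode="w", encoding="UTF-8") as file:
    for artist, title in conn.execute(QUERY, ("2023 Best",)):
        file.write(f"{artist} - {title}\n")
conn.close()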

Code Project: Automated List From Reddit Comments

This is one of those quick and kind of dirty projects I’ve been meaning to do for a while. Basically, I wanted a script that would scrape all of the top level comments from a Reddit post and push them out to a list. Most commonly, to use on /r/AskReddit style threads like, well, for this example, “What is a song from the 90s that young people should listen to.”

Basically, threads that ask for useful opinions in list form. Sometimes it’s lists of websites or something; often it’s music. The script here is made for music but could be adjusted for any thread. Here is the script; I’ll touch on it in a bit more detail after.

## Create an APP for Secrets here:
## https://www.reddit.com/prefs/apps

import praw

## Thread to scrape goes here, replace the one below
url = "https://www.reddit.com/r/Music/comments/10c4ki0/name_one_90s_song_kids_born_after_2000_should_add/"

## Fill in API Information here
reddit = praw.Reddit(
    client_id="",
    client_secret= "",
    user_agent= "script by u/", # Your Username, not really required though
    redirect_uri= "http://localhost:8080",
)


submission = reddit.submission(url=url)
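# replace_more(limit=0) drops the "load more comments" placeholders so only loaded top-level comments remain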
submission.comments.replace_more(limit=0)
submission.comment_limit = 1

for x in submission.comments:
    with open("output.txt", mode="a", encoding="UTF-8") as file:
        if "-" in x.body:
            file.write(str(x.body)+"\n")
            # print(x.body)

The script uses PRAW, the Python Reddit API Wrapper, a library for working with the Reddit API from Python. It requires free API keys, which can be created here: https://www.reddit.com/prefs/apps. Just create an app; the client ID is the jumble of letters under the app name, and the secret is labeled. The user agent can be whatever, really, but it’s meant to be informative.

The thread URL also needs to be filled in.

The script then pulls the thread data and extracts the top-level comments.

I’m mostly interested in text file lists, though for music-based lists, if I used Spotify, I might combine this with the Spotify playlist maker from my 100 Days of Python course. Like I said before, though, this script is made for pulling music suggestions, with this bit of code:

        if "-" in x.body:
            file.write(str(x.body)+"\n")
            # print(x.body)

It’s simple, but if the comment contains a dash, as in “Taylor Swift – Shake it Off” or “AC/DC – Back in Black”, it writes it to the file. Otherwise it discards it. There is a chance this means discarding some legitimate suggestions, but this isn’t precision work, so I’m OK with that if it filters out the chaff. If I were looking for URLs or something, I might look for “http” in the comment. I could also eliminate the “if” statement and just have it write all the comments to a file.
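A sketch of what that might look like with the filter pulled out into its own little function, so the same loop works for different kinds of threads. These keep_* helpers are hypothetical, and the loop reuses the submission object from the script above:

def keep_music(text):
    # "Artist - Title" style suggestions
    return "-" in text

def keep_links(text):
    # Threads collecting URLs
    return "http" in text

def keep_everything(text):
    return True

keep = keep_music  # swap this out depending on the thread

with open("output.txt", mode="a", encoding="UTF-8") as file:
    for x in submission.comments:
        if keep(x.body):
            file.write(str(x.body) + "\n")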

Advent of Code 2022, I’m Done

Well, I made it farther than my last “in real time” attempt in 2020, by 3 stars. I may check in on the puzzles each day, but in my experience they only get more complex as time goes on, so I doubt I’ll be completing any more of them. Each day is starting to take a lot more time to solve, and the solutions are getting a lot more finicky to produce. We’ve also reached the point where the puzzle inputs feel ridiculously obtuse. Like the Day 15 puzzle, where every number was in the millions, basically for the sole purpose of making everything slow without some sort of magic reduction math. Though skimming through others’ solutions, there didn’t seem to really BE any “magic reduction” option there.

Which is fine. It’s not supposed to be easy. I don’t expect it to be easy.

But I long ago accepted that things I’m doing for relaxation or enjoyment should at least be relaxing and enjoyable. And these puzzles have reached a point where the amount of enjoyment and relaxation I get from them is no longer worthwhile.

So I’m choosing to end this year’s journey here.

Maybe I’ll go back and finish them some day, but more at my own leisure. I mean, I had started doing the old 2015 puzzles in the week leading up to this year’s event. I was never doing this in any attempt to get on the leaderboards anyway; hell, I didn’t even start most days’ puzzles until the day was half over or later.

For what it’s worth, I did make a strong attempt on Day 15, but I just could not get it to output the correct answer, and I’m not really sure why. I couldn’t even get the sample input to work out; I was always one off. It’s possible, and likely, that I was counting the space where the beacon existed, but my answer for the actual input data was off by a little over 1 million, and there are not 1 million beacons on the board. Plus it was 1 million under, where my sample input solution was 1 over.

I’m not even attempting today’s puzzle, Day 16. I can see the logic needed, but the nuance to accomplish it will just take me too long to code out, and like I said above, enjoyment and relaxation are the point. I don’t need to add hours of stress to my day.

Advent of Code 2022, Day 14

Man, I really enjoyed today’s puzzle. Like, a lot. I think because it kind of felt like a game level, and probably also because it’s fluid dynamics and I am totally into Physics and Engineering shit.

For the “Plot,” you enter a cave and discover a cavern with sand falling from the ceiling. The sand accumulates in a pile and “flows” around based on some simple left-then-right rules. This problem consisted of a few separate but connected steps.

Step one: create an empty “cave”. This was simple enough, especially now that I remember how stupid lists are. Last time I needed to make a grid, I was appending a list, and it turns out that Python doesn’t actually copy lists unless you explicitly ask it to. Which is, frankly, “Fucking Stupid”. But whatever, list.copy() works too.
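If you have not been bitten by this before, a quick sketch of the difference:

row = [".", ".", "."]
aliased = []
copied = []
for _ in range(3):
    aliased.append(row)         # every "row" is the same list object
    copied.append(row.copy())   # each row is an independent copy

aliased[0][0] = "#"
copied[0][0] = "#"

print(aliased)  # [['#', '.', '.'], ['#', '.', '.'], ['#', '.', '.']]
print(copied)   # [['#', '.', '.'], ['.', '.', '.'], ['.', '.', '.']]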

Step 2: draw the rocks from the input file. Each line consists of a start point, then a series of connected points out to the end point of a line of rocks.

Step 3 was to pour the sand, which involves dropping a “chunk” of sand down until it hits the floor, then flowing it left or right to fill an area. Once the sand starts falling off the edge, display the count of the total chunks. If I were more clever about my code, I could build a sweet little ASCII animation of each step, but I probably won’t anytime soon because, well, I have other things I need to do too.

Part 2 modifies this by adding a floor. Instead of counting the amount of sand until the cave fills and sand falls into the abyss below, now you count until the sand piles all the way back up to the source at the top. This actually screwed me up a bit.

The coordinates given are all large, like, in the 500 range. In order to make my rock formation manageable, I had originally cut these down by the min and max values so the cave was not much wider than the rock formation. The problem is, now I need to accumulate a pile across the floor, so I need the width. Like, a LOT of width. So I had to modify my code all over to bring the full width back to my cave matrix.

The code works for Part 1 and Part 2 at once. Basically, it finishes Part 1 like normal, displays the output count and, just for fun, an ASCII image of the filled rocks, then it just starts a fresh, slightly modified loop. For the modified loop, the break for “falling off” is removed. Instead, it checks to see if the sand can move, and if it can’t, before placing the sand block, it verifies whether it moved at all by comparing its position to the start position. If it hasn’t moved, it breaks the loop, prints the filled screen, and the total sand count.

import math

with open("Day14Input.txt") as file:
    data = file.read()

lines = data.split('\n')

def draw_cave(lx, ly):
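    # Build an empty grid lx*2 wide and ly+2 tall, then append a solid "#" floor row (used in Part 2)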
    grid = []
    line = []
    floor = []
    for i in range(0,lx*2):
        line.append(".")
    for j in range(0,ly+2):
        grid.append(line.copy())
    for i in range(0, lx * 2):
        floor.append("#")
    grid.append(floor)
    return grid

def draw_rocks(rocks,cave):
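    # Fill "#" along every horizontal or vertical segment between consecutive points on each rock path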
    for rockline in rocks:
        for i in range(len(rockline)-1):
            startx = int(rockline[i][0])
            starty = int(rockline[i][1])
            endx = int(rockline[i+1][0])
            endy = int(rockline[i+1][1])
            if starty == endy:
                xrange = sorted([startx, endx])
                for horiz in range(xrange[0],xrange[1]+1):
                    cave[starty][horiz] = "#"
            if startx == endx:
                yrange = sorted([starty, endy])
                for vert in range(yrange[0], yrange[1]+1):
                    cave[vert][startx] = "#"
    return cave

def show_cave():
    for i in cave:
        print(" ".join(i))

smallest_x = 100000
smallest_y = 100000
largest_x = -1
largest_y = -1
rocks = []
for line in lines:
    sets = line.split(" -> ")
    r = []
    for n in sets:
        nsplit = n.split(",")
        if int(nsplit[0]) < smallest_x:
            smallest_x = int(nsplit[0])
        if int(nsplit[1]) < smallest_y:
            smallest_y = int(nsplit[1])
        if int(nsplit[0]) > largest_x:
            largest_x = int(nsplit[0])
        if int(nsplit[1]) > largest_y:
            largest_y = int(nsplit[1])
        r.append(n.split(","))
    rocks.append(r)

# print(f"{smallest_x} {largest_x} | {smallest_y} {largest_y}")
# print(rocks)

cave = draw_cave(largest_x,largest_y)
# show_cave()
rocky_cave = draw_rocks(rocks,cave)

sand_start = 500
rocky_cave[0][sand_start] = "+"

captured = True
sand_count = 0
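# Part 1: drop grains until one falls past the lowest rocks and off into the abyss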
while captured:
    sand_pos = [0,sand_start]

    sand_drop = True
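    # Try straight down, then down-left, then down-right; the grain comes to rest when all three are blocked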
    while sand_drop:
        if sand_pos[0] > len(rocky_cave)-3:
            captured = False
            sand_drop = False
        elif rocky_cave[sand_pos[0]+1][sand_pos[1]] == ".":
            sand_pos[0] += 1
        elif rocky_cave[sand_pos[0]+1][sand_pos[1]-1] == ".":
            sand_pos[0] += 1
            sand_pos[1] -= 1
        elif rocky_cave[sand_pos[0]+1][sand_pos[1]+1] == ".":
            sand_pos[0] += 1
            sand_pos[1] += 1
        else:
            sand_count+=1
            rocky_cave[sand_pos[0]][sand_pos[1]] = "O"
            sand_drop = False

    # show_cave()

print(sand_count)
show_cave()
# Part 1 = 728

#### RESUME FOR PART 2 #####
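# Part 2: same falling rules, but the floor catches everything; stop when a grain comes to rest at the source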
captured = True
while captured:
    sand_pos = [0,sand_start]

    sand_drop = True
    while sand_drop:
        if sand_pos[0] > len(rocky_cave)-1:
            captured = False
            sand_drop = False
        elif rocky_cave[sand_pos[0]+1][sand_pos[1]] == ".":
            sand_pos[0] += 1
        elif rocky_cave[sand_pos[0]+1][sand_pos[1]-1] == ".":
            sand_pos[0] += 1
            sand_pos[1] -= 1
        elif rocky_cave[sand_pos[0]+1][sand_pos[1]+1] == ".":
            sand_pos[0] += 1
            sand_pos[1] += 1
        else:
            if sand_pos == [0,sand_start]:
                captured = False
            else:
                rocky_cave[sand_pos[0]][sand_pos[1]] = "O"
            sand_count+=1
            sand_drop = False

print(sand_count)

show_cave()
# Part 2 = 27623