Code Project: Fresh RSS to WordPress Digest V 2

A while back, I talked about a simple little project I built that produces a daily RSS digest post on this blog. That of course broke when my RSS reader died on me. I managed to get FreshRSS up and running again in Docker, and I’ve been slowly recovering my feeds, which is incredibly slow and tedious because there are a shitload of them. I essentially have to cut and paste each URL into FreshRSS and select the category, and half the time they don’t work, so I need to make a note of it for later checking. It’s just… slow.

But since it’s mostly working, I decided to set my RSS poster back up. I may look into setting up a Docker instance just for running Python automations, but for now, I put it on a different Pi I have floating around that plays music. The music part will be part of a different post, but for this purpose, the Pi runs a script once a day that pulls a feed, formats it, and posts it. It isn’t high overhead.
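
For the curious, the once-a-day part is just a scheduled job. A crontab entry along these lines would do it (the path and file name here are hypothetical, adjust to wherever the script actually lives):

0 6 * * * /usr/bin/python3 /home/pi/scripts/rss_digest.py

That runs the script every morning at 6:00 AM.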

While poking around on setting this up, I got a bit more ambitious and found out that basically every view in FreshRSS has its own RSS feed. Previously, I was taking the feed from the Starred Articles view. But it turns out that tags each have their own feed too. This let me do something I had wanted from the start: create TWO feeds, one for each of my blogs. So now articles related to technology, politics, food, and music get fed into Blogging Intensifies, and articles related to toys, movies, and video games go into Lameazoid.

I’ve also filtered both of these out of the main page. I do share these little link digests for others, if they want to read them, but primarily they’re a little record for myself, to know what I found interesting and was reading that day. This way, if say my FreshRSS reader crashes again, I still have all the old interesting links available.

The other thing I wanted to do was use some sort of AI system to produce a summary of each article. Right now the script just clips off the first 100 characters or so. At the end of the day, this is probably plenty. I’m not really trying to steal content, I just want to share links, but links are more useful with a wee bit of context attached.

As I mentioned before, making this work involved a bit of tweaking to the scripts I was using. First off is an auth.py file, structured like the below: one dictionary for each blog, with each dictionary added to a list. Adding more blogs would be as simple as adding a new dictionary and appending it to the list. I could have done this with a custom class, but this was simpler.

BLOG1 = {
    "blogtitle": "BLOG1NAME",
    "url": "FEEDURL1",
    "wp_user": "YOURUSERNAME",
    "wp_pass": "YOURPASSWORD",
    "wp_url": "BLOG1URL",
}

BLOG2 = {
    "blogtitle": "BLOG2NAME",
    "url": "FEEDURL2",
    "wp_user": "YOURUSERNAME",
    "wp_pass": "YOURPASSWORD",
    "wp_url": "BLOG2URL",
}

blogs = [BLOG1, BLOG2]

The script itself got a bit of modification as well: mostly the addition of a loop to go through each blog in the list, plus changing some variables to be dictionary lookups instead of plain variables.

Also, please excuse the inconsistency in the f-string use. I got errors at first, so I started editing out the f-strings, then realized I just needed to be using Python 3 instead of Python 2.
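
For context, here is the whole problem in miniature: f-strings were only added in Python 3.6, so a line that is perfectly fine on Python 3 is a flat SyntaxError on Python 2. A minimal illustration (not part of the actual script):

title = "Link List"
print(f"Today's post: {title}")  # Works on Python 3.6+; SyntaxError on Python 2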

from auth import *
import feedparser
from wordpress_xmlrpc import Client, WordPressPost
from wordpress_xmlrpc.methods.posts import NewPost
from wordpress_xmlrpc.methods import posts
import datetime
from io import StringIO
from html.parser import HTMLParser

cur_date = datetime.datetime.now().strftime('%A %Y-%m-%d')

### HTML Stripper from https://stackoverflow.com/questions/753052/strip-html-from-strings-in-python
class MLStripper(HTMLParser):
    def __init__(self):
        super().__init__()
        self.reset()
        self.strict = False
        self.convert_charrefs= True
        self.text = StringIO()
    def handle_data(self, d):
        self.text.write(d)
    def get_data(self):
        return self.text.getvalue()

def strip_tags(html):
    s = MLStripper()
    s.feed(html)
    return s.get_data()

# Get News Feed
def get_feed(feed_url):
    NewsFeed = feedparser.parse(feed_url)
    return NewsFeed

# Create the post text
def make_post(NewsFeed, cur_blog):
    # WordPress API Point
    build_url = f'https://{cur_blog["wp_url"]}/xmlrpc.php'
    #print(build_url)
    wp = Client(build_url, cur_blog["wp_user"], cur_blog["wp_pass"])

    # Create the basic post info: title, tags, etc. This can be edited to customize the formatting if you know what you're doing.
    post = WordPressPost()
    post.title = f"{cur_date} - Link List"
    post.terms_names = {'category': ['Link List'], 'post_tag': ['links', 'FreshRSS']}
    post.content = f"<p>{cur_blog['blogtitle']} Link List for {cur_date}</p>"
    # Insert each feed item into the post with its posted date, headline, and link to the item, plus a brief summary.
    for each in NewsFeed.entries:
        if len(strip_tags(each.summary)) > 100:
            post_summary = strip_tags(each.summary)[0:100]
        else:
            post_summary = strip_tags(each.summary)
        post.content += f'{each.published[5:-15].replace(" ", "-")} - <a href="{each.links[0].href}">{each.title}</a></p>' \
                        f'<p>Brief Summary: "{post_summary}"</p>'
        # print(each.summary_detail.value)
        #print(each)

    # Create the actual post.
    post.post_status = 'publish'
    #print(post.content)
    # For troubleshooting and reworking, uncomment the print above and comment out the line below; this will print results instead of posting them.
    post.id = wp.call(NewPost(post))

    try:
        if post.id:
            post.post_status = 'publish'
            wp.call(posts.EditPost(post.id, post))
    except Exception:
        pass
        # print("Error creating post.")

# Get the news feed for each blog; if there are posts, make them.
for each in blogs:
    newsfeed = get_feed(each["url"])
    if len(newsfeed.entries) > 0:
        make_post(newsfeed, each)
        # print(newsfeed.entries)

Re-mulching and other Activities Outside the House

I have been slacking on my posts, though technically I’m still doing better than I had been. It’s a combination of being busy and just being generally meh overall. One thing keeping me busy was re-mulching the flower beds around the house. Not just throwing down new mulch, though; I mean raking up the old and putting down new weed barrier. This meant working around the existing plants, and the little metal stakes that hold the weed barrier down were a pain, because there is a ton of super-packed rock in the area that makes them hard to push into the ground.

In the case of the tree out back, it also meant digging up the ground around the tree to add a completely new flower bed space. We added a lot of new plants to the area as well, though most are in pots for ease of use.

Then my wife put all her decor out again.

We also started working on the basic garden setup for the year. In the past we’ve had issues trying to garden at this house because a lot of wildlife comes around and eats or digs up everything. Right now everything is in buckets, though I plan to put legs on these wooden boxes we have and set the buckets into them, which is part of what the pile of wood behind the garden plants at the bottom is for. We may also use the stairs as a tiered herb garden. It’s all wood salvaged from my parents’ deck, which they recently had replaced.

Anyway, here are some photos of the completed set up.

Here is a random bonus of the backyard from when I was mowing recently.

Code Project – JavaScript Pixel Camera

Sometimes I do projects that end up being entirely fruitless and pointless.

Ok, maybe not entirely fruitless, but sometimes I get caught up in an idea and end up essentially just reinventing the wheel, in a complicated way. For starters, I got caught up with this little tutorial here: https://thecodingtrain.com/challenges/166-image-to-ascii. It uses JavaScript to convert the input from your webcam into ASCII art. It’s a nice little step-by-step process that really explains what is going on along the way, so you get a good understanding of the process.

And I got it to work just fine. I may mess with it and add the darkness saturation back in, because it helps bring emphasis to the video, but the finished product is neat.

I then decided I wanted to see if I could modify the code to do some different things.

Specifically, instead of ASCII characters, I wanted it to show colored blocks, pixels if you will. I managed to modify the code, and the output is indeed pixelated video, but it’s not really video, it’s constantly refreshing text. The problem is, it’s writing out a page where every block is wrapped in its own styling tag, which means it’s generating a ton of extremely dense HTML and pushing it out constantly, so it’s a little weird and laggy and pretty resource-intensive to run.

I also imagine there is a way to simply downsize the input resolution and upscale the video to achieve the exact same effect.

One variant I made also just converts images to single-page documents of “pixels”. But they’re ugly, font-based pixels, achieving an effect you can get by just resizing an image small and then large again.
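
For what it’s worth, that small-then-large resize trick is only a few lines in Python with Pillow. A rough sketch (assumes Pillow is installed; the file names are placeholders):

from PIL import Image

img = Image.open("input.jpg")                      # placeholder input image
small = img.resize((48, 48))                       # downscale: one sample per "pixel"
pixelated = small.resize(img.size, Image.NEAREST)  # upscale with no smoothing
pixelated.save("pixelated.jpg")

Same blocky look, no font-based pixels or dense HTML required.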

Like I said, kind of fruitless and pointless, but I got caught up in the learning and coding part of things.

I also may go ahead and do some additional modifications to the code to make things a bit more interesting. I could try using it to make my own Game Boy Camera style interface, for example. Then make it output actual savable pictures. I found a similar project to that online but it would be cool to code my own up and give it an actual interface.

Anyway, here is the code for the janky video JavaScript, sketch.js first, then index.html after. Also, a demo is here, though I may remove it in the long run, especially if I make an improved project.

let video;
let asciiDiv;

function setup() {
  noCanvas();
  video = createCapture(VIDEO);
  video.size(48, 48);
  asciiDiv = createDiv();
}

function draw() {
  video.loadPixels();
  let asciiImage = '';
  for (let j = 0; j < video.height; j++) {
    for (let i = 0; i < video.width; i++) {
      const pixelIndex = (i + j * video.width) * 4;
      const r = video.pixels[pixelIndex + 0];
      const g = video.pixels[pixelIndex + 1];
      const b = video.pixels[pixelIndex + 2];
      const pixelColor = "rgb(" + String(r) + "," + String(g) + "," + String(b) + ")";
      // console.log(pixelColor);
      // const c = '<font color=rgb('+r+','+g+','+b+')>'+asciiShades.charAt(charIndex)+'</font>';
      const c = '<span style="color: ' + pixelColor + ';">&#9724;</span>';
      asciiImage += c;
    }
    asciiImage += "<br/>";
  }
  asciiDiv.html(asciiImage);
}

index.html:

<!DOCTYPE html>
<html lang="en">
  <head>
    <script src="https://cdnjs.cloudflare.com/ajax/libs/p5.js/1.6.0/p5.js"></script>
    <script src="https://cdnjs.cloudflare.com/ajax/libs/p5.js/1.6.0/addons/p5.sound.min.js"></script>
    <link rel="stylesheet" type="text/css" href="style.css">
    <meta charset="utf-8" />

  </head>
  <body>
    <main>
    </main>
    <script src="sketch.js"></script>
  </body>
</html>

Code Project – Python Flask Top Ten Movies Site

So, I mentioned dumping the Flask blog a while back, but then I decided that since I had managed to get it all working, it would be somewhat trivial to get it working on a subdomain instead of a main domain, which had been my original plan to start with. I was never too excited about dropping this project, because I have a few ideas for little projects that would work pretty well in Python and Flask, since it essentially adds a direct path between a back-end style script and a front-end interface. Part of my frustration had come from trying to integrate OTHER Flask projects into the same website and Python code. Specifically, the Top Ten Films website.

So I stripped out all of the work I had done on integrating the Top Ten Movies site and got the bare blog running on smallblog.bloggingintensifies.com. That’s not a link, don’t bother trying to go there, there is nothing there (more on that in a bit). In the process, I got to thinking: I could run separate Flask instances, one for each project, though I feel like that’s probably kind of super inefficient for server overhead. I went and did it anyway, with the Top Ten Movies site.

Which worked fine.

At this point, I realized I had a working copy of this website to work with and test. A lot of my previous frustration had come from adapting and merging two Python files which share some redundancy and use the same names for some secondary files and variables. Now I could modify the Top Ten Movies code and test it to remove the problem duplication. Satisfied, I shut off the Flask blog and merged the code again, and it worked! Worked as expected. I did update the code a bit more to add the user/admin features of the blog to the movies page, so no one else can change the list.
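
As an aside, for anyone hitting the same kind of merge headache: it isn’t how I did it, but Flask’s Blueprints are the purpose-built answer to this exact name-collision problem. A rough sketch, with all module and blueprint names being hypothetical:

from flask import Flask, Blueprint

# In a real project these would live in separate files, e.g. blog.py and movies.py.
blog_bp = Blueprint("blog", __name__)
movies_bp = Blueprint("movies", __name__)

@blog_bp.route("/")
def blog_index():
    return "Blog home"

@movies_bp.route("/")
def movies_index():
    return "Top Ten Movies"

app = Flask(__name__)
app.register_blueprint(blog_bp)                          # served at /
app.register_blueprint(movies_bp, url_prefix="/movies")  # served at /movies/

Each project keeps its own routes and view names, and a single Flask instance serves both, which sidesteps both the duplicate-name problem and the overhead of running two instances.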

I also changed the subdomain from smallblog.bloggingintensifies.com to flask.bloggingintensifies.com. I mostly plan to use this subdomain to show off Flask Python projects, not just this blog I’m never going to fully utilize, so the name change makes more sense.

I also used some of the other HTML knowledge I’ve gained, well, the modern HTML knowledge, to reformat the movie site from how it was built in the class. The class version just had a long single column of ten movies. I’ve changed it so number 1 is large and everything else sits in a smaller grid. It’s much prettier looking now. I doubt it changes much, if at all, but it’s a neat and fun project.

A Progressive Journey Through Working With AI Art – Part 6 – AI Is Boring

A few months ago I started a sort of series on working with Stable Diffusion and AI art. I had some ideas for more parts to this series, specifically one on “Bad Results” and another possibly going into text-based AI. I never got around to them. Maybe I’ll touch on them a bit here. The real point I want to make here…

I kind of find AI to be boring.

That’s the gist of it. On the surface, it’s a really neat and interesting concept. Maybe over time it gets better and actually becomes interesting. But as it is now, I find it pretty boring and a little lame. I know this is really contradictory to all the hype right now, though some of that hype may be dying down a bit as well. I barely see anything about AI art anymore; it’s all “ChatGPT” now, and even that seems like it’s waning a bit in popularity as people accept that it’s just “Spicy Autocomplete”.

Maybe it’s just me, though; maybe I’m missing some of the coverage because I am apathetic to the state of AI. I also don’t think it’s going to be the end-all, be-all creativity killer. It’s just not that creative. It’s also incredibly unfulfilling, at least for a person who is ostensibly a “creator”. I am sure boring bean-counter types are frothing at the idea of an AI generating all their logos or ad copy or stories, so they can fire more people and suck in more money by not having to pay anyone. That’s a problem that’s probably going to get worse over time. But the quality will drop.

But why is it boring?

Let’s look at the actual process. I’ll start with the image side, since I’ve used it the most. You make a prompt, maybe you cut and paste in some modifier text to make sure it comes out not looking like a grotesque life-like Picasso, then you hit “generate”. If you’re using an online service, you probably get something like 20-25 of these generations a month for free, or you pay some sort of subscription for more. If you are doing it locally, you can generate all you want. And you’re going to need to, a lot. For every perfect and gorgeous-looking image you get, you’re probably getting 20 really mediocre and weird-looking ones. The subject won’t be looking the right direction, they will have extra limbs or lack some limbs, the symmetry will be really goofy. Any number of issues. Often it’s just a weird blob somewhere in the middle that feels like it didn’t fill in. Also often, with people, the proportions will be all jacked up: a weird-sized head, arms or legs that are not quite the right length.

You get the idea.  This touches on the “bad results” I mentioned above.  Stable Diffusion is great at generating bad results.

It’s also really, really bad at nuance. The more nuance you need or want, the less likely you are to get something useful. Because it’s not actually “intelligent”. It’s just making guesses.

You do a prompt for “The Joker”, you will probably get a random image of the Batman villain.  “The Joker, standing in a warehouse” might work after 3 or 4 tries, though it probably will give you plenty of images that are not quite “a warehouse.”  

But say you want “The Joker, cackling madly while being strangled by Batman in a burning warehouse while holding the detonator for a bomb.” You aren’t going to get jack shit. That’s just too much for the AI to comprehend. It fails really badly any time you have multiple subjects, though sometimes you can get side-by-sides. It fails even more when those two people are interacting, or each doing individual things. If you are really skilled you can do inpainting and lots of generations to maybe get the above image, but at that point, you may as well just draw it yourself. Because it would probably take less time.

In the end, as I mentioned, it’s also just unfulfilling. Maybe you spend all day playing with prompts and inpainting and manage to get your Joker and Batman fighting image. And so what? You didn’t draw it, you didn’t create it; you may as well have done a Google Image search or flipped through Batman comics to find a panel that matches the description. You didn’t create anything, you hit refresh on a webpage for hours.

Even just manually Photoshopping some other images together would be a more fulfilling experience. And the result would probably be better, since AI likes to leave all these little tells.

Then there is text and ChatGPT. I admit I have not used it quite as much, but it seems to be mostly good at producing truthy, Wiki-style articles. It’s just the next-generation Alexa/Siri at best. It’s also really formulaic in its results. Everything it writes is very “this is a 5th grade report” in its structure: an intro, three descriptive paragraphs, and an outro restating the intro.

Given how shit the education system is anymore, I guess it’s not that surprising this feels impressive.

Another issue is that it’s so sterile in its responses. There were some things going around about how to “hack” it into saying dirty things, but most of that has been ironed out. I mean, I don’t NEED my AI chat bot to swear or say mean things, but it goes to such extremes to avoid this that it kind of sucks the fun out of things. For example, I asked it “Who would win in a cage match battle to the death between Siri, Cortana, and Alexa” and it tells me it can’t answer that because it’s violent.

They’re AI constructs. I just want a stupid, silly response. Please pretend that you’re talking about fictional entities having a fictional fight. Meanwhile, on the site Character.ai, which lets you create AI bots with the personalities of people real and fake, I have antagonized a Tom Riddle bot into trying to kill me, repeatedly.

It’s funny as shit.  It’s way more compelling and fun than WikipediaBotGPT.

I also tried to make some code with ChatGPT. It does OK-ish with popular languages like Java, HTML, Python, etc. I asked it to make some LSL (Linden Scripting Language) code, and it made something that follows the proper structure but does nothing. I got similar results trying to get it to create Pong in NES Assembly that I could compile into a ROM. I had a friend look over the code and help me follow it, and it basically gave the 2nd paddle nothing to do. It also didn’t seem to actually draw out any graphics anywhere. Which is another issue with the code side: it can only produce single-file responses. It’s not going to produce base code, plus some sprites to go with it, plus maybe some imported files that do other things.

(The code below does not do anything)

; Pong game for NES
; Written in 6502 Assembly

; Constants
P1_PADDLE_Y = $20  ; P1 paddle vertical position
P2_PADDLE_Y = $80  ; P2 paddle vertical position
BALL_RADIUS = $03  ; Ball radius

; Variables
P1_SCORE = $00      ; P1 score
P2_SCORE = $00      ; P2 score
BALL_X = $80        ; Ball horizontal position
BALL_Y = $60        ; Ball vertical position
BALL_DX = $01       ; Ball horizontal velocity
BALL_DY = $01       ; Ball vertical velocity

; Main loop
MAIN:
    ; Read input
    LDA $4016   ; Read controller 1 input
    AND #%00000001  ; Mask A button
    BEQ .P1_DOWN    ; If A button not pressed, go to P1_DOWN
    INC P1_PADDLE_Y ; If A button pressed, move P1 paddle up
    JMP .UPDATE_P1_PADDLE ; Go to UPDATE_P1_PADDLE
.P1_DOWN:
    LDA $4016   ; Read controller 1 input
    AND #%00000010  ; Mask B button
    BEQ .P1_STILL   ; If B button not pressed, go to P1_STILL
    DEC P1_PADDLE_Y ; If B button pressed, move P1 paddle down
    JMP .UPDATE_P1_PADDLE ; Go to UPDATE_P1_PADDLE
.P1_STILL:
    ; Do nothing

.UPDATE_P1_PADDLE:
    ; Update P1 paddle position
    STA $2003  ; Set PPU address to P1 paddle Y position
    LDA P1_PADDLE_Y
    STA $2004
    JMP .UPDATE_BALL  ; Go to UPDATE_BALL

.UPDATE_P2_PADDLE:
    ; Update P2 paddle position
    STA $2003  ; Set PPU address to P2 paddle Y position
    LDA P2_PADDLE_Y
    STA $2004
    JMP .UPDATE_BALL  ; Go to UPDATE_BALL

.UPDATE_BALL:
    ; Update ball position
    CLC
    LDA BALL_X
    ADC BALL_DX
    STA BALL_X
    LDA BALL_Y
    ADC BALL_DY
    STA BALL_Y

    ; Check for collision with top or bottom wall
    LDA BALL_Y
    CMP #%00000011
    BCC .NO_COLLISION_TOP
    SEC
    SBC #%00000100
    STA BALL_Y
    LDA BALL_DY
    EOR #%11111111
    INC BALL_DY
    EOR #%11111111
.NO_COLLISION_TOP:
    LDA BALL_Y
    CMP #%11111100
    BCS .NO_COLLISION_BOTTOM
    SEC
    ADC #%00000100
    STA BALL_Y
    LDA BALL_DY
    EOR #%11111111
    INC BALL_DY
    EOR #%11111111
.NO_COLLISION_BOTTOM:

    ; Check for collision with P1 paddle
    LDA BALL_X
    CMP #%00000100
    BCC .NO_COLLISION_P1
    LDA BALL_X
    CMP #%00000100+BALL_RADIUS
    BCS .NO_COLLISION_P1
    LDA BALL_Y
    CMP P1_PADDLE_Y
    BCC .

Like generating the Joker/Batman image, it’s just not that smart.  It’s auto-completing a response based on probabilities.  It doesn’t understand how to actually break down code into parts, or what other files may be needed to make the code work.

A lot of the problem in general, I think, is that the more you use these tools, the more the trick becomes glaringly obvious. The repetition in results, both images and text, really shows how completely unintelligent the “artificial intelligence” is. It’s just regurgitating the same things, over and over, with slightly different phrasings.