Coding

100 Days of Python, Projects 58-65 #100DaysofCode

Judging by the comments on the lessons, they are getting harder, though frankly, I am finding them a bit easier.  I suspect my experience with web design accounts for a lot of that.  I also have been looking a bit into how to properly host some of these apps to share here, on my Code Projects page.  I believe I could set them up to run in a production Flask environment on different ports, then just map subdirectories on the domain to those ports.

But I am doing my best not to get distracted.  Yes, I have been building a few smaller projects here and there lately, but I don’t want to get TOO distracted.

Day 58 – Bootstrap Overview

Not a lot to really say here; like the Web Foundational section, this was a chunk of the Web Dev course offered by the same instructor.  I may look into picking it up if it's on sale for cheap, since at this point I've done about half of it.  My next plan for a major learning push is going to be JavaScript.  I really need to get familiar with JavaScript for some work projects.

The project was essentially using different Bootstrap bits to format a silly Tinder knockoff website, a site for Dogs called Tindog.  Bootstrap is definitely useful, but I generally try to avoid it because it makes things look very samey.  I suppose familiarity in design is useful at times though.

Day 59 and 60 – Blog Capstone Project Part 2

The last part of the Intermediate+ section was the bones of a Clean Blog running in Python Flask.  These two days expanded on the Blog concept a bit, and a few later lessons bring it back around again.  I am almost interested in trying to use the blog once finished, except that I already have plenty of proper outlets for Blogging.  Wordpress works fine.  

The core concept was using Bootstrap to format the blog up nicely, as well as setting it up to reuse header and footer files.  It's a common method in web design, and one I have been using since GeoCities, when I discovered something called SHTML, which would let me write blog posts in HTML and then encapsulate them in a common top and bottom.
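For anyone curious, the reuse looks roughly like this in Flask's Jinja templates; the file names here are just illustrative, not the course's exact files.

# post.html lives in templates/ and pulls the shared pieces in with Jinja includes:
#   {% include "header.html" %}
#   <article>{{ post["body"] }}</article>
#   {% include "footer.html" %}
from flask import Flask, render_template

app = Flask(__name__)

@app.route("/post/<int:post_id>")
def show_post(post_id):
    post = {"title": f"Post {post_id}", "body": "..."}  # stand-in for a real post lookup
    return render_template("post.html", post=post)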

Day 61 – Flask Forms

An intro to easier-to-use Flask-based forms and the libraries behind them.  The project itself was to construct a simple login page, which led to a Rick Roll when unlocked.  I could see this piece being useful later as an add-on to the more complete Blog project.
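A minimal sketch of the Flask-WTF pattern from this lesson, assuming placeholder credentials and a login.html template; it's the shape of the thing, not the course's exact code.

from flask import Flask, render_template, redirect
from flask_wtf import FlaskForm
from wtforms import StringField, PasswordField, SubmitField
from wtforms.validators import DataRequired

app = Flask(__name__)
app.secret_key = "change-me"  # Flask-WTF needs this for CSRF protection

class LoginForm(FlaskForm):
    email = StringField("Email", validators=[DataRequired()])
    password = PasswordField("Password", validators=[DataRequired()])
    submit = SubmitField("Log In")

@app.route("/login", methods=["GET", "POST"])
def login():
    form = LoginForm()
    if form.validate_on_submit():  # only True on a POST that passes validation
        if form.email.data == "admin@email.com" and form.password.data == "12345678":
            return redirect("https://www.youtube.com/watch?v=dQw4w9WgXcQ")  # the Rick Roll
        return "Access denied"
    return render_template("login.html", form=form)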

Which brings me to a bit of an aside.  A lot of these projects feel disconnected, but really, they are demonstrating a proper way of handling larger code projects.

Break it into parts.

We built a simple blog page that pulled posts from a CSV file.

We formatted that blog page.

We created a log in form, that could easily be slipped into a blog page.

Later lessons work with more dynamic forms and data, and with SQLite databases.

As the pieces get laid out, the end goal becomes more and more clear.  My prediction is that the later Blog Capstone project will amount to, "Take everything from the last ten weeks and mash it into a functioning Blog app."

Day 62 – Coffee Shop Wifi Tracker

Like I said, building on the last.  For this lesson, we built a little page that would list coffee shops in a pretty Bootstrap table with ratings for Coffee Quality, Wifi Quality, and Power Outlet availability.  Instead of just a form that takes an input and verifies it's good, this one actually inserts new data into the CSV that holds all the coffee shops.
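The add-a-shop piece boils down to appending a row to the CSV; a rough sketch, with the field and file names here being my guesses rather than the project's exact ones.

import csv
from flask import Flask, request

app = Flask(__name__)

@app.route("/add", methods=["POST"])
def add_cafe():
    row = [
        request.form["name"],
        request.form["location"],
        request.form["coffee_rating"],
        request.form["wifi_rating"],
        request.form["power_rating"],
    ]
    # newline="" keeps csv.writer from inserting blank lines on Windows
    with open("cafe-data.csv", "a", newline="", encoding="utf-8") as f:
        csv.writer(f).writerow(row)
    return "Added!"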

Day 63 – Book Collection App

I may revisit this one later to build an app for my other collections, and to make it more robust.  The core, though, is once again building on the Coffee App.  Instead of just a tracked list with an Add form, we added persistence by learning about SQLite databases, so the data survives even if the server is shut down.
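The persistence bit is just SQLite behind Flask-SQLAlchemy; a minimal sketch, with the model fields loosely mirroring the book app rather than copying the course code.

from flask import Flask
from flask_sqlalchemy import SQLAlchemy

app = Flask(__name__)
app.config["SQLALCHEMY_DATABASE_URI"] = "sqlite:///books.db"
db = SQLAlchemy(app)

class Book(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    title = db.Column(db.String(250), nullable=False)
    author = db.Column(db.String(250), nullable=False)
    rating = db.Column(db.Float, nullable=False)

with app.app_context():
    db.create_all()  # creates books.db the first time through
    db.session.add(Book(title="Dune", author="Frank Herbert", rating=9.5))
    db.session.commit()  # the row is still there after the server restarts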

I took a few extra non required steps on this app and reused the Coffee Shop code to make this app look much nicer.  The core app and instructions just had a very basic page with an unordered list.  I turned it into a table with a bit more formatting and color.

Day 64 – Top Ten Movies App

This whole project was essentially the same as the Book Collection App, except it was intended from the start to look prettier.  The main difference is that the add process had a few extra steps to pull from the API of The Movie Database website to get data about each film, and have the user select from a list of results.  In the instructions for the class, the idea was to make multiple API calls, one for the list and then another for the specific movie data, but I had already worked ahead on the project, and instead it pulls the data, the user selects the movie, then it's just done.  It was a bit complicated to get the data to pass around between pages since it was a dictionary and not just a single variable, but I got it working using global variables.

I mostly try to avoid global variables, but this felt like a good use case given the way Flask works.
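Roughly the pattern I mean, stripped way down (this is not my exact project code): a module-level variable holds the search results from one route so the next route can read them.

from flask import Flask

app = Flask(__name__)
search_results = []  # global shared between the two routes

@app.route("/search/<title>")
def search(title):
    global search_results
    search_results = [{"id": 603, "title": title}]  # stand-in for the TMDB API call
    return f"Found {len(search_results)} result(s)"

@app.route("/select/<int:index>")
def select(index):
    movie = search_results[index]  # reads what the previous request stashed
    return movie["title"]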

I am thinking I may integrate the login app created previously with this website and use it to test out mapping some of these projects to a domain to make it a real, live website.  Also, you can't tell from the snapshot, but the posters do this cool flip-over effect when you mouse over them to show more information.  Not really a Python thing, but it's still a neat effect.

Day 65 – Web Design Principles

No project for this day, it’s another slice of the Web Developer course from the same instructor.  I’m definitely going to keep an eye out for a deep discount, especially with Black Friday coming, since I’ve basically completed half of that course now.

I was hoping to fit all of these more advanced Flask-based projects into one post, but it feels like this one is getting pretty long, so I'm going to go ahead and break it up here.

FreshRSS and RSS Feed Posts

Keen observers (ha ha ha, no one reads this) might have noticed that a few posts of links showed up in the feed.  These are basically stories I read in my RSS reader that I found interesting and wanted to share, or at least keep track of.  The posts as of now are a little ugly, and I'll probably clean up the formatting over time, but I wanted to go ahead and write a bit about the process.  I'll have the code on GitHub at some point.

As for the motivation: firstly, this is something I've wanted to have on my blog for a while.  Like, a long while.  I might even try to see if there are ways to better split up the links by topic later.  A fair number of blogs I subscribe to have these sorts of link digest posts, and I've always just liked the idea.  It's also good for personal reference for when I may have read something.  It is limited, though, since it only comes from my RSS reader.

Speaking of my RSS reader: I've moved on from TinyTinyRSS, for a few reasons.  One, the interface is a little meh, honestly.  Maybe the newer version is better, but it's only available in Docker, and Docker is such a PITA to use.  Also, while looking for alternatives, it sounds like the folks who make TTRSS are kind of a bunch of gatekeeping jerk types, and I'd rather not support that.  I also found the need to keep the update daemon running with Screen to be a pain.  So I've moved over to FreshRSS, which I just run locally on a Raspberry Pi.  I may move it to a publicly accessible machine at some point, but I am not entirely convinced that TT-RSS wasn't the entry point for my previous server malware woes.

So, like TT-RSS, FreshRSS has a way to get an RSS feed out of your favorited posts.  In the past I've used tools like IFTTT to automate posting these links around, but I don't use IFTTT anymore, for reasons I'm not going into.  Fortunately, I've been working to become a pretty good Python coder for the last month or so.  So instead, I wrote a script.

It's not even a particularly complicated script.  There are only two things it really needs to do: get new articles, then post them to WordPress.  Since the script runs locally, on the same Raspberry Pi even, it can easily reach and pull the RSS feed.  One nice thing I noticed with FreshRSS: the feed includes a time interval, so getting just the new posts was super simple, because the interval is just "24" for "24 hours".  The script will eventually run as a cron job at the exact same time daily.  Anyway, after pulling the RSS, the entries are already in an easily usable dictionary, which gets fed into the construction of the WordPress post.

import feedparser

def get_feed(feed_url):
    # Pull the FreshRSS favorites feed and parse it into usable entries
    NewsFeed = feedparser.parse(feed_url)
    return NewsFeed

The posting part was pretty easy as well: WordPress has an API, and Python has a library that can use that API.  It just needs some login information and a post payload to send.

from wordpress_xmlrpc import Client, WordPressPost
from wordpress_xmlrpc.methods.posts import NewPost

# wp_url, wp_user, wp_pass, and cur_date are set earlier in the script
def make_post(NewsFeed):
    wp = Client(f'https://{wp_url}/xmlrpc.php', wp_user, wp_pass)
    post = WordPressPost()
    post.title = f"{cur_date} - Link List"
    post.terms_names = {'category': ['Link List'], 'post_tag': ['links', 'FreshRSS']}
    post.content = f"<p>Blogging Intensifies Link List for {cur_date}</p>"
    for each in NewsFeed.entries:
        post.content += f'<p>{each.published[5:-15].replace(" ", "-")} - <a href="{each.links[0].href}">{each.title}</a></p>'
    post.post_status = 'publish'
    wp.call(NewPost(post))  # send the assembled post to WordPress

The trickiest part was formatting the date a bit prettier.  I mentioned cleaning up the formatting; I'm thinking maybe a simple invisible table, so the date and the links don't wrap oddly like they do now.  I also added a check so that if there are no new favorited posts, it will skip making a post.  Otherwise I'll end up with empty posts on days I forget to check my feed reader.

While writing the script, at first I was just outputting a text copy of the post to the console until I was satisfied.  Eventually, I pushed out a real post, then verified that things worked.  The next day was a straight test: opening the project and running it again.  The third day, I copied the files over, installed the libraries needed, then posted from the Pi.  Phase 4 of this will be to set up cron to run it automatically.  If that works, then it will certainly "just run" for the foreseeable future.

100 Days of Python, Projects 54-57 #100DaysofCode

Back to web development again, but with a different twist this time.  Instead of scraping things, we're learning Flask, to produce little Python-based websites.  In doing these exercises, I find I am kind of wondering why one would use Python over, say, Apache, or NGINX, or even IIS.  I can sort of see where it's useful, and maybe later we will get to more of its usefulness.  My primary issue is that the HTML part of it ends up being VERY specifically Flask-based.  Flask looks for images and CSS in specific folders, and if you use any sort of variables, they all get passed to the HTML in a very particular way.
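For the record, the conventions I'm complaining about look something like this (the folder names are Flask's defaults; the template name and variables are illustrative):

#   project/
#   ├── main.py
#   ├── static/       <- images and CSS, referenced via url_for('static', filename=...)
#   └── templates/    <- index.html and any other Jinja templates
from flask import Flask, render_template

app = Flask(__name__)

@app.route("/")
def home():
    # name and year only reach the page as {{ name }} and {{ year }} inside index.html
    return render_template("index.html", name="Josh", year=2021)

if __name__ == "__main__":
    app.run(debug=True)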

I had considered that it might be useful for sharing some of the code I have written through my web server, but in my research, things like Tkinter and Turtle don’t work at all through Flask.  I was kind of hoping it was smart enough to produce little Browser pop ups or something to render the graphics out.

This section isn't super complex so far, but it wraps up the Intermediate+ section, with a little interlude for Bootstrap in between, so I figure it's a good little chunk to keep in its own write-up.

As usual, the code is all on Github.

Day 54 – Intro to Flask

There was literally no project today.  We created a basic "Hello World" Flask server, then created some decorator functions.  It was interesting, but not really that exciting to write up.  I do somewhat question the usefulness of a decorator versus just having a function that takes an input and modifies it directly.
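For my own future reference, the difference I'm grumbling about looks like this; a toy example, not from the course.

import time

def timed(func):
    # The decorator wraps the function once, instead of timing it by hand at every call site
    def wrapper(*args, **kwargs):
        start = time.time()
        result = func(*args, **kwargs)
        print(f"{func.__name__} took {time.time() - start:.3f}s")
        return result
    return wrapper

@timed  # same as writing: slow_add = timed(slow_add)
def slow_add(a, b):
    time.sleep(0.5)
    return a + b

print(slow_add(2, 3))  # prints the timing line, then 5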

Day 55 – Higher Lower Game Returns

The Day 55 Lessons were a bit better.  We covered Decorators a bit more and how to handle URLs in Flask, which brings me back to the “Is this better” I mentioned in the opening, since once again, the code will get weird to use outside of Flask.

I had a lot of fun with the project though.  It's a web version of the "Higher-Lower" game from way back on Day 14.  You pick a number, it tells you if it's higher or lower, only with web pages.  It was essentially a way to learn about using dynamic URLs in Flask, but spiced up for fun.  I added a nav bar to mine so the user didn't have to type a URL and could just click the next number to guess.  I also used a bunch of silly GIFs from my favorite musicians instead of cat GIFs on each page.
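A stripped-down version of the dynamic URL idea (minus my nav bar and GIFs), just to show the shape of it:

import random
from flask import Flask

app = Flask(__name__)
answer = random.randint(0, 9)

@app.route("/")
def home():
    return "<h1>Guess a number between 0 and 9 by going to /&lt;number&gt;</h1>"

@app.route("/<int:guess>")
def check(guess):  # the number typed into the URL arrives here as an int
    if guess == answer:
        return "<h1 style='color: green'>You found me!</h1>"
    elif guess > answer:
        return "<h1 style='color: purple'>Too high, try again!</h1>"
    return "<h1 style='color: red'>Too low, try again!</h1>"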

It’s kind of useless, but it was fun to build.

Day 56 – Personal Website

This day was mostly about how to quickly import existing code to Flask.  It involved a couple of practice projects and a “real” project.  The first Practice was taking the Lesson 41-44 website and importing it to Flask.

The second practice was to use someone else’s template and import it to Flask, as well as modifying and simplifying that code.

The final project was to build a simple "Name Card" website with some social links.  Essentially, it was a repeat of the second practice, but actually replacing images and information.  I kind of prefer the previously made CV website, and it's easier to host on the web, so I'm going to stick with that for now.

Day 57 – Blog Capstone Project Part 1

This project picks up in Day 59 with the start of the Advanced Section of the course.  The basic idea here was to build a simple blog interface that would read some generic JSON Posts and display them, and then let users click into each blog post to read more.

I'm particularly proud of my result, which only uses one HTML file that varies depending on whether the user clicked on a blog post or not.  I feel like it was a pretty slick solution.  The starter files also included a file for a "Post" class.  Using this class was not part of the assignment, but I suspect it will come up later, so I went ahead and built it, though I didn't use it to read the blog posts.
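The one-template trick, sketched out with stand-in posts (my real version reads the course's sample JSON): the same route and template handle both the index and the single-post view.

from flask import Flask, render_template

app = Flask(__name__)
posts = [  # stand-ins for the sample JSON posts
    {"id": 1, "title": "First Post", "body": "..."},
    {"id": 2, "title": "Second Post", "body": "..."},
]

@app.route("/")
@app.route("/post/<int:post_id>")
def blog(post_id=None):
    # blog.html lists every post when selected is None, otherwise shows just that one
    selected = next((p for p in posts if p["id"] == post_id), None)
    return render_template("blog.html", posts=posts, selected=selected)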

If this comes out alright, I may actually use it somewhere; I've been looking for something to put on Joshmiller.net.  Though I also don't really NEED another blog outlet.  I barely maintain the one I regularly use now.

100 Days of Python, Projects 51-53 #100DaysofCode

Here we are now with a few more automated bot tasks.  It's been a fun series of lessons, though I enjoyed using Beautiful Soup more than Selenium.  Selenium runs into too many anti-bot measures on the web to be truly effective.  I mean, it's definitely a useful tool, but in my experience, it's not reliable enough.  BS seems to be much more effective, though it can't really interact with pages.

In the long run, I think I am more just irritated by "clever bullshit" on web pages that makes both pieces of software a pain to work with.  Take Instagram: none of the classes or IDs are anything but jumbled characters.  The code feels like it was written by a machine, and it probably was.

Also, this round is a bit shorter than before because the course is veering off into a new direction with Flask Apps, so it seemed appropriate to wrap things up on the Automation Section of the Projects.

Day 51 – Twitter Speed Complainer

This project is great, because this is something I have tried to run from other people’s code but it never seems to actually work.  Now, I just have my own code to run.

EZ Mode.

It will need something with a desktop to run it on, but I have a whole Windows PC for running random shit and a mess of Raspberry Pis.  I don't even care about the complaining part; in fact, I would rather not, I just want to track Internet speed.  I may even change this to push to a spreadsheet or database or something later.

But for now, it Tweets.

So, the Speed test part was easy, though I used SpeedOf.me instead of SpeedTest.net, because SpeedTest.net supposedly will give dodgy numbers by partnering with ISPs and putting servers in ISP data centers.  I just prefer SpeedOf.me mostly, it’s cleaner.

The Twitter part was tricky… ish…  So, a common problem I keep running into with Selenium is that sites think my Bot Programs are Bots.

I'm so offended for my Bots, being accused of being Bots.  They run into captchas and email verifications and just flat out fail to log in or load half the time.  It makes sense; captchas and email verification exist 100% to stop people from abusing things like Selenium.  Fortunately, Twitter Bots are one thing I do have a fair amount of experience with.  I wrote one ages ago that just tweeted the server's uptime.  I wrote one script that would pull lines from a text file and tweet them out at an interval.  I have another Python-based bot that tweets images.  What do these Bots do differently?  They are 100% Bots, running through the proper Twitter bot API, and labeled as such.

So, since Selenium was being a pain to deal with using Twitter, I pulled out my Image Posting Bot code and scavenged out the pieces I needed, which was about 4 or 5 lines of code.  It uses a Python Library called Tweepy.  In order to use Tweepy, you have to use the Twitter Developer console to get API Keys, which I already had.
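The scavenged Tweepy bit amounts to something like this; the keys are placeholders from the developer console, and the speed numbers stand in for whatever the speed test step returns.

import tweepy

auth = tweepy.OAuthHandler("CONSUMER_KEY", "CONSUMER_SECRET")
auth.set_access_token("ACCESS_TOKEN", "ACCESS_TOKEN_SECRET")
api = tweepy.API(auth)

down, up = 93.1, 11.4  # stand-in values from the speed test step
api.update_status(f"Current speeds -- down: {down} Mbps, up: {up} Mbps")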

Day 52 – Instagram Follower

Another almost useful project.  For this project, you open up Instagram and log in, then it opens an account of your choosing and follows anyone following that account.

Now, while I have a love/hate relationship with Instagram, I am not super interested in cluttering up my feed with thousands of accounts.  So, while I did complete the task, I set it up to ONLY follow the first 10 accounts.   I also added a check to make sure I wasn’t already following said account.

I may revisit this again later with other, more useful ways to interact with IG.  Maybe instead of following random people from another account, it auto follows back.  Or maybe it goes through “suggested” and looks for keywords in a person’s profile and follows them.

Day 53 – Zillow Data Aggregator Capstone

The final project for this section combines Selenium and Beautiful Soup to aggregate real estate listings from Zillow into a Google Spreadsheet doc.  I quite liked this one actually; it's straightforward and relatively harmless.  I did run into an issue where Zillow started thinking I was a bot, but by that point I knew I could successfully scrape what I needed, so I commented out the Zillow call and replaced it with a file load using an HTML snapshot of the Zillow page.

This was very easy to slide in as a fix because I was already pulling the source code using Selenium into a variable, then passing that variable to Beautiful Soup.  It was simply a matter of passing the file read instead.

Scraping the data itself was a bit tricky; Zillow seems to do some funny dynamic loading, so my numbers of listings, addresses, and prices didn't always match.  To solve this, I added a line that just uses whichever count is the smallest.  They seem to be captured in order, but eventually some fall off, so if I got 8 prices and 10 addresses, I just took the first 8 of each.
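The "use whichever is smallest" fix is about as simple as it sounds; with some made-up scraped values:

prices = ["$250,000", "$199,900", "$315,000"]                         # example scraped values
addresses = ["12 Oak St", "34 Elm St", "56 Pine Ave", "78 Maple Dr"]  # one extra, like described
links = ["https://www.zillow.com/a", "https://www.zillow.com/b", "https://www.zillow.com/c"]

count = min(len(prices), len(addresses), len(links))                  # 3 in this case
listings = list(zip(prices[:count], addresses[:count], links[:count]))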

Another issue I came across: the links for each listing don't always have a full URL.  Sometimes you have to add "https://www.zillow.com" to the front.  It wasn't a hard fix:

if "zillow" not in link:
    link = "https://www.zillow.com" + link

There was also an issue with the links, because each one shows up twice with the scrape I was using.  A quick search gave a clever solution to remove duplicates.  It's essentially: list = list(dictionary converted from the list).  A dictionary can't have duplicate keys, so those get discarded when converting the list to a dictionary, and then that result just gets flattened back out into a list.
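In Python that round trip is a one-liner, shown here with dummy links:

links = ["https://www.zillow.com/a", "https://www.zillow.com/a", "https://www.zillow.com/b"]
links = list(dict.fromkeys(links))  # duplicate keys collapse: ['...a', '...b'], order kept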

Lastly was the form entry itself.  The data entry uses a method I've used before for entering data into Google remotely: Google Forms.  Essentially, Selenium fills out and submits the form over and over, once for each result.  I had a bit of an issue here because the input boxes use funny tags and are hard to target directly, and then my XPaths were not working properly.  I fixed this by adding two things: one, I had Selenium open the browser maximized, to make sure everything loaded; second, I added more sleep() delays here and there, to make sure things loaded all the way.

One thing I have found working with Selenium, you can never have too many sleep()s.  The web can be a slow place.

100 Days of Python, Projects 45-50 #100DaysofCode

Things are continuing to be interesting and useful here with the introduction of Beautiful Soup, a tool used to parse unstructured data into usable, structured data.  Well, more or less that's what it does.  It's useful for parsing through scraped web page data from sites that don't have their own API available.

As normal, everything is on GitHub.

Day 45 – Must Watch Movies List and Hacker News Headline Scraper

As an introduction to using the tool, Beautiful Soup, we had two simple projects.  The training project actually feels more useful than the official project of the day, though I also remixed the training project a bit.

The "Project of the Day" was to scrape Empire Magazine's top 100 must-watch movies and output them to a text file.  I am pretty sure this list does not change regularly, and thus it's sort of a "one and done" run.

The training project was more interesting, because it scraped the news headlines from Hacker News, a Reddit-like site centered around coding and technology that is absolutely bare bones in its interface.  The course notes were just to get the "top headline of the day", but I modified mine to give a list of all headlines and links.  I will probably combine this with the previously covered email tools to get a digest of stories emailed to myself each day.
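My remixed version boils down to a few lines of Beautiful Soup; the storylink class is what Hacker News used when I wrote this, and their markup may well have changed since.

import requests
from bs4 import BeautifulSoup

response = requests.get("https://news.ycombinator.com/")
soup = BeautifulSoup(response.text, "html.parser")

for anchor in soup.find_all("a", class_="storylink"):  # every headline, not just the top one
    print(anchor.getText(), "->", anchor.get("href"))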

Day 46 – Spotify Musical Time Machine

This one combines web scraping with the use of APIs, which was covered previously; specifically, the Spotify API.  The objective is to have the user input a date, then scrape the Billboard Top 100 for that date and create a Spotify playlist from the results.

This one was actually tricky, and, as I often do, I added a bit to keep it robust.  Firstly, I created a function to verify that the entered date was, in fact, a valid date.  Knowing my luck, there is a function that does this in datetime, but writing it up was fun.  It could be better, though; it only verifies that the day is between 1 and 31, for example.  Something I may clean up later, I think.
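Sure enough, datetime can do most of the work; a stricter check could be something like this, which also rejects impossible dates like February 31st.

from datetime import datetime

def is_valid_date(text):
    try:
        datetime.strptime(text, "%Y-%m-%d")  # the format the Billboard chart URLs use
        return True
    except ValueError:
        return False

print(is_valid_date("2000-02-31"))  # False
print(is_valid_date("1999-12-31"))  # True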

The real tricky part was dealing with the scraping.  Billboard's tables are not very clean and not really scrape-able.  I had come up with a way to get all the song titles, but the resulting list was full of garbage data.  I set about cleaning the garbage data out by filtering the results list through a second list of keywords, but I noticed someone in the comments had found a simpler solution: using Beautiful Soup to search for "li" (list items) containing an "h3" (heading 3), which easily returned the proper list.
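That cleaner selector, roughly (Billboard reshuffles its markup now and then, so treat the selector as a moving target):

import requests
from bs4 import BeautifulSoup

date = "1999-12-31"  # example chart date
# Billboard sometimes wants a browser-like User-Agent header
response = requests.get(f"https://www.billboard.com/charts/hot-100/{date}/",
                        headers={"User-Agent": "Mozilla/5.0"})
soup = BeautifulSoup(response.text, "html.parser")

song_titles = [tag.getText(strip=True) for tag in soup.select("li h3")]
print(song_titles[:5])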

So I tried the same for the artists, filtering by "li" then by "span", which … returned 900 items.  So I added another filter on the class used by the "span" containing the artist, which did not help at all.  Fortunately, I had already solved this problem while working on the song list.  I created a list of keywords and phrases to filter on, then ran my results across it; eventually I was able to output 100 pairs of "Song Title – Artist Name".

The really tricky part, though, was using the Spotify API.  Ohhhh boy, what a mess.  There seem to be several ways to authenticate, and they don't work together, and neither the API documentation for Spotify nor for Spotipy is amazing.  It took a lot of digging through searches and testing to get the ball rolling, then some more help from code around the web.  But hey, that's part of what coding is: "making it work".

The first issue was getting logged in, which meant using OAuth and getting a special auth token, which Spotipy would use to authenticate with.
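The authentication piece ends up being a handful of Spotipy lines once you find the right combination; the client ID, secret, and redirect URI are placeholders from the Spotify developer dashboard.

import spotipy
from spotipy.oauth2 import SpotifyOAuth

sp = spotipy.Spotify(auth_manager=SpotifyOAuth(
    client_id="YOUR_CLIENT_ID",
    client_secret="YOUR_CLIENT_SECRET",
    redirect_uri="http://example.com/callback",
    scope="playlist-modify-private",  # needed to create and fill a private playlist
))
user_id = sp.current_user()["id"]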

The second issue, once that was working, was to create the playlist, which didn't end up being too hard: just one line of Spotipy code, plus grabbing the playlist ID out of the response into a variable.  Still, I deleted so many "Test List" playlists from my account.

So, the actual tricky part was that Spotify doesn't work super great if you just search with "artist" and "track".  Instead, you get the ID of the artist, then search within that artist for the track, which works much more smoothly.  Why does that matter?  To add tracks to a playlist, you add them by their Spotify IDs.  Thankfully, I could throw a whole list of them up at once.
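With an authenticated Spotipy client (sp, as above), the create/search/add flow looks roughly like this; I've shown the simpler combined query here rather than the two-step artist-ID lookup I described, and the sample songs stand in for the scraped Billboard rows.

playlist = sp.user_playlist_create(user_id, "1999-12-31 Billboard 100", public=False)

track_uris = []
for song, artist in [("Smooth", "Santana"), ("Back At One", "Brian McKnight")]:  # sample rows
    result = sp.search(q=f"track:{song} artist:{artist}", type="track", limit=1)
    items = result["tracks"]["items"]
    if items:  # skip the songs Spotify just doesn't have
        track_uris.append(items[0]["uri"])

sp.playlist_add_items(playlist["id"], track_uris)  # one call for the whole list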

The end result works pretty flawlessly though, which is cool.  Though it also shows some of the holes in the Spotify catalogue as you get into older tracks.  My playlist for my birthday in 1979 is missing 23 tracks out of 100.

Also, I may look into whether there is an API for Amazon Music in the future, since that is what I sometimes use instead of Spotify.

Day 47 – Amazon Price Tracker

Ok, this one will actually be useful to me in the long run.  Like, actually useful.  I already use sites like Camel Camel Camel, but running my own tracker would be even better.  Especially because one of my other primary hobbies is collecting Plastic Crack (toys).  Getting deals on things is definitely useful, especially given how expensive things are these days.

Also, I have not found a good way to monitor for sales/price drops on eBooks, which is another advantage to straight scraping web pages.  

So I even added to this one a bit.  Instead of looking for one item, it reads links and desired prices from a text file.  Now, if you look at the code, it probably could be cleaned up with a better import, treating it as a CSV instead of raw text, but I wanted to keep things as simple as possible for anyone who might run this script to monitor prices.  It's just "LINK,PRICE".  Easy, simple.
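Reading the watch list is about as simple as it gets; a sketch, with the file name being arbitrary:

watch_list = []
with open("items.txt", encoding="utf-8") as f:
    for line in f:
        if not line.strip():
            continue                          # skip blank lines
        link, price = line.rsplit(",", 1)     # split on the last comma, in case the URL has one
        watch_list.append((link.strip(), float(price)))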

Day 48 – Selenium Chrome Driver

The Day 48 lesson was an intro to the Selenium Chrome Driver.  This is a bridge tool that I imagine can connect to many languages, but in this case we used Python; it can open its own dummy web browser window, then read and interact with it.

So the first bit was just some general examples, followed by actually using it to pull the events list from Python.org and dump the events into a dictionary.  I could actually see this being useful for various sites, because so few sites have easy-to-find calendar links for events.  I'm sure there is some way to add calendar events to a calendar with Python.  Just one for the "future projects" list.
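The events pull came down to a couple of selectors and a loop; the .event-widget class is what python.org's homepage used at the time, so no promises it still matches.

from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
driver.get("https://www.python.org")

times = driver.find_elements(By.CSS_SELECTOR, ".event-widget time")
names = driver.find_elements(By.CSS_SELECTOR, ".event-widget li a")
events = {n: {"time": times[n].text, "name": names[n].text} for n in range(len(times))}
print(events)
driver.quit()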

Afterwards we learned about some interaction with Selenium, filling in forms and clicking links to navigate Wikipedia.  

Finally, the day's project was to automate playing a Cookie Clicker game.  These "Clicker" games are pretty popular with some folks and basically amount to clicking an object as quickly as possible.  The game includes some upgrades, and the assignment itself was pretty open on how to handle them.  There was a sort of side challenge to see who would get the highest "Cookies Per Second".  I set mine up to scan the prices each round and, if something could be bought, buy it.  This got me up to about 50 CPS after 5 minutes.  It could be better.  I may go back and adjust it to stop buying lower levels once a higher level can be bought, which I think might be a better method.  Why buy Grandmas when you could buy Factories?
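My buy-anything-affordable loop looked something like this; the element IDs and selectors here are from memory of the clicker page and would need re-checking before anyone reuses it.

import time
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
driver.get("https://orteil.dashnet.org/experiments/cookie/")
cookie = driver.find_element(By.ID, "cookie")

next_shop = time.time() + 5
while True:
    cookie.click()
    if time.time() >= next_shop:  # every few seconds, stop clicking and go shopping
        money = int(driver.find_element(By.ID, "money").text.replace(",", ""))
        for item in driver.find_elements(By.CSS_SELECTOR, "#store div"):
            parts = item.text.split("-")  # store items read roughly like "Grandma - 100"
            if len(parts) == 2 and parts[1].strip().replace(",", "").isdigit():
                if int(parts[1].strip().replace(",", "")) <= money:
                    item.click()  # buy anything we can currently afford
        next_shop = time.time() + 5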

Day 49 – LinkedIn Job Applier

So, I completely overhauled this one, but kept it in the spirit of things, because the point is more to practice using Selenium in more complex ways.  The original objective was to make a bot that would open LinkedIn, sign in to your account, go to the Jobs page, search for "Python Developer", find jobs with "Easy Apply", and click through the apply process.

I am not in the market for a job, so applying for random jobs seems like a dumb idea.  I also use two-factor on my LinkedIn account, so logging in automatically would be quite impossible.  The course suggested making a "fake account" to get around this, but that seems a bit rude.  It also suggested simply following companies instead of applying, but I'd rather not clutter up my feed with weird false signals.

So instead…

My bot will open LinkedIn and go to the Jobs page for each term in an array of job terms (for the test I used "Python Developer" and "Java Developer").  Then it takes those results, strips out the company name and the URL to the job opening, and compiles them into an email digest that it sends out.

One issue I did have is that LinkedIn apparently serves different CSS to Chrome versus Firefox, because I was just NOT getting the results back for the links to each job; it turns out the link element has a different class in Chrome, which Selenium was using, than in Firefox, which I was using to inspect the code (and use as my browser).

Anyway, it works in the spirit of what the lesson was trying to accomplish, without actually passing any real personal data along.

Day 50 – Tinder Auto Swiper

So, I am really not in the market to use Tinder at all.  I was going to just skip this one.

Then I decided, "You know what, I can make a fake profile with a picture from https://www.thispersondoesnotexist.com/."

But then it seems dumb to get people to match with a bot.

No wait, I can set up the bot to reject everyone, swiping whatever direction "reject" is.  No matches!

Oh, it needs a log in via Google, Facebook, or Phone Number.  Never mind.

No wait, I have some old Facebook Profiles for a couple of my cats, I will just use one of those to log in with!

Oh, it still wants a phone number.

So anyway, I decided even trying to fake it was not worth the trouble.  But hey, halfway there!