#100DaysOfCode

100 Days of Python, Projects 54-57 #100DaysofCode

Back to web development again, but with a different twist this time. Instead of scraping things, we’re learning Flask, to produce little Python-based websites. In doing these exercises, I find I am kind of wondering why one would use Python over, say, Apache, or NGINX, or even IIS. I can sort of see where it’s useful, and maybe later we will get to more of its usefulness. My primary issue is that the HTML code part of it ends up being VERY specifically Flask based. Like, Flask looks for images and CSS in specific folders. Plus if you use any sort of variables, they all get passed to the HTML in a very particular way.

I had considered that it might be useful for sharing some of the code I have written through my web server, but in my research, things like Tkinter and Turtle don’t work at all through Flask.  I was kind of hoping it was smart enough to produce little Browser pop ups or something to render the graphics out.

This section isn’t super complex so far, but it wraps up the Intermediate+ section with a little interlude for Bootstrap in between, so I figure it’s a good little chunk to keep in its own write-up.

As usual, the code is all on Github.

Day 54 – Intro to Flask

There was no real project today. We created a basic “Hello World” Flask server, then created some decorator functions. It was interesting, but not really that exciting to write up. I do somewhat question the usefulness of a decorator versus just having a function that takes an input and modifies it directly.
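
For reference, the whole thing is only a few lines; a minimal sketch of the kind of “Hello World” server we built (the decorator is the interesting part, since app.route is itself a decorator):

from flask import Flask

app = Flask(__name__)

# app.route is a decorator: it registers this function as the handler for "/"
@app.route("/")
def hello_world():
    return "Hello, World!"

if __name__ == "__main__":
    app.run(debug=True)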

Day 55 – Higher Lower Game Returns

The Day 55 lessons were a bit better. We covered decorators a bit more, and how to handle URLs in Flask, which brings me back to the “Is this better?” question I mentioned in the opening, since once again, the code will get weird to use outside of Flask.

I had a lot of fun with the project though. It’s a web version of the “Higher-Lower” game from way back on Day 14. You pick a number, it tells you if it’s higher or lower, only with web pages. It was essentially a way to learn about using dynamic URLs in Flask, but spiced up for fun. I added a nav bar to mine so the user didn’t have to type a URL and could just click the next number to guess. I also used a bunch of silly GIFs from my favorite musicians instead of cat GIFs on each page.

It’s kind of useless, but it was fun to build.
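
The dynamic URL bit boils down to a route with a typed placeholder. A rough sketch of the idea, minus my nav bar and GIFs:

from flask import Flask
import random

app = Flask(__name__)
answer = random.randint(0, 9)

# <int:guess> makes Flask parse the URL segment and pass it in as an int
@app.route("/<int:guess>")
def check_guess(guess):
    if guess > answer:
        return "<h1>Too high, try again!</h1>"
    elif guess < answer:
        return "<h1>Too low, try again!</h1>"
    return "<h1>You found me!</h1>"

if __name__ == "__main__":
    app.run(debug=True)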

Day 56 – Personal Website

This day was mostly about how to quickly import existing code to Flask. It involved a couple of practice projects and a “real” project. The first practice was taking the Lesson 41-44 website and importing it into Flask.

The second practice was to use someone else’s template and import it to Flask, as well as modifying and simplifying that code.

The final project was to build a simple “Name Card” website with some social links. Essentially, it was a repeat of the second practice, but actually replacing images and information. I kind of prefer the previously made CV website, and it’s easier to host on the web, so I’m going to stick with that for now.

Day 57 – Blog Capstone Project Part 1

This project picks back up on Day 59 with the start of the Advanced section of the course. The basic idea here was to build a simple blog interface that would read some generic JSON posts and display them, and then let users click into each blog post to read more.

I’m particularly proud of my result, which uses only one HTML file that varies depending on whether the user has clicked on a blog post or not. I feel like it was a pretty slick solution. The starter files also included a file to make a “Post class”. Using this class was not part of the assignment, but I suspect it will come up later, so I went ahead and built it, though I didn’t use it to read the blog posts.
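
The one-template trick is basically two routes feeding the same render_template call; a rough sketch of the idea, with my own made-up names rather than the actual course code:

from flask import Flask, render_template

app = Flask(__name__)
posts = []  # loaded from the generic JSON file in the real thing

@app.route("/")
@app.route("/post/<int:post_id>")
def home(post_id=None):
    # one index.html either way; a Jinja conditional checks whether a post was picked
    return render_template("index.html", posts=posts, selected=post_id)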

If this comes out alright, I may actually use it somewhere; I’ve been looking for something to put on Joshmiller.net. Though I also don’t really NEED another blog outlet. I barely maintain the one I regularly use now.

100 Days of Python, Projects 51-53 #100DaysofCode

Here we are now with a few more automated bot tasks. It’s been a fun series of lessons, though I enjoyed using Beautiful Soup more than Selenium. Selenium runs into too many anti-bot measures on the web to be truly effective. I mean, it’s definitely a useful tool, but in my experience, it’s not reliable enough. BS seems to be much more effective, though it can’t really interact with pages.

In the long run, I think I am mostly just irritated by “clever bullshit” on web pages that makes both pieces of software a pain to work with. Take Instagram: none of the classes or IDs are anything but jumbled characters. The code feels like it was written by a machine, and it probably was.

Also, this round is a bit shorter than before because the course is veering off into a new direction with Flask Apps, so it seemed appropriate to wrap things up on the Automation Section of the Projects.

Day 51 – Twitter Speed Complainer

This project is great, because it’s something I have tried to run from other people’s code, but it never seems to actually work. Now, I just have my own code to run.

EZ Mode.

It will need something with a desktop to run it on, but I have a whole Windows PC for running random shit and a mess of Raspberry Pis. I don’t even care about the complaining part; in fact, I would rather not. I just want to track Internet speed. I may even change this to push to a spreadsheet or database or something later.

But for now, it Tweets.

So, the speed test part was easy, though I used SpeedOf.me instead of SpeedTest.net, because SpeedTest.net supposedly gives dodgy numbers by partnering with ISPs and putting servers in ISP data centers. I just prefer SpeedOf.me mostly; it’s cleaner.

The Twitter part was tricky… ish… So, a common problem I keep running into with Selenium is that it thinks my bot programs are bots.

I’m so offended for my bots, accusing them of being bots. They run into captchas and email verifications and just flat out fail to log in or load half the time. It makes sense; captchas and email verification exist 100% to stop people from abusing things like Selenium. Fortunately, Twitter bots are one thing I do have a fair amount of experience with. I wrote one ages ago that just tweeted the uptime of the server. I wrote one script that would pull lines from a text file and tweet them out at an interval. I have another Python based bot that tweets images. What do these bots do differently? They are 100% bots, running through the proper Twitter API, and labeled as such.

So, since Selenium was being a pain to deal with using Twitter, I pulled out my Image Posting Bot code and scavenged out the pieces I needed, which was about 4 or 5 lines of code.  It uses a Python Library called Tweepy.  In order to use Tweepy, you have to use the Twitter Developer console to get API Keys, which I already had.
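
For reference, the scavenged bit really is tiny; a minimal sketch, assuming v1.1-style keys from the developer console (placeholders and made-up numbers here):

import tweepy

# keys and tokens come from the Twitter Developer console (placeholders)
auth = tweepy.OAuthHandler("CONSUMER_KEY", "CONSUMER_SECRET")
auth.set_access_token("ACCESS_TOKEN", "ACCESS_TOKEN_SECRET")
api = tweepy.API(auth)

download, upload = 93.5, 11.2  # numbers handed over from the speed test step
api.update_status(f"Internet speed right now: {download} Mbps down / {upload} Mbps up.")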

Day 52 – Instagram Follower

Another almost-useful project. For this project, you open up Instagram and log in, then it opens an account of your choosing and follows anyone following that account.

Now, while I have a love/hate relationship with Instagram, I am not super interested in cluttering up my feed with thousands of accounts.  So, while I did complete the task, I set it up to ONLY follow the first 10 accounts.   I also added a check to make sure I wasn’t already following said account.

I may revisit this again later with other, more useful ways to interact with IG.  Maybe instead of following random people from another account, it auto follows back.  Or maybe it goes through “suggested” and looks for keywords in a person’s profile and follows them.

Day 53 – Zillow Data Aggregator Capstone

The final project for this section combines Selenium and Beautiful Soup to aggregate real estate listings from Zillow into a Google Spreadsheet. I quite liked this one actually; it’s straightforward and relatively harmless. I did run into an issue where it started thinking I was a bot, but by that point, I knew I could successfully scrape what I needed from Zillow, so I commented out the Zillow call and replaced it with a file load using an HTML snapshot of the Zillow page.

This was very easy to slide in as a fix because I was already pulling the source code using Selenium into a variable, then passing that variable to Beautiful Soup.  It was simply a matter of passing the file read instead.
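
Since Beautiful Soup only ever sees a string of HTML, the swap is about one line; roughly (the snapshot file name is my own):

from bs4 import BeautifulSoup

# live version: html = driver.page_source, after driver.get() on the Zillow URL
# offline version: read the saved snapshot instead
with open("zillow_snapshot.html") as file:
    html = file.read()

soup = BeautifulSoup(html, "html.parser")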

Scraping the data itself was a bit tricky. Zillow seems to do some funny dynamic loading, so my numbers of listings, addresses, and prices didn’t always match. To solve this, I added a line that just uses whichever count is smallest. They seem to capture in order, but eventually some fell off, so if I got 8 prices and 10 addresses, I just took the first 8 of each.
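
In code, that is just a min() over the list lengths; something like:

prices = ["$350,000", "$289,900", "$410,000"]  # example data: 3 prices...
addresses = ["123 Main St", "456 Oak Ave", "789 Pine Rd", "12 Elm Ct"]  # ...but 4 addresses

# trust only the overlap when the scraped lists disagree in length
count = min(len(prices), len(addresses))
listings = list(zip(prices[:count], addresses[:count]))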

Another issue I came across: the URLs for each listing don’t always come through as full URLs. Sometimes you have to add “https://www.zillow.com” to the front. It wasn’t a hard fix:

if "zillow" not in link:
    link = "https://www.zillow.com" + link

There was also an issue with the links, because each link shows up twice with the scrape I was using. A quick search gave a clever solution to remove duplicates. It’s essentially: new list = list(dictionary made from the old list). A dictionary can’t have duplicate keys, so those get discarded converting the list to a dictionary, and then that result just gets flattened back out into a list.
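
The one-liner in question, which also preserves order since dictionaries keep insertion order:

links = ["/homedetails/123", "/homedetails/123", "/homedetails/456"]  # example duplicates

# dict.fromkeys drops the duplicate keys; list() flattens it back out
links = list(dict.fromkeys(links))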

Lastly was the form entry itself. The data entry uses a method I’ve used before for entering data to Google remotely: Google Forms. Essentially, Selenium fills out and submits the form over and over for each result. I had a bit of an issue here because the input boxes use funny tags and are hard to target directly. Then my XPaths were not working properly. I fixed this by adding two things: one, I had Selenium open the browser maximized, to make sure everything loaded; two, I added more sleep() delays here and there, to make sure things loaded all the way.

One thing I have found working with Selenium, you can never have too many sleep()s.  The web can be a slow place.

100 Days of Python, Projects 45-50 #100DaysofCode

Things are continuing to be interesting and useful here with the introduction of Beautiful Soup, a tool used to parse unstructured data into usable structured data. Well, more or less that’s what it does. It’s useful for parsing through scraped web page data when a site does not have its own API available.

As normal, everything is on GitHub.

Day 45 – Must Watch Movies List and Hacker News Headline Scraper

As an introduction to using the tool, Beautiful Soup, we had two simple projects.  The training project actually feels more useful than the official project of the day, though I also remixed the training project a bit.

The “Project of the Day” was to scrape Empire Magazine’s top 100 must-watch movies and output them to a text file. I am pretty sure this list does not change regularly, and thus it’s sort of a “one and done” run.

The trainer project was more interesting, because it scraped the news headlines from Hacker News, a Reddit-like site centered around coding and technology that is absolutely bare bones in its interface. The course notes were just to get the “top headline of the day”, but I modified mine to give a list of all headlines and links. I will probably combine this with the previously covered email tools to get a digest of stories emailed to myself each day.
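
The whole scrape is satisfyingly short. A sketch of my all-headlines version; note the “titleline” class is an assumption based on Hacker News’s current markup, which has changed over the years:

import requests
from bs4 import BeautifulSoup

response = requests.get("https://news.ycombinator.com/")
soup = BeautifulSoup(response.text, "html.parser")

# each headline link sits inside a span with class "titleline" (markup assumption)
for anchor in soup.select(".titleline > a"):
    print(anchor.getText(), "->", anchor.get("href"))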

Day 46 – Spotify Musical Time Machine

This one combines web scraping with the use of APIs, which was covered previously; specifically, the Spotify API. The objective is to get the user to input a date, then scrape the Billboard Hot 100 for that date and create a Spotify playlist based on the result.

This one was actually tricky and, as I often do, I added a bit to keep it robust. First, I created a function to verify that the entered date was, in fact, a valid date. Knowing my luck, there is a function that does this in datetime, but writing it up was fun. It could be better though; it only verifies that the day is between 1 and 31, for example. Something I may clean up later, I think.
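
(It turns out there is such a function, more or less: strptime() raises a ValueError on impossible dates. A sketch of that approach, as an alternative to my hand-rolled checker:)

from datetime import datetime

def is_valid_date(text):
    # strptime rejects impossible dates like February 31st with a ValueError
    try:
        datetime.strptime(text, "%Y-%m-%d")
        return True
    except ValueError:
        return False

print(is_valid_date("1979-02-31"))  # False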

The real tricky part was dealing with the scraping. Billboard’s tables are not very clean and not really scrape-able. I had come up with a way to get all the song titles, but the resulting list was full of garbage data. I set about cleaning the garbage data out by filtering the results list through a second list of keywords, but I noticed someone in the comments had found a simpler solution: using Beautiful Soup to search for “li” (list items) containing an “h3” (heading 3), which easily returned the proper list.
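
That selector trick is essentially a one-liner (the date in the URL is made up here, and I added a User-Agent header since Billboard can be picky about plain requests):

import requests
from bs4 import BeautifulSoup

url = "https://www.billboard.com/charts/hot-100/1979-08-21/"
response = requests.get(url, headers={"User-Agent": "Mozilla/5.0"})
soup = BeautifulSoup(response.text, "html.parser")

# list items containing an h3 are exactly the song titles on the chart page
song_titles = [tag.getText().strip() for tag in soup.select("li h3")]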

So I tried the same for the artists: filter by “li”, then by “span”, which… returned 900 items. So I added another filter on the class used by the “span” containing the artist, which did not help at all. Fortunately, I had already solved this problem while working on the song list. I created a list of keywords and phrases to filter, ran my results across it, and eventually I was able to output 100 sets of “Song Title – Artist Name”.

The real tricky part was using the Spotify API. Ohhhh boy, what a mess. There seem to be several ways to authenticate, they don’t work together, and neither Spotify’s nor Spotipy’s API documentation is amazing. It took a lot of digging on searches and testing to get the ball rolling, then some more help with code from around the web. But hey, that’s part of what coding is: “making it work”.

The first issue was getting logged in, which meant using OAuth and getting a special auth token, which Spotipy would use to authenticate with.
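
For the record, the combination that eventually worked for me was Spotipy’s built-in OAuth helper; a minimal sketch, assuming a client ID and secret from the Spotify Developer dashboard (placeholders here):

import spotipy
from spotipy.oauth2 import SpotifyOAuth

sp = spotipy.Spotify(auth_manager=SpotifyOAuth(
    client_id="YOUR_CLIENT_ID",
    client_secret="YOUR_CLIENT_SECRET",
    redirect_uri="http://localhost:8888/callback",
    scope="playlist-modify-private",  # the scope needed to create and fill a private playlist
))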

The second issue, once that was working, was to create the playlist, which didn’t end up being too hard: just one line of Spotipy code, grabbing the playlist ID out of the response into a variable. Still, I deleted so many “Test List” playlists from my account.

So, the real tricky part was that Spotify doesn’t work super great if you just search with “artist” and “track”. Instead you get the ID of the artist, then search within that artist for the track, which works much more smoothly. Why does this matter? To add tracks to a playlist, you add them by their Spotify IDs. Thankfully, I could throw a whole list of them up at once.
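
Continuing from the sp object above, the create-search-add flow is short once the IDs exist. A simplified sketch; my actual version looks up the artist ID first and searches within it, while this one just uses a combined query:

playlist = sp.user_playlist_create(user=sp.me()["id"], name="Time Machine", public=False)

songs = [("My Sharona", "The Knack")]  # the scraped "Song Title - Artist" pairs
track_ids = []
for title, artist in songs:
    result = sp.search(q=f"track:{title} artist:{artist}", type="track", limit=1)
    items = result["tracks"]["items"]
    if items:  # older tracks can be missing from the catalogue entirely
        track_ids.append(items[0]["id"])

# one call takes the whole list of IDs at once
sp.playlist_add_items(playlist["id"], track_ids)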

The end result works pretty flawlessly though, which is cool. Though it also shows some of the holes in the Spotify catalogue as you get into older tracks. My playlist for my birthday in 1979 is missing 23 tracks out of 100.

Also, I may look into whether there is an API for Amazon Music in the future, since that is what I use instead of Spotify, sometimes.

Day 47 – Amazon Price Tracker

Ok, this one will actually be useful to me in the long run. Like, actually useful. I already use sites like CamelCamelCamel, but running my own tracker would be even better. Especially because one of my other primary hobbies is collecting Plastic Crack (toys). Getting deals on things is definitely useful, especially given how expensive things are these days.

Also, I have not found a good way to monitor for sales/price drops on eBooks, which is another advantage to straight scraping web pages.  

So I even added to this one a bit. Instead of looking for one item, it reads links and desired prices from a text file. Now, if you look at the code, it probably could be cleaned up with a better import, treating it as a CSV instead of raw text, but I wanted to keep things as simple as possible for anyone who might run this script to monitor prices. It’s just “LINK,PRICE”. Easy, simple.
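
Reading that format is about as simple as file parsing gets; roughly (the file name is mine):

# each line of the watch file is just "LINK,PRICE"
with open("watchlist.txt") as file:
    for line in file:
        link, price = line.strip().rsplit(",", 1)  # rsplit, in case a link contains a comma
        print(link, float(price))  # hand off to the scrape-and-compare logic here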

Day 48 – Selenium Chrome Driver

The Day 48 lesson was an intro to the Selenium Chrome Driver software. This is a bridge tool that I imagine can connect to many languages, but in this case we used Python. It can open its own dummy web browser window, then read and interact with it.

So the first bit was just some general examples, followed by actually using it to pull the events list from Python.org and dump it into a dictionary. I could actually see this being useful for various sites, because so few sites have easy-to-find calendar links for events. I’m sure there is some way to add calendar events to a calendar with Python. Just one for the “future projects” list.

Afterwards we learned about some interaction with Selenium, filling in forms and clicking links to navigate Wikipedia.  

Finally the day’s project was to automate playing a Cookie Clicker game. These “Clicker” games are pretty popular with some folks and basically amount to clicking an object as quickly as possible. The game includes some upgrades, and the assignment itself was pretty open on how to handle them. There was a sort of side challenge to see who could get the highest “Cookies Per Second”. I set mine up to scan the prices each round and, if something could be bought, buy it. This got me up to about 50 CPS after 5 minutes. It could be better. I may go back and adjust it to stop buying lower levels once a higher level can be bought, which I think might be a better method. Why buy Grandmas when you could buy Factories?

Day 49 – LinkedIn Job Applier

So, I completely overhauled this one, but kept it in the spirit of things, because the point is more to practice using Selenium in more complex ways. The original objective was to make a bot that would open LinkedIn, sign in to your account, go to the Jobs page, search for “Python Developer”, find jobs with “Easy Apply”, and click through the Apply process.

I am not in the market for a job, so applying for random jobs seems like a dumb idea. I also use 2-Factor on my LinkedIn account, so logging in automatically would be quite impossible. The course suggested making a “fake account” to get around this, but that seems a bit rude. It also suggested simply following companies instead of applying, but I’d rather not clutter up my feed with weird false signals.

So instead…

My bot will open LinkedIn and go to the Jobs page for each term in an array of job terms individually (for the test I used “Python Developer” and “Java Developer”). Then it takes those results, strips out the company name and the URL to the job opening, and compiles them into an email digest that it sends out.

One issue I did have is that LinkedIn apparently uses different CSS for Chrome versus Firefox. I was just NOT getting results back for the links to each job, and it turns out the link element has a different class in Chrome, which Selenium was using, than in Firefox, which I was using to inspect the code (and use as my browser).

Anyway, it works in the spirit of what the lesson was trying to accomplish, without actually passing any real personal data along.

Day 50 – Tinder Auto Swiper

So, I am really not in the market to use Tinder at all.  I was going to just skip this one.

Then I decided, “You know what, I can make a fake profile with a https://www.thispersondoesnotexist.com/ profile.”

But then it seems dumb to get people to match with a bot.

No wait, I can set up the bot to Reject everyone, swipe, whatever direction “reject” is.  No matches!

Oh, it needs a log in via Google, Facebook, or Phone Number.  Never mind.

No wait, I have some old Facebook Profiles for a couple of my cats, I will just use one of those to log in with!

Oh, it still wants a phone number.

So anyway, I decided even trying to fake it was not worth the trouble.   But hey, Halfway there!

100 Days of Python, Projects 23-31 #100DaysofCode

In case anyone is keeping track (which they are not), you might notice that I’ve not been posting these in “real time”. I’ve been writing them later, after the fact. I kind of hope to change that. Also, I started on September 12th, and my last post was on September 28th, which is 16 days, though the last post went through Day 22. I’m actually moving faster than “100 Days” on my “100 Days of Code”.

This is intentional: I want to do Advent of Code again this year, starting December 1st, so I hope to finish this up before December so I am not overlapping these two daily coding projects. I just don’t have the time to do both. Based on my calculations, I need to get 18 projects ahead to end on November 30th.

Anyway, this round I want to wrap up the “Intermediate” level projects.  This one may be a bit boring, just because it is a lot more “Basic Data Analysis” projects than “Cool Retro Game remakes”.

Day 23 – Turtle Crossing or Crossy Turtle

I was a little disappointed, because the description sounded like this was going to be a “remake” of Frogger, only with turtles. Instead it’s a remake-ish of Crossy Road, which is slightly less exciting. I feel like I could probably make my own Frogger, and I will probably put that on a ToDo list somewhere to forget about, because all my ToDo lists are 1000 items long….

Anyway, the job is to get the turtle across by avoiding the cars, which randomly spawn on one side of the screen. Each time the turtle makes it, the cars start moving slightly faster. It was an interesting project to develop, and like Snake, I cleaned it up a bit from the instructor’s code by aligning the cars and turtle to lines. The main thing this does is help remove the ugly overlap you get with a pure random spawn. I considered adding some code to stop cars from spawning on top of each other in the same line, but decided I didn’t care enough to bother.

The trickiest part of this was adjusting the delays and spawn timer so things felt right. With the spawn timer too low, cars were just pouring out. With it too high, the road was just wide open. Another issue I had briefly: when the cars sped up, only the cars currently on screen sped up; everything newly spawned would still run slow.

In general I don’t really care for this game, primarily, I think, because the game actually gets easier the faster the cars get, since the gaps become huge and easy to pass through. It could probably be adjusted to spawn more cars at higher speeds, which may fix this.

Day 24 – Improved Snake Game and Mail Merge

Today’s lesson had to do with file I/O. I’m actually already pretty familiar with this; a lot of the scripts I have written in my free time take a file in, manipulate the data in some way, and spit a file out.

Step one was to modify the Snake Game to have a persistent high score written to a file, which was neat.
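
The persistence part is just a few lines of file I/O; a sketch (data.txt is the file name the course used, if I remember right):

# load the saved high score, or start at zero if the file doesn't exist yet
try:
    with open("data.txt") as file:
        high_score = int(file.read())
except FileNotFoundError:
    high_score = 0

# later, when the score beats the record, write it back out
with open("data.txt", mode="w") as file:
    file.write(str(high_score))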

The second was a simple Mail Merge exercise.  Take in a letter, take in a list of names, output a series of letters with the names inserted.

Day 25 – US State Naming Game

The exercise today was an introduction to Pandas, which is a tool that makes working with large data sets much easier. In the first round of practice, we pulled data from a large CSV of squirrel data and counted squirrel fur colors, from here: https://data.cityofnewyork.us/Environment/2018-Central-Park-Squirrel-Census-Squirrel-Data/vfnx-vebw, which was silly and fun.
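
The color count is delightfully terse with Pandas; a sketch (the column name is from the NYC data set, the file name is mine):

import pandas as pd

data = pd.read_csv("2018_Central_Park_Squirrel_Census.csv")
# value_counts tallies each fur color in one call
print(data["Primary Fur Color"].value_counts())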

The second was a little learning game where the player names the states of the United States. Each correct guess shows on the map. When you type “Exit” you get a file back of which states were missed. One of the interesting things here was adding a background image to the Turtle Screen. I also tried to build it in a way that it would be easy to, say, swap out the background image and data set for European countries, or anything similar.

On a total side note, I got all 50 states on my first try. I am pretty sure I still know all the capitals as well.

Day 26 – List and Dictionary Comprehension and NATO Alphabet Translator

Probably one of the quickest programming days, but a longer lessons page with several simple exercises. It also felt like one of the more “necessary in the long run” lessons. Basically, the whole lesson was about better ways to iterate through data, with single-line code like list = [item for item in other_list].

The end project was very short: pull in a CSV of NATO phonetics using Pandas, then iterate it into a dictionary in one line. Ask for a user input, loop through the user input to make a list of code words, output the list.
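
The whole project really is about that short; a sketch, with the CSV and column names as I remember them from the course files:

import pandas as pd

data = pd.read_csv("nato_phonetic_alphabet.csv")
# one-line dict comprehension: letter -> code word
phonetic = {row.letter: row.code for (index, row) in data.iterrows()}

word = input("Enter a word: ").upper()
print([phonetic[letter] for letter in word])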

Day 27 – Miles to Kilometers Conversion Project and Tkinter

So on the surface, a Miles to Kilometers converter isn’t that exciting. In fact, it’s a straightforward multiplication/division that even the most beginner-level coder could whip up. The point of this lesson was more of an introduction to Tkinter.

The end result is a little window-based converter that, in my case, goes both ways, so you can do Miles to KM or KM to Miles. The two-way part is probably boring, but it wasn’t part of the requirement for the project.
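
A minimal sketch of the one-way version; my two-way version just adds a second button (layout details here are from memory, not the course code):

import tkinter as tk

window = tk.Tk()
window.title("Mile to Km Converter")

entry = tk.Entry(window, width=10)
entry.grid(column=1, row=0)

result = tk.Label(window, text="0")
result.grid(column=1, row=1)

def to_km():
    # 1 mile is roughly 1.609 km
    result.config(text=round(float(entry.get()) * 1.609, 2))

tk.Button(window, text="Calculate", command=to_km).grid(column=1, row=2)
window.mainloop()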

Day 28 – Pomodoro Timer

The object of this lesson was to build a Pomodoro Method timer. I don’t super follow “productivity” methods, but I guess the Pomodoro Method is to use a timer: work for a period, take a short break, repeat this pattern a few times, then take a longer break. I don’t know if these times are set in stone, but for the project we used 25 minutes of work, 5 minutes of break, repeated 3 times, and then a 20-minute long break.

The actual lesson here is how to create a Tkinter window that is “constantly running” while still able to take some input and manipulate various bits of the UI. It counts down a timer, it adds check marks every round, and it has a start and reset button.

This one was actually pretty tricky, and I kept ending up with essentially several layered running timers, because I was using the window.after function improperly at first. Otherwise it’s just a loop that runs through some conditionals to see where it’s at in the cycle. I did manage to do like 90% before watching the videos, but it took me an extra session of code work to puzzle it out. The one bit I wasn’t sure on was how to make the Reset button actually stop the timer loop, which wasn’t complex, but I did have to get that one from the class.
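
The core pattern, once I stopped stacking timers, is a single chained after() call plus after_cancel() for the reset; a stripped-down sketch:

import tkinter as tk

window = tk.Tk()
label = tk.Label(window, text="25:00")
label.pack()

timer = None

def count_down(seconds):
    global timer
    label.config(text=f"{seconds // 60}:{seconds % 60:02d}")
    if seconds > 0:
        # schedule exactly ONE follow-up call; scheduling extras is how timers layer up
        timer = window.after(1000, count_down, seconds - 1)

def reset_timer():
    # cancelling the pending after() call is what actually stops the loop
    if timer is not None:
        window.after_cancel(timer)
    label.config(text="25:00")

tk.Button(window, text="Start", command=lambda: count_down(25 * 60)).pack()
tk.Button(window, text="Reset", command=reset_timer).pack()
window.mainloop()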

Day 29 – Password Manager Part 1

So, I really liked this project, and I need to find out how to make exe files out of these pieces of code. I actually would definitely use this project regularly, especially after adding some additional features, which may in fact come in tomorrow’s Part 2. This was also another project that I essentially completely built before actually watching the lessons, which I suppose means the previous lessons on how to use Tkinter worked well. I will say, the most annoying part of Tkinter is just how finicky the positioning can get. The instructor likes using grid() to place things, but personally, I think I like using place(); the problem is, I could get caught up tweaking place() for hours.

This project so far has combined several previous projects, from the Password Generator, to using TKinter, to writing output files.  I suspect tomorrow will add in even more with Reading Files, and working with “organized data” in the process.  I could also see using Ciphers to actually encrypt the output data file to something that’s not readable without a master password.

My favorite part was revisiting the “Password Generator” code created back on Day 5. In addition to simplifying it with some list comprehension, I also converted it to an Object Oriented class that can be imported and called. Not super difficult, but I still felt pretty proud of that one. It doesn’t really add a lot of benefit, but it was more just an exercise for my own ability. I also made it easy to change the default email address by putting it in a separate config-style file that I may work with more later.
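
A sketch of the class-ified generator, under my own naming rather than the course’s:

import random
import string

class PasswordGenerator:
    """The Day 5 generator, reworked as an importable class."""

    def __init__(self, length=16):
        self.length = length

    def generate(self):
        # list-comprehension-style version of the original letters/numbers/symbols shuffle
        chars = string.ascii_letters + string.digits + "!#$%&()*+"
        return "".join(random.choice(chars) for _ in range(self.length))

print(PasswordGenerator(20).generate())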

Day 30 – Password Manager Part 2

Most of this day’s lesson was about error handling, specifically “try:”, “except:”, “else:” and “finally:”. I kind of figured there was a better way of handling “random errors” than painstakingly considering every option, but I’ve never really had a need to look into it too deeply. Usually for error handling, I’d just stick an “if” into a loop, and if the “if” fails, it just keeps looping until the input is good.

I did this a lot early on, of my own accord, for many of the text inputs. I would make an array of “valid entries”, often something like:

valid = ["yes", "y", "no", "n"]
answer = ""  # start empty so the loop runs at least once
while answer not in valid:
    answer = input("Yes or no?: ").lower()

Which honestly is probably still plenty valid for small choice selections.
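
For contrast, the structured version from the lesson, sketched against the password data file (the file name is assumed):

import json

try:
    with open("passwords.json") as file:  # assumed data file name
        data = json.load(file)
except FileNotFoundError:
    # no saved data yet, so start with an empty store
    data = {}
else:
    # only runs if the file loaded cleanly
    print(f"Loaded {len(data)} saved entries.")
finally:
    # runs either way
    print("Load attempt finished.")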

I was a bit disappointed that the end result didn’t even make a token attempt to encode the output data; storing passwords in plain text is a really really really really really really bad idea. Oh well, future project.

All in all, this is definitely the most complex project so far, and it’s possibly the most complex single project I’ve ever done, code-wise. I’ve done some elaborate PHP/HTML/CSS web stuff, but those were always more ongoing projects with sub-projects tacked onto them.

Day 31 – Flashcard App

Another one that I definitely will use once I make it work in a standalone fashion in the future. This app was quite a bit simpler than the Password Manager, but I’ve found that so far, at the end of each section, there has been a complex capstone project, then a few simpler ones. The bigger purpose of this project was learning to manipulate JSON and CSV data better, in addition to making a simple GUI.

This app reads some data in, in this case language words, but it could be any set of data. Then it shows the foreign word for three seconds before showing the English word. If you guessed it correctly, you hit the green box; if not, hit the X. It’s kind of an honor-system thing, but if you are trying to learn a language, why would you cheat yourself like that?

It also removes “learned words” from the pool of possible cards, permanently.  Well, or until you reset it by deleting the save files.

I’ve written a few times about my goal to learn some languages besides English by 2030.  I’ve gotten alright at Spanish, and I’ve been working on Norwegian.  The class provided a word bank for French but also explained how to make your own word bank, which I did.  Making a new word bank was not required, but I know absolutely zero French, and it was hard to tell if the cards were accurate.  So I built a Norwegian Word Bank, and will definitely build a Spanish one.  The only problem is, I went with the “Top 10,000 words”, and you only really need the “Top 2-3000 words” to start grasping the language.  So my custom word bank helped, but not as much as it could have, because I started getting some really lesser used words.  It’s easy enough to trim it off though, I can just open the data file and delete the bottom 9,000 lines.

And so this wraps up the “Intermediate” lessons.  The next session is “Intermediate+”.  Based on the daily topic headers, it looks like this next section is going to delve way deeper into using APIs to gather and manipulate web services, which should be a lot of fun.  I’m definitely learning more and I plan to revisit some of my own personal projects to make them much better once I’ve finished with the course.  A lot of my old data manipulation involved lists and a lot of if/else statements inside for loops.  Which is messy, and probably slow.  A lot of the new tools I’ve learned will really help.

100 Days of Python, Projects 15-22 #100DaysofCode

So, the first set of projects for #100DaysOfCode were all fairly simple: basic text-based programs that run in a terminal and run through simple loops. The Intermediate section starting on Day 15 of the course is where things started to get quite a bit more interesting, though the basic code isn’t really all that complex yet.

There are two main topics covered during the Intermediate portion of the course: creating GUI interfaces with Turtle Graphics, and some introductory data analysis. These two topics don’t particularly overlap, but both seem to be the primary focus here. I’m rather enjoying the use of graphics over just terminal applications myself, though in the long run, handling CSV and other data feeds will probably be a lot more useful.

In the interest of brevity, I’m going to split the Intermediate section up into a couple of posts. I expect this to become more common as the projects become more complex; frankly, by the end, each project may even get its own post.

Anyway, on with the projects…  As before, all of the code is stored in this GitHub Repository.

Project 15 – The Coffee Machine Project

The actual focus for Day 15 was to set up PyCharm and get away from using Replit for the code projects. I’d already been using VS Code half the time anyway, but I switched over to PyCharm on this day. A few reasons: one, it’s what the course is using, so it’s easier to follow along if needed; two, it was easier to import libraries into projects than with VS Code.

The Coffee Machine project is a simple project that takes an order for a coffee, takes some coins, then spits out change and a coffee.  It doesn’t ACTUALLY do any of this physically, but if it did, that would be super impressive, manifesting physical objects with a laptop.

Project 16 – The Coffee Machine Project

Nope, you aren’t reading that wrong; the next day was the same project. The difference was, Day 16 was also an introduction to Object Oriented Programming. So for this project the students get some files with pre-made functions in them, and we build the same Coffee Machine using these functions in an Object Oriented way.

This was the first time I’ve actually learned something new in this course. All of my code before has essentially just been “one file”. The concept of breaking things into files and classes that do specific tasks is pretty neat, and I’ve gotten pretty good at it. That said, I can also see where it’s still a good idea to make a judgement call on whether something should be its own class or just part of the base program.

There was also a brief introduction to using Pretty Tables to display data and Turtle Graphics, though the final program didn’t use any graphics.

Project 17 – Quiz Game Project

Another Object Oriented project, though instead, we make everything this time.  The questions were provided, but they are set up in a way that it’s simple to replace them with new questions from Open Trivia.  This also really helps push how useful it can be to break a program apart like this, across files.  The questions aren’t a class, they are just a dictionary you import, but they can easily be quickly replaced and the same code will run on any set of questions.

Project 18 – Hirst Painting Project

This one was a more in-depth and proper look at Turtle Graphics. Turtle is Python’s built-in method for creating graphics and windows. I’m sure there are probably others out there, but this works pretty well for a simple interface.

The first practice was making the Turtle do some things, one of which was a pretty neat Spirograph drawer, which I may revisit in the future to make it useful.  Maybe have a pop up for how large or how many spirals to make.

The Hirst Painting project was inspired by the artist Damien Hirst, who apparently once sold some paintings consisting only of regularly placed dots. The program takes an input image, extracts a color palette from it, then creates a similar dot-based image. The instructor used a Hirst painting as the input; I opted to use the cover of the CHVRCHES album Every Open Eye. It’s a pretty neat result. It’s another that could be made more robust by prompting for an input image, how many colors to extract, and how many dots to draw.

Project 19 – Turtle Racing and Etch-A-Sketch

This day was essentially two projects.  One was a simple Etch-A-Sketch style drawing program, that served as an introduction to actually controlling the Turtle with keyboard inputs.  

The second I particularly enjoyed, though it wasn’t a very complex game. The second was Turtle Racing. The purpose was to demonstrate how you can reuse a class to manage several objects of the same type. It spawns 6 turtles, you guess which will win, then they race across the screen to the finish line.

Now why is this exciting?  Waaaaaay back in the year 2000, I had a semester to kill between Community College and University, so I took a Computer Science 101 course where I learned some C++ programming.  One of the projects for that class, was a Horse Racing game, that is essentially the same concept.   The Horse Racing Game can be found here, and there is even an exe that lets you play it.  https://github.com/RamenJunkie/C_and_CPP_Code_Snippets/tree/main/CPP%20Code/Horse%20Race

Project 20 and 21 – Snake Game

The first multi-day project of the course. Well, ok, the Coffee Machine was SORT OF multi-day, but not really. This project recreates the classic game Snake. I’m sure it existed before, but it was popularized by being included on Nokia phones back in the 90s and early 2000s.

You control the snake, eating food to get longer, and avoiding running into the wall or yourself.  I am pretty happy with the result other than the controls feel like they could be a bit more responsive.

One thing I am proud of with the Snake Game though, I noticed in the instructor’s examples, she was constantly missing the food just barely. In her code, the food just randomly spawns. I changed the code a bit so the food always spawns on the same grid the Snake runs on, which makes it way more reliable to pick up. I also added a border, but I’m less satisfied with that because it seems Python renders things a little funny and slightly off center. I suspect that Python Turtle things are drawn from one corner of the coordinates, and not centered on the coordinates.  
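
The grid fix is simple: snap the food’s random position to the same step size the snake moves in. A sketch, assuming the 20-pixel steps the course’s snake uses:

import random

GRID = 20  # the snake moves in 20px steps, so food snaps to the same grid

def random_food_position(extent=280):
    steps = extent // GRID
    x = random.randint(-steps, steps) * GRID
    y = random.randint(-steps, steps) * GRID
    return (x, y)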

Project 22 – Pong

I’m rather proud of this one, so we’ll cap this round of write-ups off with it. The Day 22 project was to recreate Pong, often cited as the “first video game”. It’s simple: two players each have a paddle on opposite sides of the screen, the ball bounces back and forth, and missing the ball grants score to the other player.

This project combines quite a few things leading up to this point, from creating custom classes to taking inputs to drawing graphics. Ok, so technically the Snake Game did all of this. My real win here is doing it entirely on my own, before watching ANY of the class videos for the day. (Ok, I may have watched the intro just to get the scope of the project; sometimes the instructor throws some curve-ball concepts in.)

Now, don’t get me wrong, in most cases, I do the coding on my own, based on what’s presented, but I usually follow along with the class and do each bit as it’s asked.  I also will occasionally “correct” my code to better align with what’s presented, not because I think my methods are wrong, but more because the instructor may later introduce a concept that needs the code to be set up a particular way.

For Pong, I did it all. I am plenty familiar with how Pong works; I viewed the 2 minute “What we are making” first video to get the scope, then went off on my own. And it all worked out. Getting the ball bounce just right was the trickiest part, mostly because I started seriously overthinking the physics of it.
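
For the record, the non-overthought version of the bounce is just flipping one velocity component. A rough sketch, where x_move and y_move are velocity attributes I added to my ball class (the names and thresholds here are mine):

def bounce(ball, left_paddle, right_paddle):
    # wall bounce: flip only the vertical component
    if ball.ycor() > 280 or ball.ycor() < -280:
        ball.y_move *= -1
    # paddle bounce: flip the horizontal component when the ball reaches a paddle
    if (ball.distance(right_paddle) < 50 and ball.xcor() > 320) or \
            (ball.distance(left_paddle) < 50 and ball.xcor() < -320):
        ball.x_move *= -1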

It really helps that I already know how to code, and have a “programmer mindset” on how to step through things.

  • Make a Paddle Class
  • Spawn a Paddle
  • Make the Paddle Move
  • Keep the Paddle within the screen constraints.
  • Spawn a second Paddle
  • Make the second paddle move with different keys
  • Draw the net
  • Make a ball
  • Make the ball move
  • Make the ball bounce within the screen
  • Make the score board
  • Make the Scoreboard increment when the ball bounces off of left and right
  • Make the ball re-spawn when it hits left or right instead of bounce
  • Make the ball bounce off of a paddle

That’s pretty much it, the steps to making Pong.   Most of them are super easy.  I even added an additional class that would draw a “net” across the center of the screen that wasn’t required (not that anyone is grading this code).

The point was, I did it all, then watched the videos, just to see if there was any better way to do some of the things I had done.  Felt like a pretty cool accomplishment.