Mastering The Game: What Video Games Can Teach Us About Success In Life
Jon Harrison
Published: 2015 | Pages:294
My Rating: 3/5
Description: Mastering The Game: What Video Games Can Teach Us About Success In Life takes a look at how the same habits and principles that lead to success when playing video games can be applied to personal and business success. Principles are ideas that are truly timeless, and remain true independent of context, culture or time period. So what are the principles embedded in the most popular video games? Surprisingly, the list strongly resembles the most in-demand traits for the workplace.
• Adaptability & Managing Change
• Personal Accountability
• Innovation
• Communication & Listening
• Teambuilding & Collaboration
• Knowledge Sharing
• Persistence & Grit
Mastering The Game provides analogies, examples, and lessons for connecting the dots between how gamers play and how successful professionals work. Are you ready to take your career to the next level?
Scott Pilgrim, Volume 3: Scott Pilgrim & The Infinite Sadness
Bryan Lee O’Malley
Published: 2006 | Pages:192
My Rating: 4/5
Description:
Scott Pilgrim, Volume 4: Scott Pilgrim Gets It Together
Bryan Lee O’Malley
Published: 2007 | Pages:216
My Rating: 4/5
Description:
Scott Pilgrim, Volume 5: Scott Pilgrim vs. the Universe
Bryan Lee O’Malley
Published: 2009 | Pages:184
My Rating: 4/5
Description:
Scott Pilgrim, Volume 6: Scott Pilgrim’s Finest Hour
Bryan Lee O’Malley
Published: 2010 | Pages:245
My Rating: 5/5
Description:
Scott Pilgrim’s Precious Little Life (Scott Pilgrim, #1)
Josh Miller aka “Ramen Junkie”. I write about my various hobbies here. Mostly coding, photography, and music. Sometimes I just write about life in general. I also post sometimes about toy collecting and video games at Lameazoid.com.
It’s that time again, when I ramble on about what I’ve been listening to this year. I don’t use Spotify, so I don’t have a “cool kids” wrap-up to share. Instead you get this! A blog post! And some little 5×5 charts made from my Last.fm. I kind of prefer starting with Artists over Albums, so without further ado, the top 25 for 2022, which, unsurprisingly, is pretty similar to my previous years.
In a surprise to literally no one, including myself, my top 3 are pretty much the same as they have been for a while: Aurora, CHVRCHES, and Sigrid. Actually, I am a little surprised Sigrid is so high; I didn’t listen to the new album a ton, or much Sigrid at all that I remember. Aurora and CHVRCHES are no surprise, I listen to both a lot. Heck, I saw Aurora live this year.
Beyond that is a bit of new-ish stuff. I’ve still been digging on Radiohead’s Kid A Mnesia re-release, and I picked up a copy of OK Computer to add to that mix. The Weeknd’s Dawn FM has become a pretty regular listen as well. It’s an album I didn’t expect to enjoy as much as I do. Orla Gartland and Dodie slot in at numbers 6 and 7. I almost went to see Dodie (and Orla) earlier this year, but my wife had a doctor’s thing come up and COVID was starting to ramp up again, so I opted to skip it.
Next on the list is Magdalena Bay, a newcomer that a lot of the folks on the CHVRCHES Discord recommended. Another with this honor is Wolf Alice, down in the number 17 slot. I actually kind of prefer Wolf Alice to Mag Bay, but I picked up their album later, so it got less time to be listened to. Wolf Alice is a bit more guitars and rock, while Mag Bay is more synth and electronic. Also in this list of “Recommended by CHV People” is Purity Ring at number 20.
Quite a few regulars have been on the list for a bit, though I want to comment on how Raffaella has managed to stay in these lists. I’m sure I’ve mentioned it before, several times, but I had never heard of her back in 2019 before going to see Sigrid, and I’m still listening to her music years later. She hasn’t even really put out a lot of new stuff either, just a couple of singles and a small EP.
I think Berlinist is new to the top 25, but this is entirely from listening to the Gris video game soundtrack. It’s a good “relaxing music” album.
There are a few missing that I’m actually surprised are missing. Well, one anyway, which is Enya. Not a new artist by any means, but I ended up with her entire discography, minus one album I think, at an estate sale, and I listened to quite a few of her albums recently. It feels like it should have been enough to put her up there on the list. It’s also really interesting because Enya definitely reminds me of the same sort of exotic, vocalizing style as Aurora.
Anyway, next up is the albums list, but a lot will have been covered by the Artists wrap up so I’ll just stick to anything interesting.
And it’s more or less an expansion of the Artists list. Most interesting, though not surprising, is that every album from Aurora and CHVRCHES appears in this list.
I also noticed I forgot to mention the Cranberries. Like Enya, I picked up a few Cranberries albums and I’ve been enjoying them. It’s also funny because Enya “feels” like an “Earlier Aurora” in a lot of ways, and The Cranberries “feel” like an “Earlier CHVRCHES” in a lot of ways. I could actually expand on this a bit with Sigrid too, because Alanis Morissette “feels” like an “Earlier Sigrid.”
I guess my point is, the more things change, the more they stay the same, and my overall musical taste is about the same as it always has been; it’s just added the latest version of the same shtick.
I actually wrote up a private journal entry about the whole Twitter mess, but I’m really, really trying not to fuel that drama, so I’m refraining from actually posting anything about it. It’s gotten me thinking some about social media and following people online in general, though. I’ve been online for a very long time and I’ve used computers almost my entire 43 years of life, and that is not an exaggeration. I am, and will forever be, connected to technology as a core piece of who I am. And as such, I also flock to social websites. But the funny part is how each one sort of manages to slot into its own little place in my world. And how irritating it is when they try to change things and be something I don’t want them to be.
Take, say, Instagram. I resisted Instagram for a while, and even once I started using it, I honestly have never been super committed to it. But I do like seeing other people’s photos. I have two accounts, one that is almost exclusively toys, and one that is “everything else,” which is usually bands and musicians I like, family members, cats, and food, in that order. Though lately, they keep trying to turn themselves into TikTok, which I hate, because I don’t want dumb video clips, I want photos.
Instagram of course is part of Facebook, though it serves its own purpose. Most of my actual contacts on Facebook are people I actually know. Family, all those people from high school I added when FB first launched, and friends, both from real life and online. Though I want to add that in this case, “online friends,” for the most part, means “actual friends”: people I have been online friends with for longer than I’ve been friends with anyone else, people I’ve been connected to across platforms, and in quite a few cases, people I have met face to face at least once. I also use Facebook some for groups, but it’s pretty much limited to a couple of toy-based groups and groups related to musicians I like. I tried using Facebook for news, but the comment sections are always cancerous idiocy, so I had to drop all of the news sources I was following.
Beyond that, things get a bit more nebulous.
Take, for example, Reddit. I use Reddit a lot, probably more than is healthy, but I don’t have any friends on Reddit. I basically do not ever look at user names. I do follow a couple of accounts, but it’s mostly just people I actually know, and mostly for the sake of “this is an easy way to remember who they are on Reddit.” I don’t actually look at their Reddit feeds, because I follow them all through other platforms where I will hear about things they want to say in a much more efficient manner. I do follow and check a shitload of subreddits and regularly browse posts on /r/all. My “Reddit Recap” for 2022 says I was in the “Top 1% of all Redditors.” I’m not sure I’m proud of that one. It’s probably the one place that’s great for getting good information from actual people on a wide variety of topics.
Then there are places like Twitter, which I’ve replaced the functionality of with Mastodon. These places basically boil down to, “If your profile information meets any one of a dozen or so criteria, I will follow you.” Post about video games, toys, or nerdy tech shit? Almost instant follow. Post snarky one-liners about life that I empathize with? That’s a follow. Post memes about cats or something frequently, but not so frequently it pollutes my feed? You bet, I’ll follow that. I treat microblog platforms more like RSS for people’s shit takes and hot takes.
Which brings me to another way I follow people: RSS. I love RSS. RSS is so perfect for following, and I am still fucking salty about Google Reader being closed, and always will be, because it basically killed RSS to the world. As of now, I follow around 500 blogs and news sites. I have a ton more bookmarks waiting in a folder called “Todo -> Add to RSS,” and I regularly go through this folder. The criteria for following a blog on RSS are similar to the microblog criteria, but probably in a broader sense.
I think in the end, I just like hearing people’s stories and random thoughts, even if I don’t always give feedback with a comment or a like or whatever. I want to know what people think. Especially things that seem completely banal and pointless.
Well, I made it farther than my last “in real time attempt” in 2020 by 3 stars. I may check in on the puzzles each day, but my experience is, they only get more complex as time goes on, so I doubt I’ll be completing any more of them. Each day is starting to take a lot more time to solve, and the solutions are getting a lot more finicky to produce. We’ve also reached the point where the puzzle inputs feel ridiculously obtuse. Like the Day 15 puzzle, where every number was in the millions, basically, for the only purpose of making everything slow without some sort of magic reduction math. Though skimming through others’ solutions, there didn’t seem to really BE any “magic reduction” option there.
Which is fine. It’s not supposed to be easy. I don’t expect it to be easy.
But I have long ago accepted that things I’m doing for relaxation or enjoyment should at least be relaxing and enjoyable. And these puzzles have reached a point where the amount of enjoyment and relaxation I get from them is no longer worthwhile.
So I’m choosing to end this year’s journey here.
Maybe I’ll go back and finish them some day, but more at my own leisure. I mean, I had started doing the old 2015 puzzles in the week leading up to this year’s event. I was never doing this in any attempt to get on the leaderboards or anything anyway; hell, I didn’t even start most days’ puzzles until the day was half over or later.
For what it’s worth, I did make a strong attempt on Day 15, but I just could not get it to output the correct answer, and I’m not real sure why. I couldn’t even get the sample input to work out; I was always one off. It’s possible, and likely, that I was counting the space where the beacon existed, but my answer for the actual input was off by a little over 1 million, and there are not 1 million beacons on the board. Plus it was 1 million under, where my sample input solution was 1 over.
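In case anyone else hits the same wall, here is a minimal Python sketch of the interval-merge approach I was going for. The sensors list and target_row parameter are my own stand-ins for however the puzzle input gets parsed, not code from my actual attempt, and the last few lines show the exact off-by-one trap I suspect I fell into: beacons already sitting on the target row have to be subtracted back out of the covered count.

def positions_without_beacon(sensors, target_row):
    # sensors: list of (sensor_x, sensor_y, beacon_x, beacon_y) tuples (hypothetical parsed input)
    intervals = []
    for sx, sy, bx, by in sensors:
        reach = abs(sx - bx) + abs(sy - by)    # Manhattan distance to the nearest beacon
        spare = reach - abs(sy - target_row)   # how far that coverage spills onto the target row
        if spare >= 0:
            intervals.append((sx - spare, sx + spare))

    # Merge overlapping/adjacent ranges so nothing gets counted twice.
    intervals.sort()
    merged = []
    for lo, hi in intervals:
        if merged and lo <= merged[-1][1] + 1:
            merged[-1][1] = max(merged[-1][1], hi)
        else:
            merged.append([lo, hi])

    covered = sum(hi - lo + 1 for lo, hi in merged)

    # The trap: positions that already hold a beacon on this row are not
    # "positions where a beacon cannot be," so subtract them back out.
    beacons_on_row = {bx for _, _, bx, by in sensors if by == target_row}
    beacons_in_range = sum(any(lo <= bx <= hi for lo, hi in merged) for bx in beacons_on_row)
    return covered - beacons_in_range

Whether that exact fix would have rescued my run, I can’t say, since my real answer was off by a lot more than the handful of beacons on the row.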
I’m not even attempting today’s, for Day 16. I can see the logic needed, but the nuance to accomplish it will just take me too long to code out, and like I said above, enjoyment and relaxation are the point. I don’t need to add hours of stress to my day.
I’ve had a bit of a pause on this series, for a few reasons, mostly just that the process is slow. One of the interesting things you can do with Stable Diffusion is train your own models. The thing is, training models takes time. A LOT of time. I have only trained Embeddings; I believe Hypernetwork training takes even longer, and I am still not entirely sure what the difference is, despite researching it a few times. The results I’ve gotten have been hit and miss, and for reasons I have not entirely pinned down, it seems to have gotten worse over time.
So how does it work? Basically, at least in the Automatic1111 version of SD I’ve been using, you create the Embedding file, along with the prompt you want to use to trigger it. My advice on this: make the trigger something unique. If I train a person, like a celebrity, for example, I will add an underscore between first and last name and use the full name, so it will differentiate from any built-in models for that person. I am not famous, but as an example, “Ramen Junkie” would become “Ramen_Junkie.” So when I want to trigger it, I can do something like, “A photograph of ramen_junkie in a forest.”
This method definitely works.
Some examples, If I use Stable Diffusion with “Lauren Mayberry” from CHVRCHES, I get an image like this:
Which certainly mostly looks like her, but it’s clearly based on some older images. After training a model for “Lauren_Mayberry” using some more recent photos from the current era, I can get images like this:
Which are a much better match, especially for how she looks now.
Anyway, after setting up the prompt and embedding file name, you preprocess the images, which mostly involves pointing the system at a folder of images so it can crop them to 512×512. There are some options here; I usually let it do reversed images, so it gets more data, and for people, I will use the auto focal point option, where it theoretically picks out faces.
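If you’re curious what that preprocessing step boils down to, here’s a rough Python sketch of the idea: square-crop everything to 512×512 and save a mirrored copy of each image. The folder names and the simple center crop are my own placeholders, not the actual Automatic1111 code, which also has the fancier focal-point cropping for faces.

from pathlib import Path
from PIL import Image, ImageOps

SRC = Path("training_images")      # hypothetical folder of source photos
DST = Path("preprocessed_images")  # hypothetical output folder
DST.mkdir(exist_ok=True)

for i, img_path in enumerate(sorted(SRC.glob("*.jpg"))):
    img = Image.open(img_path).convert("RGB")
    img = ImageOps.fit(img, (512, 512))  # resize and center-crop to 512x512
    img.save(DST / f"{i:05d}.png")
    # The "reversed images" option: a horizontally mirrored copy doubles the data.
    ImageOps.mirror(img).save(DST / f"{i:05d}-flipped.png")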
The last step is the actual training. Select the created Embedding from the drop down, enter the folder of the preprocessed images, then hit “Train Embedding.” This takes a LONG time. In my experience, on my pretty beefy machine, it takes 11-12 hours. I almost always leave this to run overnight, because it also puts a pretty heavy load on everything, so anything except basic web browsing or writing isn’t going to work at all. Definitely not any sort of gaming.
The main drawback of the long run time is that it often fails. I’m not entirely sure WHY it sometimes fails. Sometimes you get bad results, which I can understand, but a failure just leaves cryptic error messages, usually involving CUDA. I also believe it sometimes crashes the PC, because occasionally I check on it in the morning and the PC has clearly rebooted (no open windows, Steam etc. all starting back up). I generally keep my PC up to date, so it’s not a Windows Update problem. Sometimes if the same data set fails repeatedly, I’ll go through and delete some of the less ideal images, in case there is some issue with the data set.
Speaking of data sets, the number of images needed is not super clear either. I’ve done a few with a dozen images, and I’ve done some with 500 images, just to see what kinds of different results I can get. The larger data sets actually seemed to produce worse results. I suspect a larger data set doesn’t let it pull out the nuances the way a smaller number of images does. Also, at least one large data set I tried was just a series of still frames from a video, and the results there were ridiculously cursed. My point is mostly that a good middle ground seems to be 20-30 base images, with similar but not identical styles. For people, clear faces help a lot.
I have tried to do training on specific styles, but I have not had any luck on that one yet. I’m thinking maybe my data sets on styles are not “regular” enough or something. I may still experiment a bit with this; I’ve only tried a few data sets. For example, I tried to train one on the G1 Transformers cartoon’s Floro Dery art style, but it just kept producing random 3D-style robots.
For people, I also trained it on myself, which I may use a bit more for examples in a future post. It came out mostly OK, other than AI Art me being a lot skinnier and a lot better dressed. I have no idea why, but every result is wearing a suit. I did not ask for a suit, and I don’t think any of the training images were wearing a suit. Also, you might look at them and think “the hair is all over,” but I am real bad about fluctuating from “recent haircut” to “desperately needs a haircut” constantly. The hair is almost the MOST accurate part.
Anyway, a few more samples of Stable Diffusion Images built using training data.