Well, I made it farther than my last “in real time” attempt in 2020 by 3 stars. I may check in on the puzzles each day, but in my experience they only get more complex as time goes on, so I doubt I’ll be completing any more of them. Each day is starting to take a lot more time to solve, and the solutions are getting a lot more finicky to produce. We’ve also reached the point where the puzzle inputs feel ridiculously obtuse. Like the Day 15 puzzle, where every number was in the millions, basically for the sole purpose of making everything slow without some sort of magic reduction math. Though skimming through others’ solutions, there didn’t seem to really BE any “magic reduction” option there.
Which is fine. It’s not supposed to be easy. I don’t expect it to be easy.
But I long ago accepted that things I’m doing for relaxation or enjoyment should at least be relaxing and enjoyable, and these puzzles have reached a point where the amount of enjoyment and relaxation I get from them is no longer worthwhile.
So I’m choosing to end this year’s journey here.
Maybe I’ll go back and finish them someday, but more at my own leisure. I mean, I had started doing the old 2015 puzzles in the week leading up to this year’s event. I was never doing this in any attempt to get on the leaderboards or anything anyway; hell, I didn’t even start most days’ puzzles until the day was half over or later.
For what it’s worth, I did make a strong attempt on Day 15, but I just could not get it to output the correct answer, and I’m not really sure why. I couldn’t even get the sample input to work out; I was always one off. It’s possible, and likely, that I was counting the space where the beacon existed, but my actual input data was off by a little over 1 million, and there are not 1 million beacons on the board. Plus it was 1 million under, where my sample input solution was 1 over.
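For reference, the usual Part 1 approach looks something like this minimal Python sketch, assuming the sensor and beacon coordinates have already been parsed out of the input (the function and variable names are just mine for illustration). The beacon subtraction at the end is exactly the kind of one-off I suspect I was hitting:

```python
def covered_in_row(readings, row):
    # readings: list of (sensor_x, sensor_y, beacon_x, beacon_y) tuples
    intervals = []
    for sx, sy, bx, by in readings:
        radius = abs(sx - bx) + abs(sy - by)  # Manhattan reach of this sensor
        spread = radius - abs(sy - row)       # how far it reaches on the target row
        if spread >= 0:
            intervals.append((sx - spread, sx + spread))
    if not intervals:
        return 0

    # Merge overlapping/adjacent intervals so nothing is double counted.
    intervals.sort()
    merged = [intervals[0]]
    for lo, hi in intervals[1:]:
        if lo <= merged[-1][1] + 1:
            merged[-1] = (merged[-1][0], max(merged[-1][1], hi))
        else:
            merged.append((lo, hi))
    covered = sum(hi - lo + 1 for lo, hi in merged)

    # A spot already holding a beacon isn't an "impossible beacon" position,
    # so subtract those; skipping this step leaves you one off on the sample.
    beacons_on_row = {bx for _, _, bx, by in readings if by == row}
    covered -= sum(1 for bx in beacons_on_row
                   if any(lo <= bx <= hi for lo, hi in merged))
    return covered
```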
I’m not even attempting today’s puzzle, Day 16. I can see the logic needed, but the nuance to accomplish it would just take me too long to code out, and like I said above, enjoyment and relaxation are the point. I don’t need to add hours of stress to my day.
I’ve had a bit of a pause on this series, for a few reasons, but mostly just because the process is slow. One of the interesting things you can do with Stable Diffusion is train your own models. The thing is, training models takes time. A LOT of time. I have only trained Embeddings; I believe Hypernetwork training takes even longer, and I am still not entirely sure what the difference is, despite researching it a few times. The results I’ve gotten have been hit and miss, and for reasons I have not entirely pinned down, they seem to have gotten worse over time.
So how does it work? Basically, at least in the Automatic1111 version of SD I’ve been using, you create the Embedding file along with the prompt you want to use to trigger it. My advice on this: make the trigger something unique. If I train a person, like a celebrity for example, I will add an underscore between the first and last name and use the full name, so it will differentiate from anything the built-in models already know about that person. I am not famous, but as an example, “Ramen Junkie” would become “Ramen_Junkie”. Then when I want to trigger it, I can do something like, “A photograph of ramen_junkie in a forest”.
This method definitely works.
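As an aside, the resulting embedding file isn’t tied to the Automatic1111 UI either. Here’s a rough sketch of loading one programmatically with Hugging Face’s diffusers library instead; the model name, file path, and output filename here are assumptions for illustration:

```python
import torch
from diffusers import StableDiffusionPipeline

# Load a base SD 1.5 model (assumed here; use whatever model
# the embedding was trained against).
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Register the trained embedding so its trigger word works in prompts.
pipe.load_textual_inversion("./embeddings/ramen_junkie.pt")

image = pipe("A photograph of ramen_junkie in a forest").images[0]
image.save("ramen_junkie_forest.png")
```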
Some examples: if I use Stable Diffusion with “Lauren Mayberry” from CHVRCHES, I get an image like this:
Which certainly mostly looks like her, but it’s clearly based on some older images. After training a model for “Lauren_Mayberry” using more recent photos, I can get images like this:
Which are a much better match, especially for how she looks now.
Anyway, after setting up the prompt and embedding file name, you preprocess the images, which mostly involves pointing the system at a folder of images so it can crop them to 512×512. There are some options here: I usually let it create reversed copies of the images, so it gets more data, and for people I will use the auto focal point option, which theoretically picks out faces.
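For a sense of what that preprocessing amounts to, here’s a rough Python equivalent using Pillow. This only does a plain center crop rather than the face-aware focal point cropping, and the folder names are made up:

```python
from pathlib import Path
from PIL import Image, ImageOps

SRC = Path("raw_images")   # hypothetical folder of original photos
DST = Path("processed")
DST.mkdir(exist_ok=True)

for i, path in enumerate(sorted(SRC.iterdir())):
    img = Image.open(path).convert("RGB")
    img = ImageOps.fit(img, (512, 512))  # resize and center-crop to 512x512
    img.save(DST / f"{i:04d}.png")
    # A mirrored copy is what the "reversed images" option adds,
    # doubling the amount of training data for free.
    ImageOps.mirror(img).save(DST / f"{i:04d}_flip.png")
```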
The last step is the actual training. Select the created Embedding from the drop-down, enter the folder of the preprocessed images, then hit “Train Embedding”. This takes a LONG time. In my experience, on my pretty beefy machine, it takes 11-12 hours. I almost always leave this to run overnight, because it also puts a pretty heavy load on everything, so anything except basic web browsing or writing isn’t going to work at all. Definitely not any sort of gaming.
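What that button is doing conceptually is pretty small, despite the load: the whole model stays frozen, and only the new trigger token’s embedding vector gets optimized. A toy Python illustration of the idea, not real training code (the loss here is a stand-in for the frozen diffusion model’s denoising error on the training images):

```python
import torch

embed_dim = 768  # the CLIP text-embedding size used by SD 1.x models

# The only trainable thing: one new embedding vector for the trigger token.
new_embedding = torch.randn(embed_dim, requires_grad=True)
optimizer = torch.optim.AdamW([new_embedding], lr=5e-3)

# Stand-in target; real training instead asks how well the frozen model
# reconstructs the training images when the prompt uses this token.
target = torch.randn(embed_dim)

for step in range(500):
    loss = ((new_embedding - target) ** 2).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# The output is just this one small tensor, which is why embedding files
# are tiny compared to full model checkpoints.
torch.save({"ramen_junkie": new_embedding.detach()}, "ramen_junkie.pt")
```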
The main drawback of the long run time is that it often fails. I’m not entirely sure WHY it sometimes fails. Sometimes you get bad results, which I can understand, but the failures just leave cryptic error messages, usually involving CUDA. I also believe it sometimes crashes the PC, because occasionally I check on it in the morning and the PC has clearly rebooted (no open windows, Steam and the like all starting up). I generally keep my PC up to date, so it’s not a Windows Update problem. Sometimes if the same data set fails repeatedly, I’ll go through and delete some of the less ideal images, in case there is some issue with the data set.
Speaking of data sets, the number of images needed is not super clear either. I’ve done a few with a dozen images, and I’ve done some with 500 images, just to see what kinds of different results I could get. The larger data sets actually seemed to produce worse results. I suspect larger data sets dilute things too much for it to pull out the nuances the way it can with a smaller number of images. Also, at least one large data set I tried was just a series of still frames from a video, and the results there were ridiculously cursed. My point is mostly that a good middle ground seems to be 20-30 base images, with similar but not identical styles. For people, clear faces help a lot.
I have tried to do training on specific styles, but I have not had any luck with that yet. I’m thinking maybe my data sets for styles are not “regular” enough or something. I may still experiment a bit with this; I’ve only tried a few data sets. For example, I tried to train one on the G1 Transformers cartoon’s Floro Dery art style, but it just kept producing random 3D-style robots.
For people, I also trained it on myself, which I may use a bit more for examples in a future post. It came out mostly OK, other than AI Art me being a lot skinnier and a lot better dressed. I have no idea why, but every result is wearing a suit. I did not ask for a suit, and I don’t think any of the training images were wearing a suit. Also, you might look at them and think “the hair is all over”, but I am really bad about fluctuating from “recent haircut” to “desperately needs a haircut” constantly. The hair is almost the MOST accurate part.
Anyway, a few more samples of Stable Diffusion images built using training data.
Josh Miller aka “Ramen Junkie”. I write about my various hobbies here. Mostly coding, photography, and music. Sometimes I just write about life in general. I also post sometimes about toy collecting and video games at Lameazoid.com.