What I Use: Synergy
Last post, I talked a bit about my new multi-monitor setup. I mentioned that I use a program called Synergy to handle using multiple machines with one keyboard and mouse. It's essentially a virtual KVM, only without the V, since everything has its own video display.
It’s not a free program, but it’s not expensive, and it’s well worth it if you use multiple machines in this manner.
The general gist of its use: one machine acts as a server, and the other machines connect to it. The server hosts the mouse, the keyboard, and the configuration. Out of the box, Synergy actually works kind of crappy with a multi-monitor setup like mine. The configuration is a simple drag-and-drop positioning grid, and it doesn't care how many monitors are on one system; it assumes one.
You can manually set up a more complex configuration pretty easily. I'd recommend doing a basic setup and making sure everything is working well before delving into the complex realm. I've found several tutorials online with complex formulas and jargon, but the whole setup, in most cases, is a lot simpler.
Start off with your basic setup and save the configuration file. Now, save it again with some sort of appended name like "edited" or "custom". This way you can always reload the original working configuration. You can save this configuration anywhere, but ultimately the program may need to reload it, so I would recommend saving it somewhere handy but out of the way, like Documents or even a folder in Documents.
Now, find the file you just saved and open it in Notepad. Find the section labeled "section: links". This is the meat of how the program knows where to transition. It should look something like this:
section: links
	pi:
		down = Squall
	Ixion:
		right = Squall
	Squall:
		up = pi
		left = Ixion
end
Notice the directions: up, down, left, right. These are the edges where transitions occur. You can make them more precise by adding (x1,x2) to each entry, where x1 is the starting percentage across that edge and x2 is the ending percentage.
If you have some complicated positioning, you can work out the exact percentages by dividing a monitor's pixel offset by the total pixel width of the edge, but if you have a fairly simple setup like mine, it's not hard to generalize these percentages. In my case, this becomes:
section: links
	pi:
		down(0,100) = Squall(33,66)
		left(0,100) = Ixion(0,100)
	Ixion:
		down(0,100) = Squall(0,33)
		right(0,100) = pi(0,100)
	Squall:
		up(0,33) = Ixion(0,100)
		up(33,66) = pi(0,100)
end
Note that (0,33) is the "first third" across the top of the total width (three monitors). The other transition is (33,66), or the second third. If I had a third monitor on top, it would end up being (66,100); however, since I don't, the mouse stays locked within the right-hand monitor instead of transitioning anywhere.
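If you'd rather compute exact numbers than eyeball thirds, the pixel math is easy to script. Here's a rough sketch; it's not part of Synergy, and the 1920-pixel monitor widths are made-up example values, not necessarily real resolutions.

```python
# Sketch: convert a monitor's pixel offset and width along a shared edge
# into the (start%, end%) pair that Synergy's links section expects.
# The 1920px widths below are example values, not a measured setup.

def edge_range(offset_px, width_px, total_px):
    """Percent range of the combined edge that one monitor spans.

    Uses integer floor division, which matches rounding the thirds
    down to (0,33), (33,66), (66,100)."""
    start = offset_px * 100 // total_px
    end = (offset_px + width_px) * 100 // total_px
    return start, end

# Three 1920px monitors side by side: combined edge is 5760px wide.
total = 3 * 1920
print(edge_range(0, 1920, total))     # leftmost third  -> (0, 33)
print(edge_range(1920, 1920, total))  # middle third    -> (33, 66)
print(edge_range(3840, 1920, total))  # rightmost third -> (66, 100)
```

Synergy only needs the ranges on both sides of a link to line up with each other, so rounding to the nearest whole percent like this is plenty accurate.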
With my original generic setup, any upward movement always went to "pi" and going off the left-hand edge went to "Ixion". With the new setup, everything behaves as expected in a seamless up, down, and across fashion.
Oh, and it works on a Raspberry Pi!
Josh Miller aka “Ramen Junkie”. I write about my various hobbies here. Mostly coding, photography, and music. Sometimes I just write about life in general. I also post sometimes about toy collecting and video games at Lameazoid.com.
Self Driving Cars
Every so often, I’ve seen the “ethical dilemma” of Self Driving cars come up for debate. Specifically, the scenario goes something like this:
"A self-driving car is approaching a crowd of children. It can veer off a cliff and kill the occupants, saving the children. What choice does it make? Who is responsible for the deaths?"
It's a dilemma to be sure, but it's also completely absurd and effectively a non-issue, which is an angle no one seems to really look at or realize. This specific scenario is absurd because, why are a bunch of children blocking a road on the side of a cliff to begin with? It can be toned down to be a bit more realistic, of course: maybe it's a blind corner, maybe the children are just on a street, maybe it's just a crowd of people and not children. The children are just there to appeal to your emotional "Think of the children!" need anyway. Maybe the alternative is to smash into a building at 60 mph after turning this blind corner into the crowd of people.
No wait, why was the car screwing around any corner where people may be at 60 mph? That's highway speed; there's a reason we have different speed limits, after all. Open-view areas like highways are faster because we can see farther down the road and we have more room to swerve into other lanes or the shoulder and not into buildings or random crowds of people.
Exceeding the speed limit like that is a human problem, not a robot problem.
So, maybe the car is obeying the speed limit, maybe the brakes have suddenly, inexplicably, failed, and the car simply can’t stop…
No wait, that doesn't work either. Brakes generally don't just "fail". A robot car will be loaded with sensors; it will know the instant the brakes show even a little bit of an issue and probably drive off to have itself serviced. Or, at the very least, it will alert the driver of the problem and, when it reaches a critical stage, simply refuse to start or operate until fixed. Should have taken it into the shop; that on-demand, last-minute service call will probably cost you three times as much while you are late to work.
Looks like ignoring warning signs of trouble is also a human problem, not a robot problem.
So what if there simply isn’t time to react properly because it’s a “blind corner”? Maybe some idiot is hiding behind a mailbox or tree waiting to jump out in front of your self driving car. Except this is still more of a human problem than a robot problem.
All of these self driving robot cars are going to talk to each other. Your car will know about every crowd of people in a twenty mile radius because all of the other cars will be talking to it and saying things like "Yo dawg, main street's closed, there's a parade of nuns and children there," and the car will simply plan a different route.
They will even tell each other about that suicidal fool hiding behind the tree.
Maybe your car is alone, in the dark, in a deserted area. First, it's a robot; it doesn't care about the darkness. Even if there isn't some infrared scanner attached telling it someone is hiding somewhere, it's still going to see the obstruction. It will know: "How fast could a dog or a person jump out from behind that thing, how wide should I swing around it, how slowly should I pass by it?"
It knows, because this is all it does.
Speaking of dogs, or possums, or deer, this also becomes a non-issue. The car will be able to see everything around it, in the dark, because it can "see" better than any human. It also constantly sees everything in a 360 degree view. The self driving robot car will never get distracted rubbernecking at an accident, it will never be distracted by that "hot chick" walking along the side of the street, and it will never road rage because some other robot car cut it off (which won't happen anyway).
It just drives.
And it will do it exceptionally well.
And even if our crazy scenario comes true, even if a self driving car has a freak accident and kills a bus full of children every year, or really every month, it will still kill fewer people than humans kill while driving.
So feel free to waste time debating which deserves to die, the driver or the pack of people, or debate who is responsible, you may as well ask who will be responsible for cleaning up all the poop cars make when they replace the horse and buggy.