Back when I posted about Delores, the Westworld robot, I mentioned a question that once came up in a science fiction fan forum: What’s the collective noun for robots? A mechanation of robots? A clank of robots? I suggested an Asimov of robots, but maybe the best suggestion was an uprising of robots.
An uprising of robots could refer to the scary Terminator scenario, but could also be taken as just meaning the rising up of (non-killer, useful) robots. That latter interpretation being not just factual, but quite operative already.
So for this Wednesday Wow, an uprising of robots…
Boston Dynamics has been around since 1992, and over the years I’ve really enjoyed videos I’ve seen of their work. It’s pretty leading-edge stuff.
Some recent videos are just jaw-dropping. (There are a lot of them here, but they’re all very short. Few minutes max.)
You may find these either (or some combination of): Awesome, eerie, scary, amazing, creepy, weird, or really funny. (Put me down for all of the above.)
§ §
We’ll start with Atlas, a humanoid robot designed with dangerous search and rescue operations in mind.
The videos speak for themselves, so I’m just going to step out of the way and let you watch.
The 2016 video is called, Atlas, The Next Generation:
It’s so cool how he (he?) stumbles but maintains his balance using his legs.
Apparently they can also find work in warehouses. You do have to wonder about that supervisor though. At what point does Atlas decide it would be better to eliminate the human to solve the problem?
(Is that why the human testing him seems almost kinda fearful? 😮 )
These next two, Parkour Atlas and More Parkour Atlas, are from 2108:
Atlas is quite the acrobot:
It’s a shoulders of giants thing, the inexorable incremental progress of science and technology.
§
Getting away from the humanoid form, we have a robot named Handle:
Handle is intended as a research robot (perhaps handling dangerous materials?) here repurposed as a warehouse bot.
(Seems both Atlas and Handle are being set up for warehouse jobs.)
Note that these robots are not telepresence robots. There is no human operator behind their immediate actions. It’s all done with algorithms.
§
Saving the best for last, robot dogs!
And once again, you gotta wonder when the robot figures out the best solution is dealing directly with the problem:
There are hints here of all sorts of ethical issues we’ll eventually have to deal with. A great deal depends on what kind of “brains” (and “minds”) we end up giving these robots.
It’s certainly impossible to deny how useful machines like this are:
I don’t know why, but that one is a bit eerie to me. Something about the clicking-clacking noise (see last video for another example).
An army of these wouldn’t be silent. The noise is almost insect-like.
On the other hand, this video is hysterical and the star of the show:
It twerks, it moonwalks; I want one!
§
This is a lot of videos to put in one post; I hope it doesn’t completely bog down the page load times. I’m going to risk one more just because it’s so cute.
I really want one of these…
No, I want an army of these!
§
If you like these, there are compilations around that stitch a lot of this together into a single video that’s maybe easier to share with others. Here I just wanted to stick with the originals.
As the MIT video shows, Boston Dynamics aren’t the only ones doing this, so there are videos from other sources as well. (More from Boston Dynamics on their channel, too.)
Stay on good terms with our eventual robot masters, my friends!
∇
April 29th, 2020 at 5:55 pm
The Boston Dynamics stuff has always struck me as showing how much our intuition of another living thing can be hijacked.
April 29th, 2020 at 6:47 pm
Because?
April 29th, 2020 at 8:28 pm
I think it’s the reason why their robot demonstrations are often intriguing yet disturbing. They are systems that evoke in us a primal sense that there’s something alive here, but it clashes with the intellectual knowledge that they’re machines. That primal sense leads to our feeling that the robot should react like a living thing and deal with the actual problem hampering their progress.
April 29th, 2020 at 10:05 pm
I imagine you speak for a lot of people on that. (Were you the one who mentioned you were a little creeped out by your first exposure to computers? Someone I know mentioned that once, anyway.)
I’ve been so into technology all my life that I think it’s different for me. What makes the hairs on the back of my neck go up is seeing in action the implications of goal-oriented self-directed software running in an increasingly capable mobile machine. Their BigDog robot was the first to give me that sense of, “Holy, crap, war robots are becoming an actual thing!”
In my case, it isn’t the expectation of a robot acting like a living thing, it’s wondering what in the software prevents it from seeing that solution. It wouldn’t be reacting from irritation, but simply removing the source of the problem. At this point, I suspect it’s harmless due to the software having a narrow set of goals and options. It’s not even aware of the interference as interference, just as a set of changing parameters. But as the software becomes more powerful, and its world models become more sophisticated, that may become more of a danger.
The software could see interference as just a set of parameters to seek to bring into control without having a context for those parameters. James Hogan’s AI-goes-wild story takes that tack. The AI is maximizing its security (and killing people) until it realizes some of those data sources are other autonomous units it should treat with parity.
Considering how deep learning neural nets evolve beyond our understanding of what they “know” and “see” this seems like a real danger. Likewise if we ever reach the point software is so complex it needs to be created by other software. If software systems are allowed to modify beyond our control, and hardware systems have this kind of mobility, there are some genuine risks.
April 30th, 2020 at 9:57 am
Arguably if a system is sophisticated enough to come up with novel strategies in dealing with human obstacles, it’s also sophisticated enough to have complex goals, and able to relate them to background goals, such as not harming humans. That’s not to say there are no dangers at all, particularly from war machines. But I continue to think worries about generic AI are overblown.
April 30th, 2020 at 11:04 am
“Arguably if a system is sophisticated enough to come up with novel strategies in dealing with human obstacles,…”
How would it know they are human obstacles?
That dog robot kept trying to reach the door handle. What if it tried to reach through a human hand?
(Even as implemented now, did you take note of how both researchers used long sticks and stayed out of the way? These machines are strong (and metal). Suppose the software glitches or has a bug, and the machine starts flailing around? At the least, you’d want to treat these machines like you would a table saw — with some caution!)
As far as teaching AI goals, you should watch that Robert Miles video I posted below. In one case, the system recognized its output was compared to a reference text file; its reward was based how well it compared. So it deleted the text file, and outputted nothing. Which, of course, satisfied the reward function.
“But I continue to think worries about generic AI are overblown.”
Could be. But two points.
As with nuclear power and genetic research, the cost of being wrong about the risk and being blithe about development can be very high. In the case of AI, its speed is a threat, as is the opacity of neural nets and the unknowns of open-ended goal-seeking.
A lot of people worried (many thought way too much) about Y2K and Ebola, and sure enough, neither of them were the disasters some feared. We’ll never know to what extent that worry and resulting effort prevented them from becoming disasters, but I know a lot of people worked their asses off over Y2K. And got it done.
For me, given the cost of failure, and considering the infant state of the art, I don’t think the concern and diligence is unwarranted. I’m glad people are worried about this!
April 30th, 2020 at 9:43 am
(It was funny you mentioned intuition in your comment because I was working on today’s post when the comment came through! You must have had an intuition… 😉 )
April 29th, 2020 at 11:33 pm
Ha! Synchronicity strikes yet again. This video just came up:
Exactly the sort of thing I was talking about! 🙂
May 11th, 2022 at 11:19 am
[…] bit over two years ago I posted An Uprising of Robots. (We haven’t picked a collective noun for robots, but my submissions are an uprising of and […]
May 13th, 2022 at 6:01 pm
Uprising, army, swarm, cadre, phalanx, centurion, mob… All equally disturbing grouping names.
How about a cooperation of robots?
May 13th, 2022 at 6:34 pm
Oh, I like cooperation! (Pity it’s not the collective term for humans.)
It occurred to me the other day that a mesh of robots was kinda cute.
Collective nouns are a great party game. How about software? A disk of code? A RAM of code? A dump of code? …
May 13th, 2022 at 9:42 pm
A kilo of code, consumed line by line.
May 14th, 2022 at 7:21 am
Ha! Kilo is funny. As you imply, it has connotations. (Someone once suggested a jolt of code, but is that stuff still around?) We could think bigger, a meg of code or a gig of code…