I have a growing list of links to articles that catch my eye, things I’d like to post about (for whatever reason). But there’s a tension between posts based on lists of links or draft posts or idea files versus posts based on what I’m currently thinking about.
I seem to feel the latter isn’t enough, that I need a reserve for “lean times” — which never happen. More and more, I post when something strikes me as worth the effort. The “idea pile” seems almost like homework.
Anyway, here are some things that recently caught my eye.
I’ll start with an article I loved because it agrees with a pet peeve of mine: The lust to colonize Mars.
The nicest way I can put it is that, in my opinion, the idea of colonizing Mars is foolish in the extreme.
On the separate question of whether it will ever happen, my Magic 8 Ball says: “Very Dubious!”
The article points out many of the problems: an atmosphere 100 times thinner than Earth's (essentially a vacuum), and what little atmosphere there is being mostly CO2; average temperatures of -81 F with lows of -128 F; and gravity only 0.375 times Earth's.
So, at the least, colonists would need to deal with the long-term health effects of low gravity, and habitats would require shielding from solar radiation that's normally blocked by a thick atmosphere or a magnetic field.
Living on Mars would be very much like living in prison. As the article says:
As Friedman pointed out earlier, we don’t see colonists living in Antarctica or under the sea, so why should we expect troves of people to want to live in a place that’s considerably more unpleasant? It seems a poor alternative to living on Earth, and certainly a major step down in terms of quality of life. A strong case could even be made that, for prospective families hoping to spawn future generations of Martian colonists, it’s borderline cruelty.
Exactly. The idea of colonizing Mars seems romantic, thanks to science fiction, but the cold hard reality is something else entirely.
§ § §
In the “Oh, Gosh!” category, this article (by the great Phil Plait) about seriously high-energy gamma ray photons from the Crab Nebula:
Plait says it all far better than I could, so read the article. I will just mention that "gamma rays" is the top of the scale when it comes to naming photon energies; there is no higher-named band. So gamma rays can be any photons from about 100,000 eV on up (in the lower part of that range they're also considered hard X-rays).
Two bits especially impressed me:
The highest energy gamma ray they detected had an energy of about 400 TeV. It would take about two hundred trillion visible-light photons to equal that much energy.
And my favorite:
A common housefly has a mass of about 10 milligrams (or 1/100th of a gram). A typical flying velocity for one is about one meter per second. That gives it a kinetic energy of about 50 ergs. Doing the conversion, that means that one of those energetic photons from the Crab Nebula has an energy more than 10 times that of a housefly in flight.
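Those numbers are easy to sanity-check. Here's a quick back-of-the-envelope conversion in Python (the ~2 eV figure for a visible-light photon is my assumption for "visible"; the rest are standard values):

```python
# Sanity check of Plait's housefly comparison.
EV_TO_JOULES = 1.602e-19   # one electron-volt in joules
JOULES_TO_ERGS = 1e7       # one joule is 10^7 ergs

# The record photon: about 400 TeV.
photon_ev = 400e12
photon_ergs = photon_ev * EV_TO_JOULES * JOULES_TO_ERGS
print(f"photon energy: {photon_ergs:.0f} ergs")        # ~640 ergs

# A housefly: 10 mg (1e-5 kg) flying at 1 m/s.
# Kinetic energy = 1/2 * m * v^2 (joules), converted to ergs.
fly_ergs = 0.5 * 1e-5 * 1.0**2 * JOULES_TO_ERGS
print(f"housefly energy: {fly_ergs:.0f} ergs")         # 50 ergs

print(f"ratio: {photon_ergs / fly_ergs:.1f}x")         # ~12.8x: "more than 10 times"

# The visible-light comparison, assuming a ~2 eV (green) photon:
print(f"visible photons needed: {photon_ev / 2:.1e}")  # 2.0e+14: two hundred trillion
```

Both of Plait's claims check out: the ratio to the housefly is about 12.8, and the visible-photon count comes out to two hundred trillion.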
Wow. Just wow.
Plait points out that, “if it were to hit you and you absorbed that energy in your skin you could actually feel it. From a single photon.”
§ § §
Turning to the field of AI, this article about how trained networks produce results we can’t understand (but which are apparently superior to anything we’ve pulled off):
Researchers used “8,000 different simulations from one of the highest-accuracy models available” as training input to a new system, called the Deep Density Displacement Model (D3M).
After training D3M, the researchers ran simulations of a box-shaped universe 600 million light-years across and compared the results to those of the slow and fast models. Whereas the slow-but-accurate approach took hundreds of hours of computation time per simulation and the existing fast method took a couple of minutes, D3M could complete a simulation in just 30 milliseconds.
But they don’t know how it works.
I can't help but wonder whether the network simply encoded all those simulations into a holographic phase space (essentially a meta-model), so that "running a simulation" amounts, in some sense, to spooling off a recording.
But in this case, it's a spool steered through that phase space by the input parameters. It isn't so much calculating the simulation as playing it back.
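To make that "playback" idea concrete, here's a toy sketch of my own (it has nothing to do with D3M's actual architecture): precompute a handful of "simulations" over a coarse parameter grid, then answer new queries by interpolating between the stored runs rather than recomputing anything.

```python
import numpy as np

# Toy "simulator": expensive in real life; here just a damped wave
# parameterized by a single physical parameter p.
def slow_simulation(p, t):
    return np.sin(p * t) * np.exp(-t / (1 + p))

t = np.linspace(0, 10, 200)

# "Training": run the slow simulation on a coarse parameter grid
# and store the results -- the "recordings" in phase space.
params = np.linspace(0.5, 3.0, 6)
recordings = np.array([slow_simulation(p, t) for p in params])

# "Playback": for a new parameter value, linearly interpolate between
# the stored runs instead of simulating. The input parameter just
# steers us through the stored phase space.
def fast_playback(p_new):
    return np.array([np.interp(p_new, params, recordings[:, i])
                     for i in range(len(t))])

approx = fast_playback(1.7)            # instant "simulation"
exact = slow_simulation(1.7, t)        # the real (slow) answer
print("max error:", np.abs(approx - exact).max())
```

The playback is nearly free and lands close to the real answer between grid points, yet nothing resembling the original calculation ever runs, which is roughly the distinction I'm gesturing at.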
If true, this suggests a lack of creative ability. These networks remain little more than very sophisticated search engines. (Of course, some say that’s all we are!)
§ § §
Speaking of neural net frailties, the nets themselves appear pretty easy to fool.
The thousands of images ultimately included in the database all failed to correctly classify an object in an image for a number of reasons, none being an intentional malicious attack. The neural nets fucked up due to weather, variations in the framing of a photo, an object being partially covered, leaning too much on texture or color in a photo, among other reasons. The researchers also found that the classifiers can overgeneralize, over-extrapolate, and incorrectly include tangential categories.
(I suppose it’s dreadfully old-fashioned of me, but I still find it weird that any serious publication feels free to use the term “fucked” in its writing. I’m not sure how I feel about the cross-over between blogging and supposedly serious journalism. But whatever.)
It goes on to say:
That’s why the neural network classified a candle as a jack-o-lantern with 99.94 percent confidence, even though there were no carved pumpkins in the image. It’s why it classified a dragonfly as a banana, in what the researchers guess is because there was a shovel nearby that was yellow. It’s also why, when the framing of an alligator swimming was slightly altered, the neural network classified it as a cliff, lynx, and a fox squirrel. And that’s also why the classifier overgeneralized tricycles to bicycles and circles, and digital clocks to keyboards and calculators.
So these things have a ways to go before they’re anywhere near trustworthy.
And it seems like we ought to know more about how they do what they do.
§ § §
On the lighter side, I completely agree with every word of:
The author makes what I think are some key points. In particular, it’s not about sound quality. It was never really about sound quality:
I think the real reason for vinyl’s return goes much deeper than questions of sound quality. As media analyst Marshall McLuhan famously wrote, “The medium is the message.” In other words, “the form of a medium embeds itself in any message it would transmit or convey, creating a symbiotic relationship by which the medium influences how the message is perceived.” Nowhere does this hold truer than in the world of recorded sound.
Anyone who understands McLuhan is probably okay with me. (That scene in Annie Hall is one of my favorite cinema scenes. Oh, if only life worked that way.)
The author goes on to point out that there's both a ritual and a physicality to vinyl that other music formats lack.
There’s a reality to analog music that just makes it cooler.
§ § §
Well, that’s five fewer links lurking in my list. It’s always possible I’ll end up posting about one of these, but right now it seems unlikely.
I’ve got a bunch more to unload next time. These are just a start.
Stay newsworthy, my friends!