It’s Friday, and I have Notes, but they’re all Science Notes, so while this post (and any others of similar ilk that may follow) is in the spirit of Friday Notes, it comes from a different direction. Science from right field, so to speak, rather than the usual oddities from left field.
These Notes were originally meant as reminders to mention some cool science things to friends over burgers and beers (or whatever). But rather than tasty morsels for the few, why not for the many? (Or at least for a few more.)
So today, Science Notes (and some reactions):
Since these notes were meant for in-person discussion, I didn’t save any links to the articles that caught my eye here. All are from New Scientist articles, and they tend to be behind a paywall, so there isn’t much point in sharing the links, anyway.
I will say that, although I generally like New Scientist, a London-based magazine, it suffers from having a lot of “may” and “could” and “might” in articles. I’m generally not interested in predictions (most of which turn out wrong). When it comes to science, I want to understand (as best as I can) what’s going on now.
That aside, New Scientist is good about avoiding the more wide-eyed press-release headlines that often plague science journalism. I’ll also say that I do enjoy many of their articles. Its pros far outweigh its cons.
Hungry Dogs
Two dog breeds, Labradors and flat-coated retrievers, may be prone to becoming overweight because they have a mutation that makes them hungrier between meals and lowers their metabolic rate.
Oh, boy, does that ever explain an aspect of my black Lab Sam's behavior. As any Lab owner knows, "Labs are always hungry." (Noticeably so, compared to most dogs.)
So, it was genetic.
Our Long Childhood
Next is an example of scientists — lacking experimental or even observational data — wandering off into speculation based on a handful of data points. In this case, archeologists who discovered toys amid ancient ruins:
The appearance of certain toys in the archaeological record coincides with technological innovations, such as the wheel and weaving, hinting that child’s play inspired some key human inventions.
The article seems to imply that what children play with is seen and taken up by adults and that, thus, children's imagination leads to practical invention. By which logic, children invented cars, planes, guns, and so forth.
Another interpretation (read: guess) is that children's toys as artifacts arose at the same time as other, adult-made artifacts, such as tools. There is also the fact that children can be inventive with their toys, but they generally don't make them.
So too does evidence of tool-making at Stone Age sites where artefacts created by expert flint-knappers are mixed with amateurish efforts – suggesting children were trying their hand at it.
Or just amateur adults learning. One position the article takes is that children were deeply integrated into ancient life, and of that I have no doubt. In the above, the equation is simple: we need good flint-knappers, so we might as well train them early.
“As children grow, who they choose to learn from changes,” says Nowell.
Yes, true everywhere, and a key reason not to lock in a child's life too early. Who they are as people fundamentally evolves until somewhere in their mid-twenties. One of the main points of school is to expose the young to a diverse set of topics.
In hunter-gatherer societies, children start by learning from their parents, but as they enter adolescence, they start to seek out other adults – especially those they perceive as innovative.
Again, true everywhere, I think, and counter to the earlier assertion that innovation flows from children’s toys to adult minds. To the contrary, I imagine that the adult minds conceived the toys.
Speedy Spacecraft
The Parker Solar Probe is a NASA spacecraft studying the Sun. Up close and personal!
On 27 September last year, the Parker Solar Probe roared past the Sun at a speed of 635,266 km/h – that’s 700 times faster than any airliner.
For Americans, that's 394,736 mi/h. More importantly, it's 0.059% the speed of light! (Which is about 1,080,000,000 km/h.)
During the December 2024 fly-past, NASA’s Goddard Space Flight Center predicts the probe will reach speeds of up to 692,000 km/h.
Or 429,989 (call it 430,000) mi/h. Or 0.064% the speed of light!
[The main reason for this note was to remind myself to do the math for both miles per hour and the fraction of light speed.]
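Since doing the math was the whole point, here it is as a quick Python sketch. The conversion factor and the speed of light are standard constants; the speeds are the article's:

```python
# Parker Solar Probe speeds: convert km/h to mi/h and to a
# fraction of the speed of light.
KM_PER_MILE = 1.609344             # exact, by definition
C_KMH = 299_792.458 * 3600         # c = 299,792.458 km/s -> km/h

for label, kmh in [("Sep 2023 fly-past", 635_266),
                   ("Dec 2024 prediction", 692_000)]:
    mph = kmh / KM_PER_MILE
    frac_c = kmh / C_KMH
    print(f"{label}: {mph:,.0f} mi/h = {frac_c:.4%} of c")

# Sep 2023 fly-past: 394,736 mi/h = 0.0589% of c
# Dec 2024 prediction: 429,989 mi/h = 0.0641% of c
```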
Burning Iron
This was new to me. I knew iron burns, but I had no idea it could be burned as fuel, not just productively but renewably. Impressive technology.
Burning iron requires temperatures of around 1800°C, roughly the same as is reached in a coal-fired power plant.
Or 3,272 degrees Fahrenheit!
Iron has an energy density of approximately 9700 Wh per litre, higher than petrol's energy density of around 9400 Wh/l.
That was an eye-opener.
The burning of iron, on the other hand, produces no CO2, but instead iron oxide, better known as rust. However, rust is not just waste. By adding hydrogen, rust can be turned back into iron powder – and burned again in a power plant.
So, we can keep reusing the iron (plus no CO2 emissions).
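For the record, here's the cycle in equation form. This is my sketch, assuming the rust involved is ferric oxide (Fe₂O₃); the article doesn't specify which oxide:

```latex
\begin{align*}
  4\,\mathrm{Fe} + 3\,\mathrm{O_2} &\longrightarrow 2\,\mathrm{Fe_2O_3}
    && \text{(burning, exothermic)} \\
  \mathrm{Fe_2O_3} + 3\,\mathrm{H_2} &\longrightarrow 2\,\mathrm{Fe} + 3\,\mathrm{H_2O}
    && \text{(regeneration, with added hydrogen)}
\end{align*}
```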
5-6% of Earth's crust is iron, making it the fourth most abundant element after oxygen, silicon, and aluminium.
And the stuff isn’t rare. (Lots of it in space, too. The atomic characteristics of iron are such that it’s the heaviest element stars can fuse from lighter elements — starting, of course, with hydrogen. All heavier elements were created, if not in labs on Earth, by supernovae and neutron star collisions. But making iron is the last life-stage of an active star.)
Team SOLID set up a small 100kW iron power plant to provide heat for the brewery’s various processes, and in 2020 the first beer was produced – with zero CO2 emissions.
They burned iron to make craft beer! About as win-win as it gets.
When hydrogen is added to oxidised (i.e. rusty) iron, oxygen molecules on the rust grains attach themselves to hydrogen molecules, forming water vapour and some new iron. This regenerates the iron powder to a state in which it can be burned again in the iron power plant – a result that might seem to let us recycle iron waste almost indefinitely.
Cool! And it packs more punch per liter (or litre) than gasoline. So, will we be seeing iron-burning cars or trucks?
Iron containing a given quantity of energy will weigh about 10 times more than petrol, which is why it is not practical to pour it into the tank, as the car will simply be too heavy.
Oh, I guess not. Maybe in an apocalypse story with a car that needs constant refueling but burns scrap iron left over from civilization.
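Out of curiosity, a quick back-of-envelope check on that factor of ten. The volumetric energy densities are the article's; the mass densities are my assumptions (solid iron at about 7.87 kg/l, petrol at about 0.75 kg/l; iron powder packs less densely than solid iron, which only widens the gap):

```python
# Does "about 10 times heavier" hold up? Energy densities (Wh/l)
# are from the article; the mass densities (kg/l) are assumed.
iron_wh_l, petrol_wh_l = 9700, 9400
iron_kg_l, petrol_kg_l = 7.87, 0.75   # solid iron, typical petrol

iron_wh_kg = iron_wh_l / iron_kg_l         # ~1,233 Wh/kg
petrol_wh_kg = petrol_wh_l / petrol_kg_l   # ~12,533 Wh/kg

print(petrol_wh_kg / iron_wh_kg)   # ~10.2 -- the claim checks out

# Bonus: the burn temperature in Fahrenheit (from 1800 degrees C)
print(1800 * 9 / 5 + 32)           # 3272.0
```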
The Local Hole
I knew we lived in a local void in our galaxy, but didn’t know we also lived in a much larger one:
Astronomers call it the “local hole”, but that is quite the understatement. It is vast, gigantic, enormously huge – although, in truth, adjectives fail us when it comes to this expanse of nothingness. It is the largest cosmic void we know of, spanning 2 billion light years. Our galaxy happens to be near its centre, but the trouble with this hole isn’t that it presents a proximate danger – more that it shouldn’t exist at all.
As with our galactic void, does this cosmic void have anything to do with the odds of intelligent life forming here? Would a more active or crowded part of the universe be less conducive to the evolution of intelligence?
The bottom line is that we don’t expect to see voids or structures wider than about 1.2 billion light years.
Assuming we understand how the universe formed and what it is now. Which, if we understand it well enough, presents a problem: where did this void come from?
[Ryan Keenan, Amy Barger, Lennox Cowie] found that we live in a 2-billion-light-year-wide expanse of space with a density about 20 per cent lower than average.
That’s pretty significant. That’s one-in-five.
Oversized voids and structures aren’t the only problem facing the standard model of cosmology. There is also the Hubble tension.
True, but some very recent results suggest a resolution to that tension.
On top of that, there is the issue of “bulk flows”, which refers to the way streams of galaxies move. Last year, astronomer Brent Tully [… and] colleagues observed these flows in the KBC void and found they were four times as fast as the standard model of cosmology predicts.
And, as I’ve mentioned many times, we do seem to be unusual in the universe as a whole. Unusual on a number of counts beyond our being an intelligent (for a sometimes-broad definition of intelligent) species.
The discovery of the KBC void points in the opposite direction; it makes us unusual.
As I have said more than once, contrary to the Copernican principle, far from being ubiquitous we can, on multiple counts, make a claim to being unusual, even unique.
Neuromorphic Chips
These are chips that physically resemble brain neurons. They are a hardware solution for AI rather than the software solutions represented by large language models (such as ChatGPT).
It also means brain-mimicking computers can be more energy efficient, with Intel claiming its new Hala Point neuromorphic computer uses one-hundredth of the energy a conventional machine takes when running optimisation problems, which involve finding the best solution to a problem given certain constraints.
One huge selling point of neuromorphic chips is that, as with actual neurons, they aren’t the energy hogs that conventional chips are.
There is also the promise of implementing the human brain in physical structure rather than in numeric simulation. I’ve long been dubious of software implementations of the mind, but I’ve equally long thought replicating the physical structure of the brain was a viable approach.
Hala Point contains 1.15 billion artificial neurons across 1152 Loihi 2 chips, and is capable of 380 trillion synaptic operations per second. Mike Davies at Intel says that despite this power, Hala Point is about as big as a microwave oven.
For comparison, a human brain has 100 billion neurons and, on average, 500 trillion synapses.
I’m not sure what the article means by “synaptic operations per second.” Does it take one second to do that many — as opposed to the human brain, in which somewhat more synapses are firing constantly — or is that the rate all its synapses fire at in parallel (which would far outdo the speed of the brain)? I suspect it’s the former: each artificial synapse requires a separate update by the CPU in round-robin fashion.
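For what it's worth, a hedged back-of-envelope suggests that, read as aggregate throughput, the number is at least in the brain's ballpark. The synapse count for Hala Point and the biological firing rates below are my assumptions, not the article's:

```python
# Comparing Hala Point's 380 trillion "synaptic operations per
# second" with the brain. Assumptions: Hala Point has ~128 billion
# synapses (Intel's reported figure, if memory serves), and
# biological synapses average very roughly 0.1 to 1 events/second.
hala_sops = 380e12        # from the article
hala_synapses = 128e9     # assumed
brain_synapses = 500e12   # from the article

print(hala_sops / hala_synapses)             # ~2,969 ops/s per synapse
print(brain_synapses * 0.1, brain_synapses)  # 5e13 to 5e14 events/s

# So each artificial synapse updates thousands of times faster than
# a biological one, but in aggregate the two systems land within
# roughly an order of magnitude of each other.
```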
While today’s neuromorphic hardware isn’t suited for training large AI models, says Davies, it could one day take pre-trained models and let them learn new tasks. “This is the kind of continual learning problem that we believe large-scale neuromorphic systems like Hala Point can, in the future, solve,” he says.
That, too, would be huge. One issue with current LLMs is that, firstly, they’re expensive to train, and secondly, once deployed on (much, much) smaller systems, their behavior is largely fixed. They don’t adapt or learn.
With neuromorphic chips, we might be on the path to building our own version of Cmdr. Data, from Star Trek. (His “positronic brain” being an homage to Isaac Asimov’s Robot stories. As described, it did sound more like a structural replication of a brain than a general-purpose computing device.)
AI Art
Speaking of AI, this article about artists who used AI to generate content:
The researchers tracked 4 million works published on the platform by 53,000 users. The people self-divided into those who continued to work using traditional methods – a sort of control group – and those who adopted AI. The latter, who numbered about 5800, were singled out by the use of tags on their work such as “AI-generated” or the names of the AI tools. Works posted into AI art communities on the site were also included.
Just one platform, and plenty of assumptions, but a good-sized N.
Users who adopted AI tools saw their productivity – measured by the number of works posted – increase by 25 per cent over the study period. They also saw a 50 per cent rise in the number of “favourites” their work received over six months. But novelty, measured by the subject matter and specific details of the work, decreased for the AI-using group.
The implication being that such a sophisticated helpmate for creative work makes you less interesting, or at least less diverse, as an artist. Left to your own devices, you’re more prone to explore new space.
He believes human artists using AI might have found a subculture within the platform that is accepting of this kind of work. “Or it could be that the quality of the artwork is potentially indiscernible from those of the traditional artists,” he says.
The above seeks to explain the large rise in favorites for AI-assisted art. Maybe some just like it — find it new in its own way, perhaps even in just the novelty of the idea. Not me; I don’t care for AI art. The random images Microsoft throws out for my login screen no longer interest me because I’m not sure they’re real anymore.
Zhou is also considering these questions. “Generative technologies are essentially giving everyone the same baseline skill,” he says. “It accelerates the ability to produce. But it raises other issues: are we foregoing the process of understanding what goes into being creative and producing something meaningful, in favour of just being able to brute force our way with technology?”
Very good questions. Especially about what it does to our skillsets. Does it make us all essentially search engine users?
I do have serious concerns about the coming AI Revolution, but I don’t doubt its inevitability. As with all our major revolutions (Agricultural, Scientific, Industrial, Electronic, etc.), it will bring a vast bounty of benefits along with a dismaying cost only fully realized in retrospect. But that’s humanity for you.
§
Stay scientific, my friends! Go forth and spread beauty and light.
∇