Mark Oliphant was one of the engineers of the Manhattan Project, but later became a campaigner for peace. Jeff Glorfeld reports.
First results shed light on antibiotic resistance. Andrew Masterson reports.
Anomaly at ultra-cold temperatures suggests a way forward for superconductors. Phil Dooley reports.
Anomalous experimental results hint at the possibility of a fourth kind of neutrino, but more data only makes the situation more confusing. Katie Mack explains.
Chinese researchers reveal proof-of-concept for next-gen renewables. Vhairi Mackintosh reports.
The world’s largest particle accelerator has been going for a decade. Physicist Todd Adams from Florida State University in the US sums up the progress so far.
A waste product from the textile industry could revolutionise battery design. Phil Dooley reports.
The physics of ant colonies yields clues for robot swarms and road traffic control. Natalie Parletta reports.
Biomedical engineer Kathryn Margery Spiers travelled from Australia to Germany to work at the world’s biggest synchrotron. She spoke to Gabriella Bernardi.
A radical simplification of economic forecasting models produces interesting, if controversial, results. Michael Lucy reports.
One of the major fundamental questions in physics concerns the presence or absence of free will in the universe, or in any physical system, or subset, within it.
Physics is based on the idea that nature is mechanistic – that it works like a machine. A machine is a system: a collection of elements, each with its own specific function, all working together to achieve a purpose common to the machine as a whole.
For example, a musical ensemble is a system of people, each playing a different part, all led by a conductor so that, as a whole, the group can perform a specific piece of music. In physics we study the systems found in nature by constructing models that represent them as realistically as we need.
Let’s now consider the whole universe as a system. To the best of our knowledge, material particles can never be measured closer to each other than the Planck length (about 1.6 × 10⁻³⁵ metres – a decimal point followed by 34 zeroes and then 16), suggesting that, for matter, states are discrete.
Even if energy does not seem to suffer from a similar limitation, we know from quantum mechanics that energy is transferred between physical bodies in discrete amounts, known as quanta. Hence either the Planck length or the energy quantum can be considered the size of the “pixels” composing the universe. However, this description seems incomplete.
If we take a continuous function, such as a straight line – y = a·x + b, where a and b are fixed parameters and x can assume arbitrary values – we know that for arbitrarily, infinitesimally close values of x we can still compute the respective values of y without confusion. The trajectory of an apple falling from a tree can be modelled by a continuous straight line that approximates an otherwise fractal curve.
Fractal curves are recursive functions that require an initial state, and they require a continuous world in which to evolve – especially if the initial condition must be an irrational number. From this we can deduce that the behaviour of physical objects seems to be ultimately dictated by continuous functions that humans perceive through a grid of these “pixels”: in practice, we always end up dividing such continuous quantities into discrete sections anyway.
If we believe in the Big Bang theory – and the universe’s continuing expansion is a strong indication that the theory is correct – the initial state of the universe was a single point (known as a singularity) that then expanded into the cosmos we know and perceive today, which, of course, includes us.
If so, there is an unbroken causal chain between the Big Bang and us. In other words, free will is not allowed, and all of our actions are merely consequences of that first event. Such a view is known as “determinism”, or “super-determinism” (if one finds it productive to reinvent the wheel).
If we believe the initial state of the universe to be quantified by a rational number, we are inferring that it is periodic, non-chaotic and globally predictable in nature. But if the initial state is rather quantified by an irrational number, we are instead inferring that the universe is aperiodic, chaotic and therefore only locally predictable in nature.
Today, we know that the universe is chaotic.
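Chaos in this sense – aperiodic, and predictable only locally – can be illustrated with a toy system. The snippet below is purely an illustration of sensitive dependence on initial conditions (it is not drawn from the essay itself): it iterates the standard logistic map from two starting points that differ by one part in ten billion.

```python
# Illustration: sensitive dependence on initial conditions in the logistic map,
# x_{n+1} = r * x_n * (1 - x_n), a standard toy model of chaotic dynamics.
def logistic_trajectory(x0, r=4.0, steps=50):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = logistic_trajectory(0.2)
b = logistic_trajectory(0.2 + 1e-10)  # start differing by one part in 10^10

# Early on the trajectories agree (local predictability)...
early = abs(a[5] - b[5])
# ...but after enough iterations they are completely decorrelated.
late = max(abs(x - y) for x, y in zip(a[30:], b[30:]))
print(early, late)
```

The early difference stays tiny while the late difference grows to order one: the system is locally predictable but globally unpredictable, which is exactly the distinction the essay draws.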
From such a view, one may be tempted to object: if free will does not exist, why do we punish criminals? It is not their fault, after all. A counter-argument is that punishment is the natural response to crime, required to sustain a global equilibrium, and therefore punishment is just as unavoidable as the wrongdoing itself.
Because the cosmos is clearly chaotic, we can observe time-reversibility only locally, rather than globally. This in turn means that free will is an inevitable illusion for us humans, due to our subjective perception of the universe, rather than its innermost nature.
As electronic devices are scaled smaller and smaller, scientists are beginning to hit the limits of what silicon-based transistors can achieve. The field of molecular electronics offers a way around this problem by using single molecules, each of exactly the same chemical make-up, to construct electronic devices.
While this can be demonstrated in the lab, it is not practical on a larger scale, primarily because of the difficulty of making reliable electrical contacts between molecules and conducting metals to create circuits. Essentially, the components exist, but it is the “wiring” of the system that causes a bottleneck.
Publishing in the journal Nature, a collaborative team of IBM scientists and researchers from the Universities of Basel and Zurich in Switzerland, and Macquarie University in Australia, has developed a technique for fabricating molecular electronics that appears to solve this problem.
Manipulating individual molecules is an increasingly possible, yet still daunting task for chemists, and currently impossible on a commercial scale. The Swiss-led team have addressed this problem using a property possessed by some molecules called self-assembly.
Molecular self-assembly is a process in which molecules arrange themselves into complex and well-defined structures without the direction of an outside force. A good example is the lipid bilayer: the fats that make up the membranes of biological cells will assemble into two-molecule-thick layers when placed in water.
The research team, led by Basel’s Gabriel Puebla-Hellmann, used a similar type of self-assembly to deposit a single-molecule layer onto a platinum surface. The molecules comprised chains of carbon atoms, with a sulfur group on each end.
The chains self-assembled onto the metal surface, standing upright, with one sulfur atom attached to the platinum and the other exposed. The effect was a little like a dense clump of reeds, albeit at a much, much smaller scale.
With this single-molecule layer complete, the researchers could then create a second metal layer on top, by passing a solution of gold nanoparticles over it. The nanoparticles adhered to the exposed sulfur atoms, creating a new metallic layer. This created a sandwich-like metal-metal structure, separated by only a film of molecules.
The team went on to fabricate miniature electronic devices by coating a surface of platinum with an insulating layer, and then etching tiny pores only 60 nanometres wide into the insulator, exposing the metal.
After depositing the single-molecule layer and gold nanoparticles inside the pores, the gold was further coated with conducting metal by a conventional technique called vapour deposition.
The work may open the way for molecular electronic devices such as ultra-miniaturised transistors, and also lead to artificial neurons able to exploit quantum effects.
“Molecular electronics hasn’t previously lived up to expectations, but we’ve seen a renaissance of the field in the last five to six years,” says Koushik Venkatesan, from Macquarie, one of the authors of the study.
“The device platform is the missing link. We hope work like ours will accelerate this type of technology.”
One of the highest energy neutrinos ever detected probably came from a distant galaxy with a jet of plasma spewing directly at us, an international team of scientists from 16 observatories has announced in the journal Science.
Neutrinos are among the lightest fundamental particles – more than a million times lighter than an electron – and are very difficult to detect. Although copious numbers are created in violent cosmic events and reach the Earth, most pass straight through detectors – and indeed the entire Earth.
Nonetheless IceCube, an array of detectors buried in the ice in Antarctica, has detected several hundred high-energy neutrinos since it began operating in 2010, but none of them could be pinned to a specific source.
However, in 2017 a new alert system was set up at the facility, so when it clocked a neutrino in September with an energy of nearly 300 tera-electronvolts – roughly the energy an electron would gain crossing 20 trillion car batteries connected in series – a message was broadcast to observatories around the globe that a high-energy particle had been detected.
As telescopes swivelled in the direction indicated by IceCube, astronomers realised the source was a blazar – an ultra-compact type of quasar – called TXS 0506+056: a violent galaxy four billion light-years away, with a powerful black hole at its heart sucking in matter and shooting out jets of plasma, one of which is aimed straight at Earth.
Astronomers had seen gamma rays emitted by TXS 0506+056 before, but now it was flaring violently, emitting rays more intensely than in the past. As they studied the blazar more closely they realised that despite its brightness, it was further away than they had thought, says Gary Hill, a member of the IceCube collaboration based at University of Adelaide in Australia.
“We didn’t know initially how far away it was,” he explains. “Four billion light years is pretty distant, so it is intrinsically extremely luminous, probably one of the most luminous blazars ever measured.”
“So this neutrino could have been travelling to us since around the formation of the Earth.”
The team theorise that TXS 0506+056’s jet of plasma accelerated protons to extreme energies by bouncing the particles back and forth like two tennis players whacking a ball.
For example, the flare up in energy could have been caused as some dense matter fell into the black hole, sending a shockwave out through the blazar’s jet.
The fast-moving pulse behind the shockwave shoots protons ahead of it – like a tennis serve over the shockwave net. Magnetic fields in the slower-moving plasma in front of the jet return the serve through the oncoming shockwave, only to have the protons volleyed back by the magnetic fields behind it, with an extra dose of energy.
Unlike terrestrial tennis racquets the magnetic plasma reflections cause no loss of energy, so the protons move faster and faster.
The process ends when the proton escapes, or collides with dust, gas or photons and forms a neutrino via the production of a short-lived particle called a pion. The reaction has often been recorded at the Large Hadron Collider, although at about one-hundredth of the energy measured by IceCube.
The IceCube team then decided to go through their previous eight years of data to see if other high-energy neutrinos had originated from the same part of the sky. With careful sifting, they found that in the period September 2014 to March 2015 there were about 15 more detections than would be expected from a random distribution.
The team also published these findings in Science at the same time as the single 2017 neutrino detection.
The combination of the gamma rays and the observation of a burst of neutrinos in the recent past seems to add up to a link between the particle and TXS 0506+056, but Hill hopes lots of observatories will catch the blazar’s next flare and put the issue beyond doubt.
“We’ll be scrutinising this thing forever, now,” he says.
Although not a property exclusive to human beings, the ability to delay gratification is nevertheless a foundational characteristic of our species – the smart members of it, anyway. And Kip Thorne, Feynman professor emeritus at the California Institute of Technology in the US, is very smart indeed.
As a highly respected theoretical physicist, and joint founder in 1984 of the Laser Interferometer Gravitational-Wave Observatory (LIGO), Thorne could have been forgiven for giving way to a rush of excitement on one particular morning when he opened his computer.
“I had an email that said go look at a particular internal LIGO website, we may have a gravitational wave detection,” he recalls.
Thorne, however, has been a hardcore physicist ever since he was awarded his first degree back in 1962. Physics, like all science, is based on doubt, on demanding evidence and then testing it to breaking point.
This is especially so in the matter of gravitational waves – ripples in space-time predicted by Einstein in his Theory of General Relativity, and the principal focus of Thorne’s entire career. The existence of the waves is well supported theoretically, but theory without evidence will always be suspect.
Perhaps, as he looked at the email that morning, he had a fleeting memory of another American physicist, Joseph Weber, who claimed in 1970 to have detected gravitational waves, only to have his work discredited.
The stakes were high. A premature announcement could well result in professional disaster. Rushed reveals in physics can backfire badly, as in 2011 when scientists at the Oscillation Project with Emulsion-tRacking Apparatus (OPERA), which detected neutrinos beamed from CERN, announced they had observed neutrinos travelling faster than light, only to discover the result was produced by an improperly attached fibre-optic cable and a clock ticking too fast.
And at first blush, when Thorne did in fact look at the internal website, his reluctance to credit the claim appeared wise.
“I looked at it,” he says, “and the data were almost too good to be true. I was suspicious, and I think essentially everybody else was suspicious, too, because the signal was rather strong and it was quite perfect and it was in such beautiful agreement between the two detectors.
“We had expected that our first signal would be so weak that you would not be able to see it beneath all the noise. We did not think that we would be able to see it by eye in the raw data – but this we could. We were very cautious.”
So Thorne and his colleagues embarked on a long and exacting examination of the mechanics of the detection.
“It was not until after several months,” he says, “when the best experts on the instruments had gone through a large number of auxiliary data channels that told us what’s going on inside the instrument, and had seen that there was absolutely nothing wrong in there, and absolutely no evidence that we had been hacked, it was not until then that I began to firmly believe this was real.”
And it was then, and only then, that he permitted his long-delayed gratification free rein.
“At that point, my own reaction was just a sense of profound satisfaction,” he says.
“I had put something like 60% of my career research efforts into this and I had made the right bets, and pushed in the right directions. It was just great satisfaction.”
It was also a straight run to a Nobel Prize, which he shared in 2017 with fellow LIGO founder Rainer Weiss, and Barry Barish, one of the project’s principal investigators, “for decisive contributions to the LIGO detector and the observation of gravitational waves”.
But, unlike some other theoretical physicists at the top of their game, Thorne has maintained a relatively low public profile. His forays into popular culture have been modest, if consequential. Years ago, he threw ideas about wormholes at Carl Sagan which ended up in Sagan’s bestselling novel (and the subsequent Hollywood film), Contact.
More recently, he collaborated with movie producer Lynda Obst to develop the theoretical framework for the Christopher Nolan feature, Interstellar, and also wrote the accompanying book.
And now, he’s embarking on a public speaking tour in Australia, called What’s Next?, sharing the stage with the Royal Institution of Australia’s lead scientist, astronomer Alan Duffy, and British comedian Robin Ince (co-host, with Brian Cox, of the BBC radio show The Infinite Monkey Cage).
No doubt his work on gravitational waves will be much to the fore in the shows, but his secondary field of interest – time travel – is likely to be a popular audience topic.
Across several books and papers over the years, Thorne has pondered the theoretical possibilities that wormholes – an idea arising from Einstein’s work, that describes tunnels between two widely separated points in space – could be used for travel through space and time.
One problem with this, of course, is that wormholes may not exist – a matter he conceded in his 2014 Interstellar tie-in, when he wrote, “We see no objects in our universe that could become wormholes as they age.”
On the other hand, perhaps they are there and we simply haven’t seen them yet. Or perhaps they are too small to see – Thorne suggests that tiny wormholes might exist within a possible cosmological structure he calls “quantum foam”.
The mundane matter of existence aside, however, there is no question that wormholes and time travel both exercise the imaginations of a lot of science fiction writers. And in this respect, Thorne’s calculations spell trouble.
A favourite trope of sci-fi is the “grandfather paradox” – the notion of a man who travels back in time and (accidentally or otherwise) kills his own grandfather, thus obliterating his own existence.
In work published as early as 1991 (although clearly not read by the writers of Red Dwarf and Dr Who) Thorne and fellow researchers showed that such paradoxes would not – indeed, cannot – arise, permitting instead an infinite number of other possible outcomes.
His findings were used by US science fiction author Larry Niven for his short story collection Rainbow Mars. The dashed hopes of myriad other writers, however, are of little concern to Thorne. All that really matters is the outcome of the research.
“I don’t think that I and my collaborators have actually proved that such effects cannot arise, but I do think we’ve provided strong evidence,” he says. “I think we have an understanding of why they can’t arise.”
You get the feeling that for all his comparatively low public profile, Kip Thorne is a man who enjoys pushing ideas as far as they will go. In 1975, for instance, he and astronomer Anna Żytkow suggested that it is entirely possible for a small dense star to fall into a very big, not-so-dense one and survive intact, producing a star within a star. The theoretical result became known as a Thorne–Żytkow object (TZO).
In 2014, a team of astrophysicists led by Emily Levesque at the University of Colorado at Boulder announced it had found one.
At which point, Thorne had another of those delayed gratification moments.
“Certainly, in any piece of research that I’ve done where I have a new result that I don’t think anyone has seen before, I also feel a sense of satisfaction,” he says.
“But not at the level of the discovery of gravitational waves. That’s a level of satisfaction that goes far beyond anything I’ve ever experienced before in my life.”
When it comes to making computer circuits, silicon is king. Contenders for the throne include optical switches, DNA, proteins, germanium and graphene. Each has legitimate grounds to be considered but has struggled in development, while silicon-based computers have relentlessly improved, such that the performance gap between silicon and its potential usurpers has widened.
A similar widening gap has occurred in rechargeable batteries. If silicon is the king of electronic circuits, lithium is the queen of batteries. I find it surprising that one element so thoroughly dominates battery technology but, like king silicon for circuits, queen lithium has properties that make it superior to all the alternatives.
Batteries comprise three essential components: the negative terminal (also known as the anode), the positive terminal (the cathode) and the interior soup of ions called the electrolyte, usually a liquid or gel. When the negative and positive terminals are connected through an external circuit such as your flashlight, the battery discharges by driving electrons from the negative terminal to the positive terminal, providing the energy to generate light. Inside the battery, the circuit is completed by the flow of positive ions through the electrolyte. In a lithium-ion battery, these positive ions are lithium atoms that have been stripped of an electron.
Most of the billions of dollars spent each year to develop better lithium-ion batteries are invested in improving the materials for the terminals and the electrolyte. The negative terminal has to act like a sponge, absorbing and storing as many positively charged lithium ions as possible. Graphite is most commonly used, but variations such as graphene that can absorb even more lithium ions without swelling are actively being sought. The positive terminal is often made from lithium cobalt oxide, though there are many alternatives in production and in development. The electrolyte includes lithium salts such as lithium hexafluorophosphate.
What makes lithium special? For starters, it is the lightest of all metals and the third-lightest element, sitting in the periodic table immediately after hydrogen and helium. Further, of the metals commonly used for batteries, lithium has the highest ‘working voltage’ – the voltage difference between the negative terminal and the positive terminal.
This combination of a high working voltage (up to 3.6 volts) and light weight contributes to lithium batteries having the highest energy storage per kilogram, making them ideal for mobile applications. Thanks to lithium-ion (Li-ion) batteries, a Tesla car can get away with a battery weight of 600 kg, compared with 4,000 kg or more if it were to rely on conventional lead acid batteries.
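The weight comparison follows directly from specific energy (stored energy per kilogram). A minimal sketch, assuming a ~90 kWh pack and ballpark pack-level specific energies of 150 Wh/kg for lithium-ion and 25 Wh/kg for lead acid – illustrative figures, not values taken from the article:

```python
# Rough sketch: battery pack mass for a given energy budget,
# mass = energy / specific_energy. Specific-energy figures below are
# ballpark assumptions, not values from the article.
def pack_mass_kg(energy_wh, specific_energy_wh_per_kg):
    return energy_wh / specific_energy_wh_per_kg

ENERGY_WH = 90_000  # ~90 kWh, a large EV pack (assumed)

li_ion = pack_mass_kg(ENERGY_WH, 150)    # ~150 Wh/kg at pack level (assumed)
lead_acid = pack_mass_kg(ENERGY_WH, 25)  # ~25 Wh/kg for lead acid (assumed)

print(f"Li-ion:    {li_ion:.0f} kg")
print(f"Lead acid: {lead_acid:.0f} kg")
```

With these assumptions the lead acid pack comes out roughly six times heavier, the same order of difference the article quotes.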
Unlike lead acid batteries, in use for more than a century, lithium-ion batteries can be discharged down to about 10% of their rated capacity without failure, and can do so thousands of times. Nor do they carry the curse of the memory effect – the reduction in working lifetime that nickel-cadmium (NiCd) and nickel-metal hydride (NiMH) batteries suffer unless they are fully discharged before recharging. Further, lithium-ion batteries left sitting on the shelf lose charge at a much slower rate than other battery chemistries.
Like any chemical at high concentrations, lithium is harmful to humans if ingested but is otherwise very low on the toxicity scale; in fact, it is so relatively harmless that for more than 50 years lithium carbonate salt has been routinely used as a medicine to treat bipolar disorder.
Wonderful as they are, lithium-ion batteries do have some drawbacks. For example, they lose peak capacity after a few years of operation. Further, because of safety concerns, battery packs must be made with complex protection circuits to limit overheating and maximum currents.
It is difficult to predict where the next big battery breakthrough will come from. The competition is intense and advances are announced daily. My money is on replacing the liquid electrolyte with a solid-state electrolyte that is a kind of glass. If successful, the solid-state electrolyte will allow faster charging, increased safety, up to three times the energy density and longer lifetimes.
Sound too good to be true? The latest announcement from Toyota Motor Corporation confidently claims it will introduce solid-state lithium-ion batteries in 2022. While it is early days, solid-state lithium-ion batteries could provide a step change in performance, giving us electric cars able to go 1,000 km and smartphones that can be used for several days between charges.
Natalie Vanessa Boyou, Universiti Teknologi Malaysia
The Performance of Degraded Water-Based Mud with Nanosilica in Wellbore Cleaning
“The stability of drilling fluid has to be maintained in order for it to circulate throughout the drilling process without problems that could jeopardise safety or budget of the operation. Deep drilling operations are pushing researchers and engineers to come up with a drilling fluid recipe that does not deteriorate under extreme conditions. This study focused on the lifting performance of degraded water-based drilling mud with nanosilica. Drilling fluid with nanosilica performed better than drilling fluids without nanosilica at high temperatures which allowed the former mud system to transport cuttings more efficiently to the surface.”
The finals of the 2017 Asia-Pacific Three-Minute Thesis (3MT) competition, which challenges PhD students to communicate their research in a snappy three-minute presentation, were held on 29 September at the University of Queensland’s St Lucia campus. Competitors came from 55 universities across Australia, New Zealand, and North and South-East Asia.
The presentations were judged by distinguished figures in Australian science including Cosmos editor-in-chief Elizabeth Finkel.
A Russian physicist living in Rome has analysed pizza-making and found why wood ovens are superior to electric ones: they give the perfect balance of well-cooked base and browned toppings.
Andrey Varlamov researches superconductivity at Italy’s Consiglio Nazionale delle Ricerche (CNR). One day, he struck up a conversation with the pizzaiolos (pizza-makers) at his local café about the niceties of oven temperature. He was inspired to analyse the heat flow between the brick base of the oven and pizza dough.
In doing so, he enlisted the help of materials scientist Andreas Glatz of the US Argonne National Laboratory, and Italian food anthropologist Sergio Grasso. Their findings, currently awaiting peer review on the pre-print site arXiv, reveal why Italians insist on cafés with wood-fired ovens.
“I succeeded in explaining for myself Italian traditional behaviour,” Varlamov says. “It’s not just conservatism, it is the experience of hundreds of generations.”
The key, he and his colleagues found, was getting the correct balance of heat flowing into the base of the pizza via conduction from the bricks below, while using radiant heat to warm the toppings.
The local pizzaiolos divulged to Varlamov that the magic formula in the wood-fired oven was two minutes at 330 degrees Celsius. He calculated that in this scenario the temperature where the dough touched the bricks would be 210 degrees.
Varlamov then repeated the calculation for an electric oven, made of steel, and found that because of the higher thermal conductivity of the metal the temperature under the pizza would be much higher, around 300 degrees.
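Numbers of this kind can be reproduced with the standard result for two semi-infinite bodies brought into contact: the interface settles at T = (e₁T₁ + e₂T₂)/(e₁ + e₂), where e = √(kρc) is each material’s thermal effusivity. The sketch below uses generic textbook-style material properties – assumed values, not necessarily those in Varlamov’s paper:

```python
import math

def effusivity(k, rho, c):
    """Thermal effusivity e = sqrt(k * rho * c)."""
    return math.sqrt(k * rho * c)

def contact_temp(e1, t1, e2, t2):
    """Interface temperature of two semi-infinite bodies brought into contact."""
    return (e1 * t1 + e2 * t2) / (e1 + e2)

# Illustrative properties (k in W/m.K, rho in kg/m^3, c in J/kg.K);
# ballpark values, not the exact figures used in the paper.
e_dough = effusivity(0.5, 700, 2500)
e_brick = effusivity(1.0, 2000, 880)
e_steel = effusivity(50.0, 7800, 470)

t_oven, t_dough = 330.0, 20.0  # oven set point and room-temperature dough (deg C)

t_brick = contact_temp(e_brick, t_oven, e_dough, t_dough)
t_steel = contact_temp(e_steel, t_oven, e_dough, t_dough)
print(f"brick oven base: {t_brick:.0f} C")  # around 200 C
print(f"steel oven base: {t_steel:.0f} C")  # around 310 C
```

Because steel’s effusivity dwarfs the dough’s, the interface sits close to the oven temperature; brick’s much lower effusivity lets the dough pull the interface down towards the gentler temperature the pizzaiolos aim for.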
“That is too much! The pizza will just turn to coal!” Varlamov writes in the paper.
To achieve the perfect under-dough temperature, the electric oven needs to be reduced to 230 degrees. The problem is that the lower ambient temperature means there is now less radiation to cook the toppings.
The amount of radiant heat emitted is governed by what physicists call the Stefan-Boltzmann law, which states that it is proportional to the fourth power of a body’s absolute temperature.
The 100-degree drop in electric oven temperature thus, Varlamov and colleagues calculate, cuts the radiation roughly in half. This means the toppings need double the cooking time compared with a wood-fired oven, and will still not have the authentic taste.
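That factor of two falls straight out of the fourth-power scaling once the two oven settings are converted to kelvin:

```python
# Radiated power scales as the fourth power of absolute temperature
# (Stefan-Boltzmann), so the ratio between the two oven settings is:
def radiant_ratio(t_hot_c, t_cool_c):
    return ((t_hot_c + 273.15) / (t_cool_c + 273.15)) ** 4

ratio = radiant_ratio(330, 230)  # wood-fired vs electric oven settings
print(f"{ratio:.2f}")  # just over 2: the cooler oven radiates about half as much
```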
“You need the radiation from above, because it should be crisp,” the researchers write, while noting that electric ovens also change the sensory properties of pizza toppings in not altogether happy ways that wood-fired cookers don’t.
Varlamov’s study in Rome is ongoing. He is also researching the cooking of pasta, and the making of coffee. He cites “love” as one of the main motivations for Italian baristas to create great coffee for 70 cents. In contrast, Parisians use the same types of Italian-made coffee machines, and charge five euros for a cup of “liquid shit”.
The fundamental principle at the base of Einstein’s Theory of General Relativity – that all objects in freefall accelerate identically – has been verified on a stellar scale.
Einstein predicted that all objects behave identically when falling in an external gravitational field. For his theory to remain unchallenged it has long been thought critical to establish that the principle holds at all scales within the universe.
In particular, it was felt important to test whether the prediction plays out even in systems involving objects with strong “self-gravity” – that is, where the combined gravity of the constituents within an object, which thus hold it together, is extremely powerful. Neutron stars are examples of stellar objects with strong self-gravity.
Interestingly enough, despite their huge size, most objects in the universe – the majority of planets, stars and even galaxies, for instance – do not exert powerful enough self-gravity to serve as test subjects for the envelope-pushing extremes of Einstein’s theory.
Calculations arising from NASA’s MESSENGER mission to Mercury, which wound up in 2015, have been used to verify general relativity in relation to that planet’s orbit around the sun. However, on cosmic scales, the self-gravity of the bodies involved was too weak to test the predictions at their upper limits.
Similarly, the weak gravitational pull of the Milky Way limited the precision of attempts to test the theory using measurements obtained from pulsar and white dwarf binary star systems.
Now, however, a team of astronomers led by Anne Archibald from the University of Amsterdam in the Netherlands, has found a way around such limitations.
In a paper published in the journal Nature, she and her colleagues report results arising from the careful measurement of a triple star system, called PSR J0337+1715.
The system was discovered in 2014, and comprises a pulsar, or neutron star – a super-dense object, and therefore one with extremely strong self-gravity – and two white dwarf stars.
One of the white dwarfs is locked in a tight 1.6 day orbit around the pulsar. The pair then take 327 days to orbit the second white dwarf. The whole system completes its dual orbits in an area smaller than the space described by Earth’s orbit around the sun.
Archibald and colleagues studied the system to detect how the gravitational pull of the outer white dwarf affected the orbits of the inner one and the pulsar. If Einstein’s predictions were incorrect – that is, if they did not describe the behaviour of objects with very strong self-gravity – then the pulsar and the inner white dwarf should have behaved differently.
They didn’t. The scientists found that the measurements for the accelerations of the two bodies had a “fractional difference” of just 2.6 millionths. The result, they write, establishes the universality of freefall to a measurement “almost a thousand [times] smaller than that obtained from other strong-field tests”.
In its obituary for Marie Curie, who died on July 4, 1934, The New York Times wrote: “Few persons contributed more to the general welfare of mankind and to the advancement of science than the modest, self-effacing woman whom the world knew as Madam Curie. Her epoch-making discoveries of polonium and radium, the subsequent honours that were bestowed upon her – she was the only person to receive two Nobel prizes – and the fortunes that could have been hers had she wanted them, did not change her mode of life.
“She remained a worker in the cause of science … And thus she not only conquered great secrets of science but the hearts of the people the world over.”
Born Maria Sklodowska in Warsaw, Poland, on November 7, 1867, Marie Curie became the first woman to win a Nobel prize and, as The Times noted, at the time she was the only person to win the award twice.
In 1891 she went to Paris and studied at the Sorbonne, where she earned degrees in physics and mathematics. She met Pierre Curie, professor in the School of Physics, in 1894 and they were married the following year. She succeeded her husband as head of the physics laboratory at the Sorbonne, gained her Doctor of Science degree in 1903, and following Pierre’s death in 1906 took his place as professor of general physics in the Faculty of Sciences. It was the first time a woman had held the position.
The Curies built upon the work of French physicist Henri Becquerel, who in 1896 had been investigating X-rays, which had been discovered the previous year.
According to Nobelprize.org, “By accident, [Becquerel] discovered that uranium salts spontaneously emit a penetrating radiation that can be registered on a photographic plate. Further studies made it clear that this radiation was something new and not X-ray radiation.”
The Curies took Becquerel’s work a few steps further. Marie was studying uranium rays and found they did not depend on the uranium’s form, but on its atomic structure. Her insight opened a new field of study, atomic physics, and she coined the term “radioactivity”.
Marie and Pierre worked with the mineral pitchblende, a form of the crystalline uranium oxide mineral uraninite, which is about 50 to 80% uranium. Through this research, they discovered the radioactive elements polonium and radium. In 1902 the Curies announced that they had produced a decigram of pure radium, demonstrating its existence as a unique chemical element.
In 1903, Marie and her husband won the Nobel prize in physics for their work on radioactivity. In 1911, Marie won her second Nobel, this time in chemistry.
By the late 1920s her health was beginning to deteriorate. She died of aplastic anaemia, caused by long exposure to the high-energy radiation of her research. The Curies’ eldest daughter, Irene, was also a scientist, and also won a Nobel prize in chemistry.
The Queensland Museum, situated in the Australian city of Brisbane, is set to throw open the doors to its new interactive science gallery named SparkLab on Monday September 17.
This $9.4 million venture is a rejuvenated version of the museum’s existing science centre and promises to be an unparalleled experience for children and adults. It boasts 40 flexible displays and exhibits, spread across three zones that allow visitors to explore and experience how science, technology, engineering and maths (STEM) form an integral part of our daily life.
SparkLab was born from a strategic partnership with the Science Museum in London, and took its inspiration from the hands-on Wonderlab galleries housed there.
It aims to promote inquisitiveness and inspire awe about the world we live in, and ultimately to prepare young people for future STEM careers. Visitors will be encouraged to use the techniques of scientists and innovators to create and evaluate their own experiments and answer their own questions in the “maker space”. A “science bar” will host live demonstrations, and new, ongoing science shows will also be featured.
There are plans to build similar interactive science centres in other parts of Queensland, including Ipswich, Townsville and Toowoomba. According to Jim Thomson, the Queensland Museum Network acting CEO and director, this will occur in collaboration with the electricity company Energy Queensland and its affiliates.
SparkLab is a timed exhibit, and tickets can be purchased online.
There are special discounted tickets for schools and groups that include children aged six to 13 years, with teacher and supervisor previews also available.
As the Cold War emerged in the West following World War II, basic scientific research became a high priority for the United States. The US government established the Department of Defence and National Science Foundation, and research spending increased from $265 million in 1953 to approximately $35 billion by 2015 – a 15-fold increase when adjusted for inflation. According to an index of data compiled by the journal Nature, the investment has paid off well.
For the last three years, the Nature Index has measured how frequently countries and institutions contribute to 82 of the most prestigious publications in the natural sciences. The 2017 data was released in June, showing that the US continues to lead all other countries by a wide margin, as it has for the better part of a century. China, Germany and the UK follow, but these three countries combined still contributed less to science than the US last year.
However, the data also reveals that the US is quickly losing ground. Contributions to major scientific journals by American scientists have steadily declined over the past few years, while contributions from China are on the rise. If these two opposing trends are sustained, China’s scientific output will eclipse that of the US within seven years.
This trend is not surprising to some. The US National Academy of Sciences, National Academy of Engineering and Institute of Medicine issued a joint report back in 2007 that warned of an eroding US influence in science and technology in the wake of China’s economic growth.
A similar, but more emphatic, report followed in 2010 with the resonant conclusion that, because of their waning influence on the frontiers of science, “the United States appears to be on a course that will lead to a declining, not growing, standard of living for our children and grandchildren”. The authors recommended 20 specific investments in STEM education and research funding that would stimulate technological advancement and avert this scenario.
A Nature Index graph showing the US, Germany, UK and Japan heading south, and China heading north.
If the United States has failed to heed these recommendations, then clearly China has not.
According to a 2014 study published in the journal Proceedings of the National Academy of Sciences (PNAS), there are four major factors that are driving China’s rapid ascent into scientific preeminence.
First, the nation has a large population, which means that it has an enormous capacity for labour growth in science and engineering occupations. Although the proportion of workers in the sciences and engineering is lower in China than it is in the US, the sheer numbers are comparable because China’s population is four times larger.
Second, the number of Chinese scientists and engineers is growing. China has made a significant investment in expanding education since 1999 and consequently doubled the number of higher education institutions in the decade that followed.
Along with the increase in schools, there is also an increase in Chinese students studying STEM. In 2010, 44% of higher education students in China were majoring in science or engineering, compared with only 16% in the US. This is partly driven by economics. In China, scientists earn more than doctors and lawyers. Meanwhile, the opposite is true in the US, where passionate scientists are relegated to a state of relative poverty in comparison to professionals in other occupations that require a similar amount of training.
Third, China has aggressively recruited notable senior-level Chinese scientists from abroad to return to their homeland. In 2008, the Chinese government launched The Recruitment Program of Global Experts as part of the Thousand Talents Program, which was designed to pry outstanding Chinese academics away from their positions at high-tier foreign research institutions. Lured by prestigious programs such as this, and the lucrative salaries and start-up packages that come with them, more than a few Chinese-American scientists have left the departments they once chaired at elite universities in the US and taken their talents to China.
Finally, China is increasing investments in science. In 1991, China put 0.7% of its gross domestic product into research and development, and by 2016 it had tripled that investment to 2.1%. Meanwhile in the US, gross expenditures on research and development (GERD) have hovered between 2.4% and 2.9% since 1991. Although China still falls behind in this category, it is catching up very quickly.
The rise of China means certain change, but that does not necessitate the collapse of the US or any other nation, according to the study’s authors.
“Today’s world of science may be characterised as having multiple centres of scientific excellence across the globe,” they conclude. “When science in China and other fast-developing countries improves, it greatly expands the scale of science and thus speeds up scientific discoveries, benefitting the entire human race.”
One is linear and the other circular, one underground and the other outside, one opened just 12 months ago, the other has been operating for years – but together they comprise one of the world’s largest science facilities. Their names are European XFEL and DESY, and they are located not far from the famous harbour of Hamburg in Germany.
The European XFEL is a collaboration between 11 countries. The acronym stands for X-Ray Free-Electron Laser. It is the world’s largest and most powerful device able to generate ultrashort laser X-ray light flashes.
X-rays are electromagnetic waves, similar to radio waves, microwaves, or visible light, but with a much shorter wavelength.
Conventional optical microscopes cannot see objects smaller than the wavelength of visible light, but X-ray detectors exploit their shorter wavelengths to “see” matter down to a molecular level of detail.
Scientists classify X-rays in two sub-classes: “soft”, which are less energetic and with wavelengths comparable to large molecules, and “hard”, with even shorter wavelengths comparable to small molecules and even atoms.
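This soft/hard classification maps directly onto photon energy through the standard relation E = hc/λ. As a quick illustration (a textbook conversion, not drawn from the article), shorter wavelengths mean more energetic photons:

```python
# Photon energy from wavelength: E = hc / lambda.
# hc ~ 1239.84 eV.nm, so a 1 nm X-ray photon carries roughly 1.24 keV.

HC_EV_NM = 1239.84  # Planck constant x speed of light, in eV.nm

def photon_energy_kev(wavelength_nm: float) -> float:
    """Return photon energy in keV for a given wavelength in nanometres."""
    return HC_EV_NM / wavelength_nm / 1000.0

# "Soft" X-rays sit at wavelengths comparable to large molecules (~1-10 nm);
# "hard" X-rays reach towards small molecules and atoms (~0.01-0.1 nm).
for wl in (10.0, 1.0, 0.1, 0.01):
    print(f"{wl:6.2f} nm  ->  {photon_energy_kev(wl):8.3f} keV")
```

The hundred-fold drop in wavelength from soft to hard X-rays corresponds to a hundred-fold rise in photon energy, which is why hard X-rays can resolve atomic-scale detail.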
What are Free-Electron Lasers, or ‘FELs’? In conventional lasers light is emitted by the electrons of excited atoms. Since these electrons are bound to specific levels of atomic energy, this kind of device can produce light only at pre-determined wavelengths. In FELs, by contrast, light is emitted by electrons that have been stripped from their atomic bonds.
The wavelengths at which they emit are determined by their velocity, so in principle FELs can produce light in any band of the electromagnetic spectrum.
To generate X-ray flashes, bunches of electrons are first accelerated to high energy and then directed through special arrangements of magnets called “undulators.” The accelerated particles emit radiation that is increasingly amplified until an extremely short and intense X-ray flash is finally created. Undulators can be arranged in several ways to generate flashes with different characteristics.
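The wavelength such an arrangement produces follows the standard on-axis undulator equation – a textbook result, not stated in the article:

```latex
\lambda = \frac{\lambda_u}{2\gamma^2}\left(1 + \frac{K^2}{2}\right)
```

Here λ_u is the undulator magnet period, γ is the electron’s Lorentz factor, and K is the dimensionless undulator strength parameter. Because γ grows with accelerator energy, GeV-scale electrons compress centimetre-scale magnet periods down to X-ray wavelengths.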
The European XFEL generates its intense X-ray flashes – typically lasting between a few tens and a few hundreds of femtoseconds, or quadrillionths of a second – using high-energy electrons from a superconducting particle accelerator.
The device comprises a tunnel 3.4 kilometres long and was built over the past seven years by a consortium of 17 research institutions. The project was led by scientists at the Deutsches Elektronen-Synchrotron (DESY), which is located at the other end of it.
The accelerator releases up to 27,000 flashes per second, each with a brilliance that is a billion times higher than that of the pulses made by DESY itself, which was considered the best source for X-ray radiation before the development of free-electron lasers.
This is important, because with its extraordinarily bright, highly energetic and extremely intense X-ray flashes, the European XFEL will yield new insights in a wide range of research fields.
For instance, it will enable scientists to take 3D images of nanoparticles, to produce images of viruses and biomolecules at atomic resolution, and investigate how materials behave under the extreme conditions that reign deep within giant gas planets or stars.
The high number of flashes per second will also make it possible to film chemical reactions in super-slow motion.
There are many fields of application, including photonics, biology, energy research, medicine, pharmacology, chemistry and physics, astrophysics, materials science, electronics, nanotechnology, and environmental research into artificial photosynthesis.
The “first lasing”, or the first laser light, was produced at the beginning of May 2017, delivering one X-ray flash per second. The European XFEL started user operations in September the same year, attracting research teams from all over the world.
Scientists from Turkey, the US and the UK have developed a material that might provide the first practical thermal camouflage.
Coskun Kocabas and his team from Bilkent University in Turkey, the Massachusetts Institute of Technology, and the University of Manchester reveal their research in the journal Nano Letters.
Thermal camouflage might be the last word in tactical concealment. You’ve seen it in a hundred military thrillers and action movies – the tense chase or battle scene shown entirely in the distinct green night-time vision of infrared goggles. Sometimes the same effect is achieved with something more like the product of magnetic resonance imaging (MRI) – all cool blues and greens, with globs of dramatic yellows and reds to show the enemy just waiting to step into your crosshairs.
The bad guy, or bad thing, in the Predator series detects its prey by seeing its heat signature.
But where nature gives us plenty of guidance about how to hide in the visible spectrum – think of the grey, brown and green of army fatigues – there’s a dead giveaway when it comes to seeing in a different light. Warm-blooded bodies like ours give off heat that lights us up like a proverbial Christmas tree on thermal imaging.
Scientists have long tried to develop thermal camouflage – the military and law enforcement applications are obvious – but so far there have been too many practical stumbling blocks.
Any system needs to adapt to whatever background temperature it’s trying to match. It needs the ability to be built into or attached to tough materials like Kevlar. But most of all it has to be quick acting – showing up against the background for even a few seconds can be too much.
Kocabas and colleagues created a film comprising multiple ultra-thin layers of graphene and a bottom layer of gold, with non-volatile ionic liquid in between them. When a small current is applied, the ions move up into the graphene layer, cutting down the infrared radiation the surface would normally emit.
Because it’s thin, light and flexible the film can be applied to any number of surfaces, including clothing.
Tests have successfully camouflaged the hand of a subject wearing a covering of the material, and others have shown it to be indistinguishable from its surroundings across a variety of ambient temperatures.
When the reboot of the classic alien monster movie The Predator arrives in September, with its villain’s trademark heat-detecting vision, maybe this time we’ll be ready …
In common usage, the word “chaos” means disorder, but is that so in physics? Not really. Chaos in physics stands for “unpredictable” and refers to physical systems that change their state over time.
A physical system is simply a slice of the universe that we decide to treat as somehow separable from its surrounding environment. Sometimes we account for the collective effect of the system’s surroundings, but more often we prefer to assume that the system is isolated. Does true isolation exist? No: it is an artificial construct.
For example, a crystal is traditionally defined as a solid possessing an ordered structure that is infinitely periodic in the three spatial dimensions. Even assuming that such a “perfectly ordered” system could exist (and it cannot), such a system would still have a finite size in reality.
It is easy to realise that such perfect structures are therefore more imaginary than real: crystals, as they are found in nature, are not only of a finite size, but also possess many defects and are far from perfect.
Luckily for us all, perfection is subjective, and in physics a crystal is usually described by a set of rules, commonly annotated as equations, defining a set of symmetry operations that can be repeated recursively on a set of points representing atomic centres. These unfold into an infinite, 3D, periodic structure.
These rules are arbitrarily defined as “perfect” simply because they describe infinitely self-repeating patterns. Therefore, any real structure is, by comparison, very imperfect.
How can we predict the discontinuities and irregularities of a real structure if our model does not allow for defects? We cannot. The system we describe could therefore be labelled as “chaotic”, because it is unpredictable using those “perfect” rules.
We can thus conclude that the issue of chaos has nothing to do with reality, and a lot to do with its human interpretation.
Such “imperfections” also give rise to an interesting phenomenon: the fractal geometry of nature, which is also the title of a book written by a famous mathematician who studied these ubiquitous structures: Benoit Mandelbrot.
What are fractals? Fractals are technically geometric structures with a fractional dimension, for example 2.3. To understand what I just wrote, however, I’ll give you an example: suppose you are told to trace a straight line. We know from elementary school that straight lines are just an infinite set of points lying in one dimension, and a straight line itself is therefore infinite.
Can you actually draw a straight line? No, but you can possibly draw a segment! A segment is an infinite set of points delimited by two extreme points. If for a straight line you need a whole dimension to trace it all, for a segment you will certainly need less than that!
In other words, you will need a fraction between 0 and 1 to trace it: therefore a segment constitutes a very simple, yet fractal, geometry! Can you predict, using the equation of a straight line, y = a * x + b, all possible segments that lie on a single dimension? Yes, but such an equation would generate an infinite, uncountable, uncomputable and therefore inherently unorderable set of values for the coordinates of the segment extremes.
We can now confidently state that nature seems fractal, but is that truly so? One may argue that the answer to this question has more to do with philosophy than physics, and in a way that would be correct. But what if fractals are just an emergent property of our innate inability to grasp infinity?
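The fractional-dimension idea can be made concrete with the textbook similarity dimension, D = log N / log(1/r), for a shape built from N copies of itself, each scaled down by a factor r. The examples below are standard ones from the fractal literature, not drawn from the essay above:

```python
import math

def similarity_dimension(copies: int, scale: float) -> float:
    """Dimension D solving copies * scale**D = 1, i.e. D = log(copies) / log(1/scale)."""
    return math.log(copies) / math.log(1.0 / scale)

# Each shape is assembled from `copies` self-similar pieces, each `scale` times smaller.
examples = {
    "line segment (2 halves)": (2, 1/2),  # D = 1: ordinary, not fractal
    "Cantor set":              (2, 1/3),  # D between 0 and 1
    "Koch curve":              (4, 1/3),  # D between 1 and 2
    "Sierpinski triangle":     (3, 1/2),  # D between 1 and 2
}
for name, (n, r) in examples.items():
    print(f"{name:26s} D = {similarity_dimension(n, r):.3f}")
```

A segment halved into two half-length copies recovers the familiar dimension 1 exactly; genuinely fractal sets like the Cantor set or Koch curve land strictly between the integer dimensions.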
A new theory could help engineers to create materials for use as smart textiles, artificial tissue and even the aerodynamic contours in jet turbines – by knitting them.
The oldest known example of knitting is from Egypt over a thousand years ago, yet until now scientists did not fully understand how a non-stretchy yarn could be entangled into such a stretchy fabric.
The new study, published in the journal Physical Review X, boils knitwear properties down to three parameters – yarn bendiness, the length of yarn, and the number of crossing points in each stitch.
The simplicity of the model will enable scientists to begin designing knitted materials with custom shapes and properties, says lead author of the paper, Samuel Poincloux from the École Normale Supérieure in Paris, France.
“If you understand it well, you can tune your structure to have the properties that you want,” he says.
Poincloux’s analysis found that the primary source of a knitted fabric’s stretchiness came from the loop created as the yarn in one row of stitches weaves through the row above and the row below. When the fabric is pulled, it is able to stretch because the loops become distorted. The energy to do so comes from bending the yarn.
The way the loop can stretch is also limited by the number of times the yarn crosses with neighbouring stitches, and the total length of the yarn in the fabric.
The analysis found yarn bendiness, crossing points and total yarn length were enough parameters to accurately deduce the properties of the material. Poincloux says this should provide a theoretical basis for material designers looking for a much more efficient way to create new materials, compared with the trial-and-error methods currently used in industry.
“When I discussed with industry they said they do not have a good fundamental bottom-up design model,” Poincloux explains.
The work originated when Poincloux’s PhD supervisor, Frédérick Lechenault, watched his wife knit clothes for their unborn child. Lechenault marvelled at the way she could create three-dimensional shapes, such as booties, that would return to their shape even after being stretched significantly.
Lechenault was at the time studying origami, investigating how structure created by folds could alter the properties of a paper object. The one-dimensional nature of yarn suggested the challenge would not be too complex, so he set Poincloux to work analysing knitting stitches.
To get his head around the task Poincloux visited the knitting workshop at the École Nationale Supérieure des Arts Décoratifs across the road from his lab, and learned some stitches.
“Discussing with artists and designers is very interesting, they have a different point of view and a very deep knowledge of what you can do, which can inspire you to design new stuff,” Poincloux says.
“Knits have interesting three-dimensional shapes that appear naturally by changing the stitch pattern. The composite industry can use knitting because it’s quite stretchable and they can fit it to a complex shape, for example in an aeroplane engine turbine.”
In 1919, British astronomer Sir Arthur Eddington travelled to the West African island of Príncipe to take measurements of starlight passing so close to the sun that it could only be seen during a total eclipse. His goal was to test a young German physicist’s prediction that the gravity of the sun warps nearby spacetime, thereby bending any light passing through it.
Eddington’s results confirmed the theory, propelling Albert Einstein to international fame. But it is only now, 99 years later, that astrophysicists have found a way to test Einstein’s theory of gravity on a larger scale, verifying that what works for stars and planets also works for galaxies. In the process, they have improved the case for a mysterious substance known as dark matter and its even more enigmatic cousin, dark energy.
The new experiment used a galaxy known as ESO 325-G004, 450 million light years away in the constellation Centaurus.
ESO 325-G004 happens to lie almost directly in our line of sight to a much more distant galaxy, billions of light years behind it. This positioning means that light coming from the more distant galaxy is bent by the warped spacetime produced by the gravity of ESO 325-G004.
That produces multiple images of the distant galaxy on all sides of ESO 325-G004, in a feature known as an Einstein ring.
“The radius of the ring depends on how much spacetime is curved by the foreground galaxy,” says Thomas Collett, an astrophysicist from the University of Portsmouth, UK.
“This is exactly the sort of thing Eddington did back in 1919,” he explains.
“The difference is that instead of doing it for the mass of a single star and measuring the curvature a few thousand miles away from the star, we’ve done this with an entire galaxy and are measuring the curvature 6000 light years away.”
Galactic-scale spacetime curvature measurements have been made by other methods, but none this precise. That has been a concern to astrophysicists, because there are alternative theories of gravity that differ from Einstein’s at galactic scales by margins small enough that they have never been ruled out.
“We’re more than twice as precise,” Collett says.
What allowed the increased precision, he says, is that ESO 325-G004 is close enough that new instruments on the European Southern Observatory’s Very Large Telescope, in Chile, allowed his team to map the speeds of stars within it to a spatial resolution of less than 500 light years. From that, he says, it’s possible to measure the mass of the galaxy, and thus calculate the diameter of the Einstein ring if Einstein’s theory is correct.
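The link between lens mass and ring size can be sketched with the standard point-lens formula, θ_E = √(4GM/c² · D_ls / (D_l·D_s)). The masses and distances below are illustrative round figures, not the paper’s measured values, and the flat-space shortcut D_ls = D_s − D_l stands in for the proper cosmological angular-diameter distances:

```python
import math

# Point-lens Einstein radius: theta_E = sqrt(4GM/c^2 * D_ls / (D_l * D_s)).
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8        # speed of light, m/s
M_SUN = 1.989e30   # solar mass, kg
LY = 9.461e15      # light year, m

def einstein_radius_arcsec(mass_kg, d_lens_m, d_source_m):
    # Flat-space shortcut for the lens-source distance; real analyses
    # use cosmological angular-diameter distances here.
    d_ls = d_source_m - d_lens_m
    theta = math.sqrt(4 * G * mass_kg / C**2 * d_ls / (d_lens_m * d_source_m))
    return math.degrees(theta) * 3600  # radians -> arcseconds

# Illustrative round numbers: a ~1.5e11 solar-mass lensing galaxy at
# 450 million light years, with a background source at ~2 billion light years.
theta = einstein_radius_arcsec(1.5e11 * M_SUN, 450e6 * LY, 2e9 * LY)
print(f"Einstein radius ~ {theta:.2f} arcsec")
```

A galaxy-mass lens at these distances yields a ring a few arcseconds across – large enough to resolve, which is exactly what makes a nearby lens like ESO 325-G004 such a sharp test.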
And just as Eddington found back in 1919 for the gravity of a single star, our sun, their results matched Einstein’s predictions.
That’s important, Collett says, because it plays into the search for dark matter and dark energy.
Dark matter is an invisible substance, so far detected only by its gravity, which appears to make up more than 80% of the matter in the universe. Dark energy is an even less understood force that appears to be working against gravity to accelerate the rate at which the universe is expanding.
Neither has ever been detected by anything other than these effects — a failure that has led some physicists to suggest that perhaps they don’t actually exist. Perhaps, these physicists say, the problem is with Einstein’s theory of gravity.
That idea, Collett says, has led to “modified gravity theories” that “explain away” things like dark matter and dark energy by predicting that gravity behaves differently at galactic or intergalactic scales than it does in our own solar system.
The findings from ESO 325-G004 say otherwise.
“Our result is showing that if there are deviations from general relativity, they can’t have much effect on the scale of individual galaxies,” Collett says.
The findings also reinforce the case for dark energy.
“It was tempting to have theories of gravity that explain [the accelerating expansion of the Universe] without dark energy,” Collett explains.
“This result says you probably do need dark energy.” Though, he adds, “it doesn’t tell us what it is.”
Brad Tucker, an astrophysicist and cosmologist at Australian National University, Canberra, who was not part of the study team, sees the new study as an important finding.
“They say gravity is the law, so you have to obey it,” he quips. “But it is still a theory that needs to be tested.”
Tests at small scales, within our solar system, have confirmed Einstein’s version of gravity to “astounding accuracy,” he says, but tests like this, on larger scales “have been hard, until now”.
And, he notes, the new study isn’t just an important confirmation of Einstein’s theory: “It is also an independent check on whether our understanding of the existence of dark matter and dark energy is correct.”
The new study is in the journal Science.
As a budding young American scholar, Thomas Kuhn, who was born in Cincinnati, Ohio, on July 18, 1922, studied physics at Harvard University, earning his first degree in 1943 and a master’s in 1946. But he found his true path in 1949, taking his PhD from Harvard in the history of science. His first book, The Copernican Revolution, from 1957, examines the development of the heliocentric theory of the solar system during the Renaissance.
Kuhn’s interests in science and history led him into philosophy and to his second book, in 1962, The Structure of Scientific Revolutions, which the Stanford Encyclopaedia of Philosophy calls “one of the most cited academic books of all time”.
In the book, Kuhn formulated his concept of “paradigm shift”. The Stanford authors explain how, through his study of science history, Kuhn had come to distrust the traditional perspective that science develops “by the addition of new truths to the stock of old truths, or the increasing approximation of theories to the truth, and in the odd case, the correction of past errors … but progress itself is guaranteed by the scientific method.”
Instead, he says science does not develop according to a set pattern, but has alternating “normal” and “revolutionary” phases. The revolutionary phases are not merely periods of accelerated progress, but differ qualitatively from normal science.
In its entry on Kuhn, the Encyclopaedia Britannica explains his concept: “Scientific research and thought are defined by ‘paradigms’, or conceptual world-views, that consist of formal theories, classic experiments, and trusted methods.
“Scientists typically accept a prevailing paradigm and try to extend its scope by refining theories, explaining puzzling data, and establishing more precise measures of standards and phenomena. Eventually, however, their efforts may generate insoluble theoretical problems or experimental anomalies that expose a paradigm’s inadequacies or contradict it altogether.
“This accumulation of difficulties triggers a crisis that can only be resolved by an intellectual revolution that replaces an old paradigm with a new one. The overthrow of Ptolemaic cosmology by Copernican heliocentrism, and the displacement of Newtonian mechanics by quantum physics and general relativity, are both examples of major paradigm shifts.”
The idea transformed scientific debate and modelling. To honour its influence, and that of its creator, the American Chemical Society each year presents the “Thomas Kuhn Paradigm Shift Award” to a researcher who promulgates ideas that best challenge the status quo.
Kuhn held professorial tenure at the Massachusetts Institute of Technology from 1979 until his retirement in 1991. He died on June 17, 1996, after a long battle with lung cancer.
The image above shows the interior of the liquid argon time projection chamber in one of the two protoDUNE neutrino detectors under construction at CERN, the European particle physics research centre on the French-Swiss border. This 10-metre-a-side refrigerated cube will hold an 800-tonne tank of the noble gas argon, chilled until it liquefies. When neutrinos collide with the argon atoms, they will produce tiny flashes of light that give away their paths.
The protoDUNE detectors are prototypes of the technology to be used in the Deep Underground Neutrino Experiment (DUNE), which is currently being built in a cavern 1.6 kilometres below ground in South Dakota, USA.
Researchers from 32 countries are working on the DUNE project, which may reveal more detail than ever before about the workings of the elusive neutrino.
Friction has memory, a team at Harvard University in the US has discovered.
Researchers found that the force which keeps your coffee cup from sliding about on the table is a changeable phenomenon. Mostly, friction increases the longer your cup sits still, but sometimes it can decrease – for example when its load lightens rapidly as the coffee is consumed.
Most surprisingly, the team found that, given the right circumstances, friction can even decrease for a while, then change direction and start to increase again – and that it all depends on what’s happened before.
This memory effect reveals friction to be a much more complex interaction than previously realised. Understanding it could change the way we think about a huge range of material behaviour – from earthquakes caused by sliding tectonic plates down to the precision operation of industrial machinery.
Sam Dillavou and colleagues stacked up two polymer blocks and set up a spring to push down on top of them. They then partially released the spring tension and measured the friction.
“Real surfaces are rough at small scales – think zooming in until our flat surface looks like a mountain range,” says Dillavou.
“When you press two surfaces together, only a small portion of the apparent area is in true contact, typically less than 1%.
“This real area of contact is what dictates the resistance to sliding of the interface … two surfaces in static contact are not truly static; these small areas of real contact are under enormous pressures and deform and grow over time.”
To understand the way surfaces mould to one another, the scientists conceptualised the two blocks covered in tiny springs that stuck up at different heights. As they forced the blocks together, the tallest springs compressed, allowing some of the shorter ones to come into contact.
When the force was lessened, the springs slowly decompressed and pushed the blocks apart slightly, which was why the friction decreased – sometimes for hours – before starting to increase again.
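The spring picture can be sketched in a few lines of code. This is an illustrative toy model only – the stiffness, heights and displacements below are invented for the demonstration, not taken from the Harvard experiment:

```python
import random

random.seed(1)

# Toy version of the "surfaces as springs" picture: each microscopic
# bump is a spring with a random height (all numbers are invented for
# illustration, not taken from the Harvard experiment).
k = 1.0                                    # spring stiffness, arbitrary units
heights = [random.gauss(0.0, 1.0) for _ in range(100_000)]
top = max(heights)

def contact(displacement):
    """Press a rigid plane down by `displacement` from the tallest bump.
    Returns (fraction of bumps in true contact, total normal force)."""
    plane = top - displacement
    compressions = [h - plane for h in heights if h > plane]
    return len(compressions) / len(heights), k * sum(compressions)

for d in (1.0, 2.0, 3.0):
    frac, force = contact(d)
    print(f"displacement {d}: true contact {frac:.3%}, normal force {force:.1f}")
```

Even under a substantial load, only a small fraction of the bumps make true contact – echoing the "typically less than 1%" figure quoted above – and both the contact fraction and the normal force grow as the surfaces are pressed together.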
It’s a model that’s successfully been used to understand disordered materials, such as glass, which do not have a regular crystal structure, Dillavou adds.
“We managed to quantify these behaviours using a theory that describes numerous glassy disordered systems such as crumpled paper and elastic foams,” he explains.
To quantify the actual contact area between the polymer blocks, the team shone a light into the lower one at a low angle. Where the surfaces touched, light was able to pass into the upper block. The brightness of light seen in the upper block gave a good indication of the contact area.
While the friction force was mostly correlated with the surface area, the experiment held another surprise, says Dillavou.
“We showed that under the right conditions, it is possible to have the real area of contact shrink while the frictional strength is growing,” he said.
“We believe we reconciled this apparent paradox by showing that frictional interfaces evolve non-homogeneously, and that certain regions become more important than others, exerting an oversized influence on the frictional strength.”
The research is published in the journal Physical Review Letters.
The human race is doomed never to find extraterrestrial life, because we are about to wipe it all out in a manner that is unintentional, yet horribly unavoidable.
That’s the conclusion reached by physicist Alexander Berezin from the National Research University of Electronic Technology (MIET) in Russia, in a new and admirably parsimonious solution to Fermi’s Paradox.
Fermi’s Paradox is named after Enrico Fermi (1901–1954), the Italian-American physicist who created the world’s first nuclear reactor. His musings, later sharpened by back-of-the-envelope calculations from astrophysicist Michael Hart, raised a disturbing and apparently intractable issue at the heart of the subject of extraterrestrial life – or, rather, its absence.
There are, the pair stated, billions of stars in the Milky Way – never mind the rest of the universe – that are similar to the Sun, with some of them considerably older. Given this, there is an extremely high probability that Earth-like planets also exist in abundance and some of these, therefore, must host intelligent life.
So far, so logical. But, the scientists continued, if this is so, then some of the intelligent lifeforms must have developed interstellar travel – because humans are working on it, and humans, in this set-up, can’t be anything special.
Ergo, even at a slow pace, some of these civilisations should have completely crossed the Milky Way by now.
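The timescale argument is easy to check with round numbers. A minimal sketch, assuming a "slow" ship travelling at 0.1% of light speed (all figures illustrative):

```python
# Back-of-the-envelope check of the claim above: even slow interstellar
# travel crosses the galaxy quickly on cosmic timescales.
GALAXY_DIAMETER_LY = 100_000        # Milky Way disc, in light-years
GALAXY_AGE_YEARS = 13e9             # approximate age of the Milky Way
SPEED_FRACTION_OF_C = 0.001         # a "slow" ship: 0.1% of light speed

crossing_time = GALAXY_DIAMETER_LY / SPEED_FRACTION_OF_C  # years
print(f"Crossing time: {crossing_time:.0e} years")
print(f"Fraction of galactic age: {crossing_time / GALAXY_AGE_YEARS:.1%}")
```

Even at that pedestrian pace, crossing the galaxy takes about 100 million years – well under 1% of the galaxy's age, leaving ample time for an old civilisation to have spread everywhere.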
Ergo, where is everybody?
There has been no shortage of attempts to resolve the problem. Possible explanations have included the idea that we humans aren’t using the correct search techniques and so keep missing ET; that ET has worked out ways of hiding from our searching stares; that the universe is so old that whole ET civilisations have risen and gone extinct without ever overlapping; and the face-palm-inducing suggestion that all advanced civilisations are so busy listening for the signals of others that none have remembered to actually broadcast one.
Berezin, however, dismisses these arguments because they “invoke multiple rather controversial assumptions”.
In a paper published on the academic pre-print site arXiv (and, thus, awaiting peer review), he advances his own tightly reasoned solution to the matter. It is something he does not so much with pride as with deep sorrow.
“I argue that the Paradox has a trivial solution, requiring no controversial assumptions, which is rarely suggested or discussed,” he writes.
“However, that solution would be hard to accept, as it predicts a future for our own civilisation that is even worse than extinction.”
Just let that sink in for a moment: worse than extinction.
To reach this rather depressing conclusion, the physicist ruthlessly pares away most of the assumptions that have been used in the past to define possible extra-terrestrial civilisations. They need to be “substrate-invariant”, he says, meaning they could be biological, robotic, or distributed planet-scale minds.
All that matters is that they fulfil just three criteria: they reproduce, they grow, and they come close enough for us to detect them. That’s all well and good, but thus far there is no sign of any ET doing that. The paradox still stands.
And this is where things turn a bit ugly.
“What if,” Berezin proposes, “the first life that reaches interstellar travel capability necessarily eradicates all competition to fuel its own expansion?”
This is not, he quickly adds, to imply that ET is warlike or cruel.
“I am not suggesting that a highly developed civilisation would consciously wipe out other lifeforms,” he writes.
“Most likely, they simply won’t notice, the same way a construction crew demolishes an anthill to build real estate because they lack incentive to protect it.”
Readers familiar with Douglas Adams’ novel The Hitchhiker’s Guide to the Galaxy might be thinking at this point of the incident at the start of the story in which an ET species known as the Vogons demolishes Earth in order to clear a path for an intergalactic highway, expressing neither care nor interest for all the life upon it.
The comparison, in Berezin’s model, is not inapt – except for one small, but critical, difference. This time, we’re the Vogons.
To draw this conclusion, the physicist deploys one of the more troubling, but central, tenets of cosmology, known as the anthropic principle. This idea, first formulated in 1974 by theoretical physicist Brandon Carter from the UK’s Cambridge University, holds that conditions within the universe “must be restricted by the conditions necessary for our presence as observers”.
It’s a powerful concept – albeit one that challenges the equally important Copernican principle that holds Earth and humanity to be nowhere and nothing special – that even Stephen Hawking entertained.
Applied to the Fermi Paradox by Berezin, however, it becomes an instrument of cosmic damnation. There is, he concludes, only one reason why ET, in all the stellar multitude, has not so far been seen.
“We are the first to arrive at the stage,” he says. “And, most likely, will be the last to leave.”
In other words, we are the paradox resolution made manifest. It is us, our species, who will spread through the universe, demolishing anthills along the way. Avoiding this fate, Berezin suggests, is impossible, because it will “require the existence of forces far stronger than the free will of individuals”.
At the conclusion of his paper, the author adds that he hopes he is wrong in his prediction.
“The only way to find out is to continue exploring the Universe and searching for alien life,” he adds – although many of his readers, perhaps, might conclude that this is not now the wisest course of action.
Ruthenium, the element that sits at number 44 on the periodic table, has been discovered to be magnetic at room temperature – becoming only the fourth element shown to be so.
The discovery, made by a team of scientists led by Patrick Quarterman of the US National Institute of Standards and Technology (NIST), paves the way for a new generation of sensors, computer devices and spintronic applications.
The first element known to be magnetic at room temperature, iron, was discovered many thousands of years ago. Two more – cobalt and nickel – were added more recently. Another, the rare earth gadolinium, is also magnetic, but only when it is cooled some eight degrees Celsius below room temperature.
Quarterman and his colleagues reveal ruthenium’s properties in a paper in the journal Nature Communications.
To make the discovery the team had to first find a way to “grow” the element into a structure that forced it into a magnetic phase. The correct form turned out to be an ultra-thin film – making it potentially very useful for next-gen electronics applications, many of which require manipulation on an atomic scale.
“This is an exciting but hard problem,” says co-author Jian-Ping Wang from the University of Minnesota in the US.
“It took us about two years to find a right way to grow this material and validate it. This work will trigger the magnetic research community to look into fundamental aspects of magnetism for many well-known elements.”
A simple experiment in which a tiny drum is beaten with a drumstick made of light could make quantum mechanics visible to the naked eye, elevating it from its usual mysterious atomic-scale behaviour.
Using a vibration motor from a mobile phone and an off-the-shelf silicon nitride membrane designed for electron microscopes, a team of researchers at the University of Queensland in Australia deployed a 20-year-old laser to study the interaction between light and vibrations.
The quantum effect the team was looking to demonstrate involved the membrane – the drum head – vibrating and being still at the same time – a quantum condition known as a superposition of states.
Superpositions have been observed with single atoms and photons, but the team’s new technique potentially allows them to extend that capability enormously. The 1.7-millimetre-square membrane comprises a million billion atoms.
The ability to understand and control the quantum behaviour of objects made of so many atoms could enable the development of incredibly sensitive detectors, said team member Till Weinhold.
“This could assist in the detection of gravitational waves with higher sensitivity, or for stabilisation in space flight,” he says.
Weinhold hopes the new method will help unlock more secrets of quantum physics.
“I think it should give us a new understanding of how our world is quantum mechanical, yet we don’t observe it,” he explains.
“We should be able to understand that line between the classical world that we live in and the quantum world that underlies it much better by pushing macroscopic objects into the quantum regime.
“We should get a much better understanding of why nature behaves the way it does.”
While the team’s method, reported in the New Journal of Physics, does not achieve enough sensitivity to observe quantum behaviour, they are confident the method will inspire other research groups to use the approach to study the quantum nature of light interacting with vibrating objects.
The team’s method was adapted from previous experiments that used elaborate set-ups including high vacuums, and dropping the temperature to close to absolute zero (minus-273 degrees Celsius) to create hybrid quantum states.
In this work, the team simply observed many reflections from the drum, and selected the rare events which became superpositions.
They identified photons that bounced back in a superposition state by splitting the laser beam in two, directing one onto a static reflector and the other instead onto the vibrator – a set-up known as an interferometer.
By recombining the halves of the beam as they returned they could tell how it had interacted with the vibration.
If the beam came back from the drum in a hybrid state it reflected into a detector, which told them they’d found what they were looking for.
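The principle of reading out a path-length difference as an intensity change is standard two-beam interference. The sketch below shows the textbook relation, not the Queensland team's actual analysis code:

```python
import math
import cmath

# Illustrative two-beam interference: one beam reflects off a static
# mirror, the other picks up a phase shift from the vibrating membrane.
# Recombining them maps phase onto detectable brightness.
def detector_intensity(phase):
    """Intensity (normalised to 0..1) after recombining two
    equal-amplitude beams with a relative phase in radians."""
    reference = 1.0                    # beam from the static reflector
    signal = cmath.exp(1j * phase)     # beam returning from the drum
    return abs(reference + signal) ** 2 / 4

for phi in (0.0, math.pi / 2, math.pi):
    print(f"phase {phi:.2f} rad -> intensity {detector_intensity(phi):.2f}")
```

At zero phase difference the beams add constructively (full brightness); at half a wavelength they cancel completely, which is how a tiny membrane displacement becomes a measurable signal at the detector.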
The team’s initial prototype had too much noise to fully confirm they had created quantum behaviour, because the energy imparted to the drum by the phone vibrator was equivalent to heating it to millions of degrees, a similar temperature to the centre of the Sun.
To reduce the noise enough to see accurate results, the researchers will need to perform the experiment in a cooled environment and in a vacuum, but not at the level of previous experiments.
“Depending on the system, it might be a factor of 20 warmer – a million times easier to achieve,” Weinhold says.
The pressure inside a single proton is enormously greater than that found inside a neutron star, according to the first measurement of the internal mechanical properties of subatomic particles.
In a study published in the journal Nature, a team headed by nuclear physicist Volker Burkert of the Thomas Jefferson National Accelerator Facility in the US reports that quarks, the building blocks of protons, are subjected to a pressure of 100 decillion pascals at the centre of the particle – about 10 times the pressure at the heart of a neutron star.
Pressure inside the particle, however, is not uniform, and drops off as the distance from the centre increases.
“We found an extremely high outward-directed pressure from the centre of the proton, and a much lower and more extended inward-directed pressure near the proton’s periphery,” explains Burkert.
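The headline numbers are easy to sanity-check. On the short scale a decillion is 10^33, so 100 decillion pascals is 10^35 Pa; the neutron-star core figure below is a commonly quoted order of magnitude, an assumption for illustration rather than a value from the paper:

```python
# Unit check on the quoted pressures (short-scale decillion = 1e33).
DECILLION = 1e33
proton_pressure_pa = 100 * DECILLION   # 1e35 Pa, as reported
neutron_star_core_pa = 1e34            # rough order of magnitude, assumed

ratio = proton_pressure_pa / neutron_star_core_pa
print(f"proton centre / neutron star core: {ratio:.0f}x")
```

The ratio comes out at the factor of ten quoted in the article.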
A proton is made up of three quarks, bound together by what physicists call the strong force. It is one of four fundamental forces that condition the universe. Two of these – electromagnetism and gravity – produce effects that govern macro-scale interactions. The other two, known as strong and weak, operate on a subatomic scale and determine nuclear reactions.
Obtaining detailed information about the internal mechanics of a subatomic particle has long been thought impossible, but Burkert and colleagues managed to do so, ironically enough, by combining modelling systems that rely on electromagnetism and gravity.
The researchers paired two theoretical frameworks to obtain their data.
The first concerned the distribution of partons – a term coined by physicist Richard Feynman to describe a method of modelling point-like entities inside protons and neutrons, namely quarks. Parton modelling allows researchers to produce a three-dimensional model of a proton as probed by the electromagnetic force.
The second framework involved gravitational form factors, which describe the scattering of subatomic particles by the classical gravitational field.
Combining the two approaches and applying the result to data obtained by using electron beams produced by a continuous beam accelerator at the Thomas Jefferson facility yielded world-first information.
“This is the beauty of it,” says co-author Latifa Elouadrhiri. “You have this map that you think you will never get. But here we are, filling it in with this electromagnetic probe.”
The findings are likely to generate great interest among other physicists.
“We are providing a way of visualising the magnitude and distribution of the strong force inside the proton,” says Burkert.
“This opens up an entirely new direction in nuclear and particle physics that can be explored in the future.”
A new book of essays by the late theoretical physicist Stephen Hawking will be published in October this year, adding to the public archive of the great scientist’s work – and almost certainly topping the bestseller lists in time for Christmas.
In a joint announcement, Hawking’s long-time North American publisher, Bantam, and his estate said the new book will be called Brief Answers to the Big Questions.
Work on the book was already underway before Hawking’s death in March this year at the age of 76.
According to the publisher, the new volume will comprise “a selection of his most profound, accessible, and timely reflections from his personal archive”.
It will be organised into four sections, tackling the questions: Why are we here? Will we survive? Will technology save or destroy us? How can we thrive?
Hawking’s agent said the book arose from thousands of letters sent to the physicist from people seeking answers to questions of science. Writing and collating responses to some of these was one of the last projects he concerned himself with.
“Communication was so important to our father in his lifetime and we see this book as part of his legacy, bringing together his thoughts, humour, theories and writing into one beautiful edition,” said his daughter, Lucy Hawking.
If you are sick of waiting for the kettle to boil for your morning coffee then a new technique to heat water with a powerful X-ray laser could be for you.
Scientists developed the technique at the Linac Coherent Light Source X-ray laser in California, US, and used it to heat water to 100,000 degrees Celsius – in just 75 femtoseconds, or 75 millionths of a billionth of a second.
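For scale, the quoted figures imply a staggering heating rate. The kettle comparison below uses invented round numbers purely for illustration:

```python
# How fast is 100,000 degrees in 75 femtoseconds? Compare with an
# everyday kettle (round numbers, for illustration only).
delta_T = 100_000            # degrees Celsius reached by the laser
duration_s = 75e-15          # 75 femtoseconds, in seconds
laser_rate = delta_T / duration_s       # degrees per second

kettle_rate = 80 / 120       # a kettle: roughly 80 degrees in 2 minutes
print(f"X-ray laser heating rate: {laser_rate:.1e} deg/s")
print(f"Compared with a kettle: {laser_rate / kettle_rate:.1e} times faster")
```

The laser heats the water at more than a billion billion degrees per second – many orders of magnitude beyond anything in the kitchen.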
The technique somewhat overshot the conditions required to produce a good espresso, instead turning the water into a dense, electrically charged state known as a plasma, resembling some extreme cosmic environments.
“It has similar characteristics as some plasmas in the sun and the gas giant Jupiter, but has a lower density. Meanwhile, it is hotter than Earth’s core,” says researcher Olof Jönsson from Uppsala University in Sweden.
The team’s technique, which was written up in the journal Proceedings of the National Academy of Sciences, may help scientists use X-rays to study the structure of liquids.
X-ray diffraction is a common technique for looking at the structure of crystals and other solids, but this experiment shows that the approach will need to be altered when applied to liquids.
“Any sample that you put into the X-ray beam will be destroyed in the way that we observed,” says co-author Kenneth Beyerlein from the Centre for Free-Electron Laser Science (CFEL) in Hamburg, Germany.
“If you analyse anything that is not a crystal, you have to consider this.”
The team’s measurements showed that the water molecules barely responded to the X-ray laser for the first 25 femtoseconds, but by 50 femtoseconds they were shedding electrons and turning to plasma.
The change to an electrically charged gas is the key to the sudden heating, which is quite different to what happens in the average electric jug, explains Carl Caleman from the Deutsches Elektronen-Synchrotron, also in Hamburg.
“The energetic X-rays punch electrons out of the water molecules, thereby destroying the balance of electric charges,” he says. “So, suddenly the atoms feel a strong repulsive force and start to move violently.”
Despite being common on Earth, water has unusual characteristics that make it interesting to study, says Jönsson.
“Water really is an odd liquid,” he notes, “and if it weren’t for its peculiar characteristics, many things on Earth wouldn’t be as they are, particularly life.”
An Australian-born scientist who was part of the Manhattan Project risked his liberty to tip off the British that the United States was planning to exercise complete post-war control over nuclear weapons, new research reveals.
Physicist Mark Oliphant was born in the Australian city of Adelaide in 1901. After graduating from university, he migrated to Britain and studied at Cambridge University, before moving to the University of Birmingham.
There, after working on the then top secret development of radar, in 1940 he developed an interest in uranium – specifically an isotope known as uranium-235 – and with colleagues deduced its value as an explosive.
By 1941 he was advising on the British war effort and urging a US-UK collaboration to construct an atomic weapon. Displeased by the lukewarm response emanating from American channels, he went so far as to fly across the Atlantic in a bomber aircraft (on the pretext of conducting talks about radar) and met up with American scientists and policy-makers, eventually convincing them that a nuclear bomb was a good, and urgent, idea.
Soon after, the Manhattan Project was formed, but not before senior UK and US diplomats met in Canada and signed what became known as the Quebec Agreement, which committed both countries to sharing research and resources on atomic weapons research.
In the journal Historical Records of Australian Science, Darren Holden from the University of Notre Dame in Fremantle, Australia, reveals that Oliphant several times came up against the tight security protocols in place to safeguard the project against leaks, and earned the unwanted attention of the FBI in the process.
That Oliphant – who went on in later years to win many awards and honours – was ill-amused by the idea of research “compartmentalisation” that was the bedrock of the US-Britain project has been well documented.
Holden, however, reveals documents that indicate on one occasion he risked leaking information of such potential magnitude that the FBI might well have taken drastic action against him had he been discovered. Certainly, it imperilled the cooperative relationship between the two nations regarding weapons research – by revealing that the cooperation was, in fact, only going one way.
The researcher details a 1944 meeting at Berkeley, California, between Oliphant, Nobel laureate Ernest Lawrence and the military man in charge of the Manhattan Project, General Leslie Groves. At the fiery meeting, Groves – usually a very discreet operator – let it slip that there were some parts of the research that were not being shared with the British.
An imminent visit to Washington DC by one of Winston Churchill’s closest advisors, Lord Cherwell, was to be an exercise in misdirection. Further, Groves revealed, after the end of World War II the US intended to ensure that nuclear weapons manufacture and the storage of nuclear material would happen only in the central portions of North America.
Secretly appalled by this, Oliphant – Holden plausibly suggests – stopped work almost immediately and then travelled, perhaps for three days on a train, to the British embassy in Washington DC, and there sent a confidential memo to the British authorities, blowing the whistle on the US intentions.
His message caused a stir – albeit a secret one – in the upper echelons of British power, with senior figures including Lord Cherwell and the then-Chancellor of the Exchequer, Sir John Anderson, becoming involved. Much of their efforts were aimed at encouraging discretion in Oliphant himself, who was agitating for the revival of a mothballed British atomic bomb project – one with much less compartmentalisation.
In telling the story, Holden is ambivalent in his conclusions.
“Was Oliphant’s report designed purely to alert the British of United States plans to monopolise the new technology?” he asks.
He concedes that the physicist was aware that the information blurted out by Groves carried serious implications for Britain’s ability to defend itself in the future, and that he thus ensured the UK government was aware of the situation.
Holden also accords him an even higher, more noble motivation, that of “a duty-bound requirement to serve the science and protect scientific freedoms — that knowledge be the preserve of no single state or person.”
However, he adds that Oliphant also provided secret information to the Australian government.
“This suggests a different motivation at play,” he notes, “in that Oliphant was thinking, possibly selfishly, about his own post-war research.”
In time, Oliphant’s warning proved prescient. In 1946, the US essentially ended the Quebec Agreement by legislating that some atomic research would not be shared.
“What could not be contained, however,” he concludes, “was the knowledge held in the minds of all the scientists. The Manhattan Project was a gathering of a critical mass of multinational scientists, and after the war it exploded and radiated knowledge beyond the borders and insular laws of the USA and into a hopeful world.”
Humans are noisy swimmers. But just how much noise do we make thrashing around in the water, and is there enough of it to disturb marine wildlife?
To find the answer, a team led by Christine Erbe, director of the Centre for Marine Science & Technology at Curtin University, Perth, Western Australia, outfitted an Olympic-sized pool with acoustical sensors and had people practice swimming, diving, kayaking, and scuba diving in it.
What they found was both reassuring for ocean swimmers not wanting to leave environmental chaos in their wake, and intriguing for coaches looking for new ways to improve in-the-water technique.
Most of the noise, Erbe said last week at a meeting of the Acoustical Society of America in Minneapolis, Minnesota, comes from generating and pushing around clouds of bubbles as we kick, dive, or move our arms. It’s loud enough, she says, to be heard from tens to hundreds of metres away, but almost certainly not loud enough to disturb distant wildlife.
In terms of noise management in the oceans, she says, even the noisiest humans are small fry compared to the big emitters like ocean vessels, offshore construction, deep-sea mining, oil and gas development, or even jet skis and motor boats — “all the things that are obviously loud in the water”.
The experiment was simple and inexpensive, since Erbe’s lab already had all the needed equipment.
“We simply threw the gear we had in the pool,” she laughs. “Everyone swam.”
Sometimes, even scientists want to have fun in the water. Especially in Australia.
But the results were also quite interesting, especially when the scientists compared the recorded sounds to videos of what produced them.
In swimming, Erbe says, “it’s all related to bubbles created when [we] pierce the water with our heads, hands, or fins. In scuba it’s the exhalations and inhalations. Kayak sound was interesting because every time the paddle comes out of the water it dribbles and you get this tinkle sound.”
There was also a difference between swimming strokes (breaststroke was quieter than freestyle) and, most interestingly, between individual swimmers. Once you’d familiarised yourself with a swimmer’s acoustical signature, she says, you could listen blindfolded and know who was swimming and at what speed.
“Some of us use much more force in swimming than others,” she explains. “For example, some swimmers exerted more energy vertically downward — pushing huge and noisy bubble clouds underwater, while other swimmers used their energy more in a forward-propelling sense and created much less bubble noise vertically below them.”
One possible application is to seek a better understanding of shark attacks. “We know sharks don’t go hunting for humans,” Erbe says. “There is a hypothesis that they mistake humans for prey. So one of the things one could look at is how do the signatures of humans compare to the natural sounds of prey for sharks.”
Another application might be security for swimming pools. “You could set up equipment that automatically listens and sends an alarm,” she says.
But an even more interesting application, she says, is as a training device for swimmers and coaches.
“Those who create the most bubbles are not the best swimmers,” she says. “They waste energy. The best swimmers are certainly also the quiet ones. That might be good for performance feedback.”
Furthermore, the needed equipment isn’t all that expensive. “You can do it with a GoPro,” Erbe says. Even a good hydrophone only costs a few hundred dollars.
Andrew Sheaff, assistant swimming and diving coach at the University of Virginia in the US, agrees that acoustics might help coaches identify previously overlooked nuances in swimming technique.
“I’m always looking for ways to identify the characteristics for fast swimming,” he says. “Adding an acoustic element would further expand our knowledge of what excellent swimming looks like…we [might] gain insight that was previously missed. Once we identify what these characteristics are, we can begin the process of helping swimmers learn them.”