Science Made Simple: What Are Batteries and How Do They Work?

How Lithium Ion Batteries Work

Batteries and similar devices accept, store, and release electricity on demand. Batteries use chemistry, in the form of chemical potential, to store energy, just like many other everyday energy sources. For example, logs store energy in their chemical bonds until burning converts the energy to heat.

Gasoline is stored chemical potential energy until it is converted to mechanical energy in a car engine. Similarly, for batteries to work, electricity must be converted into a chemical potential form before it can be readily stored.

Batteries consist of two electrical terminals called the cathode and the anode, separated by a chemical material called an electrolyte. To accept and release energy, a battery is coupled to an external circuit. Electrons move through the circuit, while simultaneously ions (atoms or molecules with an electric charge) move through the electrolyte.

In a rechargeable battery, electrons and ions can move either direction through the circuit and electrolyte. When the electrons move from the cathode to the anode, they increase the chemical potential energy, thus charging the battery; when they move the other direction, they convert this chemical potential energy to electricity in the circuit and discharge the battery. During charging or discharging, the oppositely charged ions move inside the battery through the electrolyte to balance the charge of the electrons moving through the external circuit and produce a sustainable, rechargeable system. Once charged, the battery can be disconnected from the circuit to store the chemical potential energy for later use as electricity.
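
As a concrete example (the common lithium-cobalt-oxide chemistry, one of several in use, and not singled out in this article), the charge-balancing ion in a lithium-ion cell is Li⁺. During discharge, lithium ions leave the graphite anode and insert into the cathode while the electrons travel through the external circuit:

$$\text{Anode (discharge):}\quad \mathrm{LiC_6 \;\rightarrow\; C_6 + Li^+ + e^-}$$
$$\text{Cathode (discharge):}\quad \mathrm{CoO_2 + Li^+ + e^- \;\rightarrow\; LiCoO_2}$$

Charging drives both reactions in reverse, restoring the chemical potential energy.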

Batteries were invented in 1800, but their chemical processes are complex. Scientists are using new tools to better understand the electrical and chemical processes in batteries to produce a new generation of highly efficient, electrical energy storage. For example, they are developing improved materials for the anodes, cathodes, and electrolytes in batteries. Scientists also study these processes because in rechargeable batteries they do not reverse completely as the battery is charged and discharged. Over time, this incomplete reversal can change the chemistry and structure of battery materials, which can reduce battery performance and safety.

Electrical Energy Storage Facts

  • The 2019 Nobel Prize in Chemistry was awarded jointly to John B. Goodenough, M. Stanley Whittingham, and Akira Yoshino “for the development of lithium-ion batteries.”
  • The Electrolyte Genome at JCESR has produced a computational database with more than 26,000 molecules that can be used to calculate key electrolyte properties for new, advanced batteries.

DOE Office of Science & Electrical Energy Storage

Research supported by the DOE Office of Science, Office of Basic Energy Sciences (BES) has yielded significant improvements in electrical energy storage. But we are still far from comprehensive solutions for next-generation energy storage using brand-new materials that can dramatically improve how much energy a battery can store. This storage is critical to integrating renewable energy sources into our electricity supply. Because improving battery technology is essential to the widespread use of plug-in electric vehicles, storage is also key to reducing our dependency on petroleum for transportation.

BES supports research by individual scientists and at multi-disciplinary centers. The largest center is the Joint Center for Energy Storage Research (JCESR), a DOE Energy Innovation Hub. This center studies electrochemical materials and phenomena at the atomic and molecular scale and uses computers to help design new materials. This new knowledge will enable scientists to design energy storage that is safer, lasts longer, charges faster, and has greater capacity. As scientists supported by the BES program achieve new advances in battery science, these advances are used by applied researchers and industry to advance applications in transportation, the electricity grid, communication, and security.

Provided by U.S. DEPARTMENT OF ENERGY

Source: SciTechDaily

Exploring Comet Thermal History: Observing a Burnt-Out Comet Covered With Talcum Powder

Conceptual image. By observing a comet in thermal infrared wavelengths, the same wavelengths used by noncontact thermometers, it is possible to determine not only its current temperature, but also the surface composition of the nucleus, which contains information about the thermal history of the comet. Credit: Kyoto Sangyo University
Observing a Comet in Thermal Infrared Wavelengths

The world’s first ground-based observations of the bare nucleus of a comet nearing the end of its active life revealed that the nucleus has a diameter of 800 meters and is covered with large grains of phyllosilicate; on Earth large grains of phyllosilicate are commonly available as talcum powder. This discovery provides clues to piece together the history of how this comet evolved into its current burnt-out state.

Comet nuclei are difficult to observe because when they enter the inner Solar System, where they are easy to observe from Earth, they heat up and release gas and dust, which form a coma that obscures the nucleus. When Comet P/2016 BA14 (PANSTARRS) was discovered in January 2016, it was first mistaken for an asteroid, but subsequent observations revealed weak cometary activity. It is believed that after many trips through the inner Solar System, this comet has burnt off almost all of its ice and is now nearing the end of its cometary life.

On March 22, 2016, this comet passed within 3.6 million kilometers of Earth, only about nine times the distance to the Moon. A team of astronomers from the National Astronomical Observatory of Japan (NAOJ) and the Koyama Astronomical Observatory of Kyoto Sangyo University used this unique opportunity to observe the comet with the Subaru Telescope about 30 hours before its closest approach to Earth. They successfully observed the nucleus with minimal interference from dust grains in the coma. Previously, the surface composition of a cometary nucleus had only been observed in situ by a few space missions.

Because the team observed thermal infrared radiation, the same region of the infrared used by contactless thermometers, they were able to find evidence that the nucleus is 800 meters in diameter and covered with organic molecules and large grains of phyllosilicate. This is the first time hydrous silicate minerals such as talc have been found in a comet. Comparison with laboratory measurements of various minerals revealed that the hydrous silicate minerals on the surface of P/2016 BA14 have been heated to more than about 330 degrees Celsius in the past. Since the surface temperature of P/2016 BA14 cannot reach higher than about 130 degrees Celsius in its current orbit, the comet may have been in an orbit closer to the Sun in the past.
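
The step from a past surface temperature above about 330 degrees Celsius to a past orbit closer to the Sun follows from how strongly sunlight heats an airless body at a given distance. The rough calculation below is an illustration only, not taken from the paper; the albedo and emissivity are assumed values for a dark cometary surface.

```python
# Back-of-the-envelope illustration (not from the paper): the subsolar equilibrium
# temperature of an airless body scales as 1/sqrt(heliocentric distance), which is
# why a past temperature above ~330 C points to a past orbit closer to the Sun.
SIGMA = 5.670e-8      # Stefan-Boltzmann constant, W m^-2 K^-4
S_1AU = 1361.0        # solar constant at 1 au, W m^-2
ALBEDO = 0.03         # assumed Bond albedo of a dark nucleus
EMISSIVITY = 0.95     # assumed thermal emissivity

def subsolar_temp_c(d_au):
    """Subsolar blackbody equilibrium temperature (deg C) at heliocentric distance d_au."""
    t_kelvin = ((1 - ALBEDO) * S_1AU / (EMISSIVITY * SIGMA * d_au ** 2)) ** 0.25
    return t_kelvin - 273.15

print(round(subsolar_temp_c(1.0)))    # ~123 C near 1 au, consistent with the ~130 C quoted above

# Heliocentric distance at which the subsolar temperature would reach ~330 C:
target_k = 330.0 + 273.15
d_needed_au = ((1 - ALBEDO) * S_1AU / (EMISSIVITY * SIGMA)) ** 0.5 / target_k ** 2
print(round(d_needed_au, 2))          # ~0.43 au, i.e. well inside Earth's orbit
```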

The next question is whether comets are covered with talcum powder from the start or if it develops over time as they burn out. “This result provides us a precious clue to study the evolution of comets,” comments Dr. Takafumi Ootsubo, the lead author of this research. “We believe that further observations of the comet nuclei will enable us to learn more about the evolution of comets.”

The target of this research, P/2016 BA14, is a potential backup target for the Comet Interceptor mission, a comet exploration mission being carried out jointly by ESA and JAXA.

Reference: “Mid-infrared observations of the nucleus of Comet P/2016 BA14 (PANSTARRS)” by Takafumi Ootsubo, Hideyo Kawakita and Yoshiharu Shinnaka, 15 March 2021, Icarus.
DOI: 10.1016/j.icarus.2021.114425

Source: SciTechDaily

COVID Search and Destroy: Scientists Design “Nanotraps” to Catch and Clear Coronavirus From Tissue

A scanning electron microscope image of a nanotrap (orange) binding a simulated SARS-CoV-2 virus (dots in green). Scientists at the University of Chicago created these nanoparticles as a potential treatment for COVID-19. Credit: Image courtesy Chen and Rosenberg et al.
Nanotrap Binding a Simulated SARS-CoV-2 Virus

Potential COVID-19 treatment pairs nanoparticles with immune system to search and destroy viruses.

Researchers at the University of Chicago have designed a completely novel potential treatment for COVID-19: nanoparticles that capture SARS-CoV-2 viruses within the body and then use the body’s own immune system to destroy them.

These “nanotraps” attract the virus by mimicking the target cells the virus infects. When the virus binds to the nanotraps, the traps then sequester the virus from other cells and target it for destruction by the immune system.

In theory, these nanotraps could also be used on variants of the virus, leading to a potential new way to inhibit the virus going forward. Though the therapy remains in early stages of testing, the researchers envision it could be administered via a nasal spray as a treatment for COVID-19.

The results were published recently in the journal Matter.

“Since the pandemic began, our research team has been developing this new way to treat COVID-19,” said Asst. Prof. Jun Huang of the Pritzker School of Molecular Engineering, whose lab led the research. “We have done rigorous testing to prove that these nanotraps work, and we are excited about their potential.”

Designing the perfect trap

To design the nanotrap, the research team—led by postdoctoral scholar Min Chen and graduate student Jill Rosenberg—looked into the mechanism SARS-CoV-2 uses to bind to cells: a spike-like protein on its surface that binds to a human cell’s ACE2 receptor protein. 

To create a trap that would bind to the virus in the same way, they designed nanoparticles with a high density of ACE2 proteins on their surface. Similarly, they designed other nanoparticles with neutralizing antibodies on their surfaces. (These antibodies are created inside the body when someone is infected and are designed to latch onto the coronavirus in various ways.)

An artist’s concept of the nanotrap in action. The nanotrap is shown with a yellow core, green phospholipid shell, and red functionalized particles to bind the virus (shown in gray, decorated with their infamous spike protein in green). Credit: Image courtesy Chen and Rosenberg et al.

Made of FDA-approved polymers and phospholipids, the nanoparticles are about 500 nanometers in diameter—much smaller than a cell. That means the nanotraps can reach more areas inside the body and more effectively trap the virus. 

Then, to make sure the tiny particles looked the way they expected, they partnered with the lab of Assoc. Prof. Bozhi Tian to use electron microscopes to get a good look. “From our imaging, we saw a solid core and a lipid bilayer shell. That’s the essential part because it mimics the cell,” said Tian, who is appointed in the Department of Chemistry.

The researchers tested the safety of the system in a mouse model and found no toxicity. They then tested the nanotraps against a pseudovirus—a less potent model of a virus that doesn’t replicate—in human lung cells in tissue culture plates and found that they completely blocked entry into the cells. 

Once the pseudovirus bound itself to the nanoparticle—which in tests took about 10 minutes after injection—the nanoparticles used a molecule that calls the body’s macrophages to engulf and degrade the nanotrap. Macrophages will generally eat nanoparticles within the body, but the nanotrap molecule speeds up the process. The nanoparticles were cleared and degraded within 48 hours.

The researchers also tested the nanoparticles with a pseudovirus in an ex vivo lung perfusion system—a pair of donated lungs that is kept alive with a ventilator—and found that they completely blocked infection in the lungs.

They also collaborated with researchers at Argonne National Laboratory to test the nanotraps with a live virus (rather than a pseudovirus) in an in vitro system. They found that their system inhibited the virus 10 times better than neutralizing antibodies or soluble ACE2 alone.

A potential future treatment for COVID-19 and beyond

Next the researchers hope to further test the system, including more tests with a live virus and on the many virus variants. 

“That’s what is so powerful about this nanotrap,” Rosenberg said. “It’s easily modulated. We can switch out different antibodies or proteins or target different immune cells, based on what we need with new variants.”

The nanotraps can be stored in a standard freezer and could ultimately be given via an intranasal spray, which would place them directly in the respiratory system and make them most effective.

The researchers say the nanotraps could also serve as a vaccine if the formulation is optimized.

“This nanomaterial engineering approach provides a versatile platform to clear viruses, and paves the way for designing next-generation vaccines and therapeutics,” said co-author and graduate student Jiuyun Shi.

“This is the starting point,” Huang said. “We want to do something to help the world.”

The research involved collaborators across departments, including chemistry, biology, and medicine. Other authors on the paper include Xiaolei Cai, Andy Chao Hsuan Lee, Jiuyun Shi, Mindy Nguyen, Thirushan Wignakumar, Vikranth Mirle, Arianna Joy Edobor, John Fung, Jessica Scott Donington, Kumaran Shanmugarajah, Yiliang Lin, Eugene Chang, Glenn Randall, Pablo Penaloza-MacMaster, Bozhi Tian and Maria Lucia Madariaga. 

Reference: “Nanotraps for the containment and clearance of SARS-CoV-2” by Min Chen, Jillian Rosenberg, Xiaolei Cai, Andy Chao Hsuan Lee, Jiuyun Shi, Mindy Nguyen, Thirushan Wignakumar, Vikranth Mirle, Arianna Joy Edobor, John Fung, Jessica Scott Donington, Kumaran Shanmugarajah, Yiliang Lin, Eugene Chang, Glenn Randall, Pablo Penaloza-MacMaster, Bozhi Tian, Maria Lucia Madariaga and Jun Huang, 19 April 2021, Matter.
DOI: 10.1016/j.matt.2021.04.005

Funding: National Institutes of Health, National Science Foundation, NIDDK.

Provided by UNIVERSITY OF CHICAGO

Source: SciTechDaily

Nanotechnology Breakthrough: A Material-Keyboard Made of Graphene

The material keyboard realized by the ETH Zurich researchers. By applying electric voltages (“keys”) at different points, the magic-angle graphene can become locally superconducting (electron pairs) or insulating (barrier on the right). Credit: ETH Zurich / F. de Vries
Material Keyboard

Researchers at ETH Zurich have succeeded in turning specially prepared graphene flakes either into insulators or into superconductors by applying an electric voltage. This technique even works locally, meaning that in the same graphene flake regions with completely different physical properties can be realized side by side.

The production of modern electronic components requires materials with very diverse properties. There are insulators, for instance, which do not conduct electric current, and superconductors, which transport it without any losses. To obtain a particular functionality of a component, one usually has to join several such materials together. Often that is not easy, in particular when dealing with nanostructures that are in widespread use today.

A team of researchers at ETH Zurich led by Klaus Ensslin and Thomas Ihn at the Laboratory for Solid State Physics have now succeeded in making a material behave alternately as an insulator or as a superconductor – or even as both at different locations in the same material – by simply applying an electric voltage. Their results have been published in the scientific journal Nature Nanotechnology. The work was supported by the National Centre of Competence in Research QSIT (Quantum Science and Technology).

Graphene with a magic angle

The material Ensslin and his co-workers use bears the somewhat cumbersome name “Magic Angle Twisted Bilayer Graphene.” In actual fact, this name hides something rather simple and well-known, namely carbon – albeit in a particular form and with a special twist. The starting point for the material is graphene flakes, which are carbon layers that are only one atom thick. The researchers put two of those layers on top of each other in such a way that their crystal axes are not parallel, but rather make a “magic angle” of exactly 1.06 degrees. “That’s pretty tricky, and we also need to accurately control the temperature of the flakes during production. As a result, it often goes wrong,” explains Peter Rickhaus, who was involved in the experiments as a postdoc.

In twenty percent of the attempts, however, it works, and the atomic crystal lattices of the graphene flakes then create a so-called moiré pattern in which the electrons of the material behave differently than in ordinary graphene. Moiré patterns are familiar from television, for instance, where the interplay between a patterned garment and the scanning lines of the television image can lead to interesting optical effects. On top of the magic angle graphene flakes, the researchers attach several additional electrodes which they can use to apply an electric voltage to the material. When they then cool everything down to a few hundredths of a degree above absolute zero, something remarkable happens. Depending on the applied voltage, the graphene flakes behave in two completely opposite ways: either as a superconductor or as an insulator. This switchable superconductivity was already demonstrated in 2018 at the Massachusetts Institute of Technology (MIT) in the USA. Even today only a few groups worldwide are able to produce such samples.

Electron microscope image of the Josephson junction (false colours). Using the electrodes (bright and dark gold) as piano keys, an insulating layer only 100 nanometres thick can be created between the two superconducting regions. Credit: ETH Zurich / F. de Vries

Insulator and superconductor in the same material

Ensslin and his colleagues are now going one step further. By applying different voltages to the individual electrodes they turn the magic angle graphene into an insulator in one spot, but a few hundred nanometres to one side it becomes a superconductor.

“When we saw that, we obviously first tried to realize a Josephson junction,” says Fokko de Vries, who is also a postdoc in Ensslin’s laboratory. In such junctions, two superconductors are separated by a wafer-thin insulating layer. In this way, current cannot flow directly between the two superconductors but rather has to tunnel quantum mechanically through the insulator. That, in turn, causes the conductivity of the contact to vary as a function of the current in a characteristic fashion, depending on whether direct or alternating current is used.
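
The characteristic behavior mentioned here is captured by the standard textbook Josephson relations (general results, not specific to this device):

$$I = I_c \sin\varphi, \qquad \frac{d\varphi}{dt} = \frac{2eV}{\hbar}$$

Here $I_c$ is the critical current of the junction, $\varphi$ the phase difference between the two superconducting regions, $V$ the voltage across the insulating barrier, $e$ the elementary charge and $\hbar$ the reduced Planck constant. A direct current below $I_c$ can cross the barrier with no voltage drop, while a finite voltage makes the phase wind and the current oscillate, which is why the junction responds differently to direct and alternating currents.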

Possible applications in quantum technologies

The ETH researchers managed to produce a Josephson junction inside the graphene flakes twisted by the magic angle by using different voltages applied to the three electrodes, and also to measure its properties. “Now that that’s worked as well, we can try our hands at more complex devices such as SQUIDs,” says de Vries. In SQUIDs (superconducting quantum interference devices), two Josephson junctions are connected to form a ring. Practical applications of such devices include measurements of tiny magnetic fields, but also modern technologies such as quantum computers. For possible uses in quantum computers, an interesting aspect is that with the help of the electrodes the graphene flakes can be turned not just into insulators and superconductors, but also into magnets or so-called topological insulators, in which current can only flow in one direction along the edge of the material. This could be exploited to realize different kinds of quantum bits (qubits) in a single device.

A keyboard for materials

“So far, however, that’s just speculation,” Ensslin says. Still, he is enthusiastic about the possibilities that arise from the electrical control even now. “With the electrodes, we can practically play the piano on the graphene.” Amongst other things, the physicists hope that this will help them to gain new insights into the detailed mechanisms that bring about superconductivity in magic angle graphene.

Reference: “Gate-defined Josephson junctions in magic-angle twisted bilayer graphene” by Folkert K. de Vries, Elías Portolés, Giulia Zheng, Takashi Taniguchi, Kenji Watanabe, Thomas Ihn, Klaus Ensslin and Peter Rickhaus, 3 May 2021, Nature Nanotechnology.
DOI: 10.1038/s41565-021-00896-2

Provided by ETH ZURICH

Source: SciTechDaily

Study reveals new mechanism of lung tissue regeneration


New research performed in mouse models at Penn Medicine shows, mechanistically, how the infant lung regenerates cells after injury differently than the adult lung: alveolar type 1 (AT1) cells reprogram into alveolar type 2 (AT2) cells (two very different lung alveolar epithelial cells) to drive regeneration, rather than AT2 cells differentiating into AT1 cells, which is the most widely accepted mechanism in the adult lung. These study findings, published today in Cell Stem Cell, show that the long-held assumption that AT1 and AT2 cells behave the same way in children and in adults is untrue.

The lung alveolus is where gas is exchanged between the environment and blood. While scientists have known about two very different lung alveolar epithelial cells since the 1940s, there had not been much insight into them on a molecular level before now. Furthermore, these findings reveal the molecular pathway that allows for this transformation. The Penn researchers also showed that by turning off this pathway, they could reprogram adult AT1 cells into AT2 cells. This unveils a previously unappreciated level of age-dependent cell plasticity, which could explain, in part, why pediatric lungs are not as heavily impacted by COVID-19 as adult lungs, and is a major step forward in understanding lung regeneration as a form of lung therapy.

“COVID-19 has led to millions of people contracting a terrible and damaging respiratory infection, which causes severe lung injury. Some of these patients are likely to have long-term chronic lung issues, with some severe enough to need a lung transplant. We are hopeful that our research on how these alveolar cells respond to acute injury will provide new targets that could be leveraged for the development of future therapies to treat acute lung injury, and that one day we will know how to manipulate these cell pathways so that lung tissue can regenerate and heal itself, without the need for organ transplant,” said the study’s senior author, Edward Morrisey, Ph.D., the Robinette Foundation Professor of Cardiovascular Medicine and the Director of the Penn-CHOP Lung Biology Institute (LBI) in the Perelman School of Medicine at the University of Pennsylvania.

The researchers analyzed changes in gene expression and the epigenome in mouse AT1 and AT2 cells across the lifespan. They compared these changes to those observed after acute lung injury and found that the current paradigm of how adult lungs repair themselves did not hold true for immature or mature mouse lungs.

“Scientists have long assumed that the one-way process of cell differentiation that has been well documented in the adult lung would also hold true in the infant lung, but those assumptions were overturned. We discovered that in pediatric lungs the direction of differentiation is in reverse after injury, whereas in the adult it’s much more of a two-way street. In all of these contexts, it is controlled by a pathway called Hippo signaling,” said the study’s first author, Ian J. Penkala, a University of Pennsylvania VMD/Ph.D. student who works in the Morrisey Lab.

In the adult lung, regeneration of the lung cells is driven by the AT2 cell population expanding and differentiating into AT1 cells. The researchers also showed that after some acute lung injuries, adult AT1 cells can robustly reprogram into AT2 cells. However, in infant mice, AT2 cells do not efficiently regenerate AT1 cells after acute lung injury. Rather, AT1 cells reprogram into AT2 cells after injury, and it is these reprogrammed AT2 cells that can ultimately proliferate after injury.

Mouse lungs are somewhat similar to human lungs in that they both have AT1 and AT2 cells, increasing the likelihood that the conclusions in this study also hold true for human lungs. This research expands the set of mechanisms that could be targeted to develop future therapies for acute lung injury. Normally, lungs have the ability to repair and regenerate, as they are constantly exposed to pollution and microbes from the external environment. The next phase in this research would be to determine whether harnessing the Hippo pathway can help promote the lung’s natural ability to regenerate after injury.

“What this discovery provides is insight into a cell pathway that we can manipulate, possibly in the future with pharmaceutical therapies. This helps us build a map of how lung cells respond, and could have major implications down the line on how we care for patients with chronic lung disease,” Morrisey said.

More information: Cell Stem Cell (2021). DOI: 10.1016/j.stem.2021.04.026

Provided by Perelman School of Medicine at the University of Pennsylvania

Source: Medical Xpress

Diagramming the brain with colorful connections

In this field of brain cells, each color is a unique neuron stained for a particular snippet of RNA, or ‘barcode.’ This technique, called BARseq2, allows scientists to study thousands of cells at a time with their natural connections intact. Credit: Chen and Sun/Zador lab, CSHL/2021

There are billions of neurons in the human brain, and scientists want to know how they are connected. Anthony Zador, the Alle Davis and Maxine Harrison Professor of Neurosciences at Cold Spring Harbor Laboratory (CSHL), and colleagues Xiaoyin Chen and Yu-Chi Sun published a new technique in Nature Neuroscience for figuring out connections using genetic tags. Their technique, called BARseq2, labels brain cells with short RNA sequences called “barcodes,” allowing the researchers to trace thousands of brain circuits simultaneously.

Many brain mapping tools allow neuroscientists to examine a handful of individual neurons at a time, for example by injecting them with dye. Chen, a postdoc in Zador’s lab, explains how their tool, BARseq, is different:

“The idea here is that instead of labeling each neuron with a fluorescent dye, we fill it up with a unique RNA sequence, and we call this an RNA barcode. You can sequence the barcode to read out where that neuron sends its projection. You can label like tens of thousands of neurons, all at the same time, and you can still distinguish which axon comes from which neuron.”
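
The combinatorics behind this are simple: with four bases, an N-nucleotide barcode has 4^N possible sequences, so even modest lengths make accidental duplicates between neurons vanishingly unlikely. The sketch below is an illustration only; the 30-nucleotide length is an assumed value chosen for the example, not the length used in BARseq2.

```python
# Illustration only: how many distinct barcodes a short random RNA sequence allows,
# and the birthday-problem chance that two labeled neurons collide.
import math

def collision_probability(n_neurons, barcode_length_nt):
    """Probability that at least two of n_neurons draw the same random barcode."""
    n_barcodes = 4 ** barcode_length_nt          # four bases per position
    # P(no collision) = prod_{k=0}^{n-1} (1 - k / n_barcodes), accumulated in log space
    log_p_unique = sum(math.log1p(-k / n_barcodes) for k in range(n_neurons))
    return 1.0 - math.exp(log_p_unique)

print(f"{4 ** 30:.2e} possible 30-nt barcodes")      # ~1.15e18 distinct sequences
print(collision_probability(50_000, 30))             # ~1e-9: collisions are negligible
```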

The latest generation of this tool, called BARseq2, adds a step to analyze a couple dozen natural neural genes through sequencing chemistry similar to what is used to light up artificial RNA barcodes. Chen says:

“These differences in gene expression usually reflect something that these neurons are doing. For example, if a neuron expresses certain receptors, you will be able to respond to whatever those receptors receive. So compared to anyone that doesn’t have those receptors, they will respond differently to certain signals.”

BARseq2 detects dozens of genes in thousands of neurons in this mouse brain slice. Each color lights up a different set of genes. Credit: Chen and Sun/Zador lab, CSHL/2021

BARseq2 brings brain structure and function together. Sun, a former research technician in Zador’s lab and now a New York University graduate student, compares the brain to a car. To truly understand how a car works, “you look at all aspects, from the physical and electrical properties to the way each piece links together. Similarly, to understand how the brain works, you have to look at all the different aspects of each neuron, including each neuron’s location in the brain, gene expression, and connection to other neurons.”

This project is part of a multi-institutional effort, the NIH-funded BRAIN Initiative Cell Census Network, to compile an atlas of every cell in human, mouse, and non-human primate brains. This work will allow scientists to tease apart how we produce complex behaviors and give researchers new tools to treat brain diseases.

Provided by Cold Spring Harbor Laboratory

Source: Medical Xpress

In the emptiness of space, Voyager 1 detects plasma ‘hum’


Voyager 1—one of two sibling NASA spacecraft launched 44 years ago and now the most distant human-made object in space—still works and zooms toward infinity.

The craft has long since zipped past the edge of the solar system through the heliopause—the solar system’s border with interstellar space—into the interstellar medium. Now, its instruments have detected the constant drone of interstellar gas (plasma waves), according to Cornell University-led research published in Nature Astronomy.

Examining data slowly sent back from more than 14 billion miles away, Stella Koch Ocker, a Cornell doctoral student in astronomy, has uncovered the emission. “It’s very faint and monotone, because it is in a narrow frequency bandwidth,” Ocker said. “We’re detecting the faint, persistent hum of interstellar gas.”

This work allows scientists to understand how the interstellar medium interacts with the solar wind, Ocker said, and how the protective bubble of the solar system's heliosphere is shaped and modified by the interstellar environment.

Launched in September 1977, the Voyager 1 spacecraft flew by Jupiter in 1979 and then Saturn in late 1980. Travelling at about 38,000 mph, Voyager 1 crossed the heliopause in August 2012.

After entering interstellar space, the spacecraft’s Plasma Wave System detected perturbations in the gas. But, in between those eruptions—caused by our own roiling sun—researchers have uncovered a steady, persistent signature produced by the tenuous near-vacuum of space.

“The interstellar medium is like a quiet or gentle rain,” said senior author James Cordes, the George Feldstein Professor of Astronomy. “In the case of a solar outburst, it’s like detecting a lightning burst in a thunderstorm and then it’s back to a gentle rain.”

Ocker believes there is more low-level activity in the interstellar gas than scientists had previously thought, which allows researchers to track the spatial distribution of plasma—that is, when it’s not being perturbed by solar flares.

Cornell research scientist Shami Chatterjee explained how continuous tracking of the density of interstellar space is important. “We’ve never had a chance to evaluate it. Now we know we don’t need a fortuitous event related to the sun to measure interstellar plasma,” Chatterjee said. “Regardless of what the sun is doing, Voyager is sending back detail. The craft is saying, ‘Here’s the density I’m swimming through right now. And here it is now. And here it is now. And here it is now.’ Voyager is quite distant and will be doing this continuously.”

Voyager 1 left Earth carrying a Golden Record created by a committee chaired by the late Cornell professor Carl Sagan, as well as mid-1970s technology. To send a signal to Earth, it took 22 watts, according to NASA’s Jet Propulsion Laboratory. The craft has almost 70 kilobytes of computer memory and—at the beginning of the mission—a data rate of 21 kilobits per second.

Due to the 14-billion-mile distance, the communication rate has since slowed to 160 bits per second, or about half of a 300-baud rate.
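
A quick back-of-the-envelope check, using only the figures quoted in this article, shows what those numbers mean in practice:

```python
# Back-of-the-envelope check of the figures quoted above (simple arithmetic, not from the paper).
MILES_PER_KM = 0.621371
distance_km = 14e9 / MILES_PER_KM            # "more than 14 billion miles" -> ~22.5 billion km
light_speed_km_s = 299_792.458

one_way_hours = distance_km / light_speed_km_s / 3600
print(f"one-way light time: {one_way_hours:.1f} hours")        # roughly 21 hours each way

data_rate_bps = 160                           # current downlink rate quoted above
hours_per_megabyte = 8e6 / data_rate_bps / 3600
print(f"~{hours_per_megabyte:.0f} hours to return a single megabyte")   # ~14 hours
```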

More information: “Persistent plasma waves in interstellar space detected by Voyager 1,” Nature Astronomy (2021). DOI: 10.1038/s41550-021-01363-7

Provided by Cornell University

Source: Phys.org

Are We on the Brink of a New Age of Scientific Discovery?

Muon g-2 Experiment at Fermilab
The centerpiece of the Muon g-2 experiment at Fermilab is a 50-foot-diameter superconducting magnetic storage ring, which sits in its detector hall amidst electronics racks, the muon beamline and other equipment. Credit: Reidar Hahn, Fermilab

In 2001 at the Brookhaven National Laboratory in Upton, New York, a facility used for research in nuclear and high-energy physics, scientists experimenting with a subatomic particle called a muon encountered something unexpected.

To explain the fundamental physical forces at work in the universe and to predict the results of high-energy particle experiments like those conducted at Brookhaven, Fermilab in Illinois, and CERN’s Large Hadron Collider in Geneva, Switzerland, physicists rely on the decades-old theory called the Standard Model. Among other things, it should predict the precise behavior of muons when they are fired through an intense magnetic field created in a superconducting magnetic storage ring. When the muons in the Brookhaven experiment behaved in a way that differed from these predictions, researchers realized they were on the brink of a discovery that could change science’s understanding of how the universe works.

Earlier this month, after a decades-long effort that involved building more powerful sensors and improving researchers’ capacity to process 120 terabytes of data (the equivalent of 16 million digital photographs every week), a team of scientists at Fermilab announced the first results of an experiment called Muon g-2, which suggest the Brookhaven find was no fluke and that science is on the brink of an unprecedented discovery.

UVA physics professor Dinko Počanić has been involved in the Muon g-2 experiment for the better part of two decades, and UVA Today spoke with him to learn more about what it means.

Q. What are the findings of the Brookhaven and Fermilab Muon g-2 experiments, and why are they important?

A. So, in the Brookhaven experiment, they did several measurements with positive and negative muons – an unstable, more massive cousin of the electron – under different circumstances, and when they averaged their measurements, they quantified a magnetic anomaly that is characteristic of the muon more precisely than ever before. According to relativistic quantum mechanics, the strength of the muon’s magnetic moment (a property it shares with a compass needle or a bar magnet) should be two in appropriate dimensionless units, the same as for an electron. The Standard Model states, however, that it’s not two, it’s a little bit bigger, and that difference is the magnetic anomaly. The anomaly reflects the coupling of the muon to pretty much all other particles that exist in nature. How is this possible?

The answer is that space itself is not empty; what we think of as a vacuum contains the possibility of the creation of elementary particles, given enough energy. In fact, these potential particles are impatient and are virtually excited, sparking in space for unimaginably short moments in time. And as fleeting as it is, this sparking is “sensed” by a muon, and it subtly affects the muon’s properties. Thus, the muon magnetic anomaly provides a sensitive probe of the subatomic contents of the vacuum.
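
In the standard notation of the field (not spelled out in the interview), the magnetic anomaly being measured is

$$a_\mu = \frac{g_\mu - 2}{2}$$

where $g_\mu$ is the muon's gyromagnetic factor, exactly 2 for a pointlike particle obeying the Dirac equation. The largest correction comes from quantum electrodynamics, Schwinger's term $\alpha/2\pi \approx 0.00116$, and the vacuum contributions described above add further, much smaller shifts. The experiment looks for deviations from the full Standard Model total in the last measured digits.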

To the enormous frustration of all the practicing physicists of my generation and younger, the Standard Model has been maddeningly impervious to challenges. We know there are things that must exist outside of it because it cannot describe everything that we know about the universe and its evolution. For example, it does not explain the prevalence of matter over antimatter in the universe, and it doesn’t say anything about dark matter or many other things, so we know it’s incomplete. And we’ve tried very hard to understand what these things might be, but we haven’t found anything concrete yet.

So, with this experiment, we’re challenging the Standard Model with increasing levels of precision. If the Standard Model is correct, we should observe an effect that is completely consistent with the model because it includes all the possible particles that are thought to be present in nature, but if we see a different value for this magnetic anomaly, it signifies that there’s actually something else. And that’s what we’re looking for: this something else.

This experiment tells us that we’re on the verge of a discovery.

Q. What part have you been able to play in the experiment?

A. I became a member of this collaboration when we had just started planning for the follow-up to the Brookhaven experiment around 2005, just a couple of years after the Brookhaven experiment finished, and we were looking at the possibility of doing a more precise measurement at Brookhaven. Eventually that idea was abandoned, as it turned out that we could do a much better job at Fermilab, which had better beams, more intense muons and better conditions for the experiment.

So, we proposed that around 2010, and it was approved and funded by U.S. and international funding agencies. An important part was funded by a National Science Foundation Major Research Instrumentation grant that was awarded to a consortium of four universities, and UVA was one of them. We were developing a portion of the instrumentation for the detection of positrons that emerge in decays of positive muons. We finished that work, and it was successful, so my group switched focus to the precise measurements of the magnetic field in the storage ring at Fermilab, a critical part of quantifying the muon magnetic anomaly. My UVA faculty colleague Stefan Baessler has also been working on this problem, and several UVA students and postdocs have been active on the project over the years.

Q. Fermilab has announced that these are just the first results of the experiment. What still needs to happen before we’ll know what this discovery means?

A. It depends on how the results of our analysis of the yet-unanalyzed run segments turn out. The analysis of the first run took about three years. The run was completed in 2018, but I think now that we’ve ironed out some of the issues in the analysis, it might go a bit faster. So, in about two years it would not be unreasonable to have the next result, which would be quite a bit more precise because it combines runs two and three. Then there will be another run, and we will probably finish taking data in another two years or so. The precise end of measurements is still somewhat uncertain, but I would say that about five years from now, maybe sooner, we should have a very clear picture.

Q. What kind of impact could these experiments have on our everyday lives?

A. One way is in pushing specific technologies to the extreme in solving different aspects of measurement to get the level of precision we need. The impact would likely come in fields like physics, industry and medicine. There will be technical spinoffs, or at least improvements in techniques, but which specific ones will come out of this, it’s difficult to predict. Usually, we push companies to make products that we need that they wouldn’t otherwise make, and then a new field opens up for them in terms of applications for those products, and that’s what often happens. The World Wide Web was invented, for example, because researchers like us needed to be able to exchange information in an efficient way across great distances, around the world, really, and that’s how we have, well, web browsers, Zoom, Amazon and all these types of things today.

The other way we benefit is by educating young scientists – some of whom will continue in the scientific and academic careers like myself – but others will go on to different fields of endeavor in society. They will bring with them an expertise in very high-level techniques of measurement and analysis that aren’t normally found in many fields.

And then, finally, another outcome is intellectual betterment. One outcome of this work will be to help us better understand the universe we live in.

Q. Could we see more discoveries like this in the near future?

A. Yes, there is a whole class of experiments besides this one that look at highly precise tests of the Standard Model in a number of ways. I’m always reminded of the old adage that if you lose your keys in the street late at night, you are first going to look for them under the street lamp, and that’s what we’re doing. So everywhere there’s a streetlight, we’re looking. This is one of those places – and there are several others, well, I would say dozens of others, if you also include searches that are going on for subatomic particles like axions, dark matter candidates, exotic processes like double beta decay, and those kinds of things. One of these days, new things will be found.

We know that the Standard Model is incomplete. It’s not wrong, insofar as it goes, but there are things outside of it that it does not incorporate, and we will find them.

Provided by UNIVERSITY OF VIRGINIA

Source: SciTechDaily

AI “Magic” Just Removed One of the Biggest Roadblocks in Astrophysics

Simulations of a region of space 100 million light-years square. The leftmost simulation ran at low resolution. Using machine learning, researchers upscaled the low-res model to create a high-resolution simulation (right). That simulation captures the same details as a conventional high-res model (middle) while requiring significantly fewer computational resources. Credit: Y. Li et al./Proceedings of the National Academy of Sciences 2021
Astrophysics Machine Learning Simulation Snapshots

Using neural networks, Flatiron Institute research fellow Yin Li and his colleagues simulated vast, complex universes in a fraction of the time it takes with conventional methods.

Using a bit of machine learning magic, astrophysicists can now simulate vast, complex universes in a thousandth of the time it takes with conventional methods. The new approach will help usher in a new era in high-resolution cosmological simulations, its creators report in a study published online on May 4, 2021, in Proceedings of the National Academy of Sciences.

“At the moment, constraints on computation time usually mean we cannot simulate the universe at both high resolution and large volume,” says study lead author Yin Li, an astrophysicist at the Flatiron Institute in New York City. “With our new technique, it’s possible to have both efficiently. In the future, these AI-based methods will become the norm for certain applications.”

The new method developed by Li and his colleagues feeds a machine learning algorithm with models of a small region of space at both low and high resolutions. The algorithm learns how to upscale the low-res models to match the detail found in the high-res versions. Once trained, the code can take full-scale low-res models and generate ‘super-resolution’ simulations containing up to 512 times as many particles.

The process is akin to taking a blurry photograph and adding the missing details back in, making it sharp and clear.

This upscaling brings significant time savings. For a region in the universe roughly 500 million light-years across containing 134 million particles, existing methods would require 560 hours to churn out a high-res simulation using a single processing core. With the new approach, the researchers need only 36 minutes.
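
A one-line check, using only the two figures just quoted, shows that this is what the "thousandth of the time" claim refers to:

```python
# Speed-up implied by the timings quoted above (simple arithmetic, not a figure from the paper).
conventional_hours = 560.0
new_method_hours = 36.0 / 60.0
print(conventional_hours / new_method_hours)   # ~930x faster, i.e. roughly a thousandth of the time
```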

The results were even more dramatic when more particles were added to the simulation. For a universe 1,000 times as large with 134 billion particles, the researchers’ new method took 16 hours on a single graphics processing unit. Existing methods would take so long that they wouldn’t even be worth running without dedicated supercomputing resources, Li says.

Li is a joint research fellow at the Flatiron Institute’s Center for Computational Astrophysics and the Center for Computational Mathematics. He co-authored the study with Yueying Ni, Rupert Croft and Tiziana Di Matteo of Carnegie Mellon University; Simeon Bird of the University of California, Riverside; and Yu Feng of the University of California, Berkeley.

Cosmological simulations are indispensable for astrophysics. Scientists use the simulations to predict how the universe would look in various scenarios, such as if the dark energy pulling the universe apart varied over time. Telescope observations may then confirm whether the simulations’ predictions match reality. Creating testable predictions requires running simulations thousands of times, so faster modeling would be a big boon for the field.

Reducing the time it takes to run cosmological simulations “holds the potential of providing major advances in numerical cosmology and astrophysics,” says Di Matteo. “Cosmological simulations follow the history and fate of the universe, all the way to the formation of all galaxies and their black holes.”

So far, the new simulations only consider dark matter and the force of gravity. While this may seem like an oversimplification, gravity is by far the universe’s dominant force at large scales, and dark matter makes up 85 percent of all the ‘stuff’ in the cosmos. The particles in the simulation aren’t literal dark matter particles but are instead used as trackers to show how bits of dark matter move through the universe.

The team’s code used neural networks to predict how gravity would move dark matter around over time. Such networks ingest training data and run calculations using the information. The results are then compared to the expected outcome. With further training, the networks adapt and become more accurate.

The specific approach used by the researchers, called a generative adversarial network, pits two neural networks against each other. One network takes low-resolution simulations of the universe and uses them to generate high-resolution models. The other network tries to tell those simulations apart from ones made by conventional methods. Over time, both neural networks get better and better until, ultimately, the simulation generator wins out and creates fast simulations that look just like the slow conventional ones.
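
In outline, the adversarial setup described above alternates between updating the two networks. The skeleton below is a heavily simplified, hypothetical PyTorch sketch of that loop; the real architectures, losses, and data pipeline of the published code are not reproduced here.

```python
# Minimal sketch of an adversarial (GAN) training step for super-resolution (PyTorch).
# Network architectures and data are placeholders, not the authors' code.
import torch
import torch.nn as nn

generator = nn.Sequential(          # upscales a low-res particle field (placeholder architecture)
    nn.Conv3d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Upsample(scale_factor=2),    # the real model upsamples far more aggressively
    nn.Conv3d(16, 1, 3, padding=1),
)
discriminator = nn.Sequential(      # scores whether a high-res field looks "real" (placeholder)
    nn.Conv3d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
    nn.Flatten(), nn.LazyLinear(1),
)

bce = nn.BCEWithLogitsLoss()
opt_g = torch.optim.Adam(generator.parameters(), lr=1e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-4)

def train_step(low_res, high_res):
    """One adversarial step: D learns to separate real from generated, G learns to fool D."""
    fake = generator(low_res)

    # Discriminator update: real fields should score 1, generated fields 0.
    opt_d.zero_grad()
    d_loss = bce(discriminator(high_res), torch.ones(high_res.size(0), 1)) + \
             bce(discriminator(fake.detach()), torch.zeros(low_res.size(0), 1))
    d_loss.backward()
    opt_d.step()

    # Generator update: try to make the discriminator score generated fields as 1.
    opt_g.zero_grad()
    g_loss = bce(discriminator(fake), torch.ones(low_res.size(0), 1))
    g_loss.backward()
    opt_g.step()
    return d_loss.item(), g_loss.item()

# Usage (random tensors stand in for matched low/high-res simulation patches):
low = torch.randn(2, 1, 8, 8, 8)
high = torch.randn(2, 1, 16, 16, 16)
print(train_step(low, high))
```

Training ends when the discriminator can no longer reliably tell generated high-resolution fields from conventionally simulated ones.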

“We couldn’t get it to work for two years,” Li says, “and suddenly it started working. We got beautiful results that matched what we expected. We even did some blind tests ourselves, and most of us couldn’t tell which one was ‘real’ and which one was ‘fake.’”

Despite only being trained using small areas of space, the neural networks accurately replicated the large-scale structures that only appear in enormous simulations.

The simulations don’t capture everything, though. Because they focus only on dark matter and gravity, smaller-scale phenomena — such as star formation, supernovae and the effects of black holes — are left out. The researchers plan to extend their methods to include the forces responsible for such phenomena, and to run their neural networks ‘on the fly’ alongside conventional simulations to improve accuracy. “We don’t know exactly how to do that yet, but we’re making progress,” Li says.

Reference: “AI-assisted superresolution cosmological simulations” by Yin Li, Yueying Ni, Rupert A. C. Croft, Tiziana Di Matteo, Simeon Bird and Yu Feng, 4 May 2021, Proceedings of the National Academy of Sciences.
DOI: 10.1073/pnas.2022038118

Source: SciTechDaily

Vegetarians have healthier levels of disease markers than meat-eaters


Vegetarians appear to have a healthier biomarker profile than meat-eaters, and this applies to adults of any age and weight, and is also unaffected by smoking and alcohol consumption, according to a new study in over 166,000 UK adults, being presented at this week’s European Congress on Obesity (ECO), held online this year.

Biomarkers can signal good and bad health effects, marking processes that promote or prevent cancer, cardiovascular and age-related diseases, and other chronic conditions, and they have been widely used to assess the effect of diets on health. However, evidence of the metabolic benefits associated with being vegetarian is unclear.

To understand whether dietary choice can make a difference to the levels of disease markers in blood and urine, researchers from the University of Glasgow did a cross-sectional study analysing data from 177,723 healthy participants (aged 37-73 years) in the UK Biobank study, who reported no major changes in diet over the last five years.

Participants were categorised as either vegetarian (do not eat red meat, poultry or fish; 4,111 participants) or meat-eaters (166,516 participants) according to their self-reported diet. The researchers examined the association with 19 blood and urine biomarkers related to diabetes, cardiovascular diseases, cancer, liver, bone and joint health, and kidney function.

Even after accounting for potentially influential factors including age, sex, education, ethnicity, obesity, smoking, and alcohol intake, the analysis found that compared to meat-eaters, vegetarians had significantly lower levels of 13 biomarkers, including: total cholesterol; low-density lipoprotein (LDL) cholesterol, the so-called ‘bad’ cholesterol; apolipoprotein A and apolipoprotein B (both linked to cardiovascular disease); gamma-glutamyl transferase (GGT) and alanine aminotransferase (ALT), liver function markers indicating inflammation or damage to cells; insulin-like growth factor (IGF-1), a hormone that encourages the growth and proliferation of cancer cells; urate; total protein; and creatinine (a marker of worsening kidney function).

However, vegetarians also had lower levels of beneficial biomarkers, including high-density lipoprotein (HDL) ‘good’ cholesterol, vitamin D, and calcium (linked to bone and joint health). In addition, they had significantly higher levels of fats (triglycerides) in the blood and of cystatin-C (suggesting poorer kidney function).

No link was found for blood sugar levels (HbA1c), systolic blood pressure, aspartate aminotransferase (AST; a marker of damage to liver cells) or C-reactive protein (CRP; inflammatory marker).
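
The press release does not spell out the statistical model, but a common way to produce covariate-adjusted comparisons like the ones above is to regress each biomarker on diet group plus the confounders. The snippet below is a generic, hypothetical sketch of that approach with toy data, not the UK Biobank analysis itself.

```python
# Illustrative only: comparing a biomarker between diet groups while adjusting for covariates
# by fitting a linear regression with the covariates as additional terms (generic sketch,
# not the model actually used in the UK Biobank analysis).
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({                      # toy stand-in for participant-level data
    "ldl":        [3.8, 3.1, 4.0, 2.9, 3.5, 3.0],
    "vegetarian": [0,   1,   0,   1,   0,   1],
    "age":        [52,  47,  61,  39,  55,  44],
    "sex":        ["M", "F", "F", "M", "F", "M"],
    "bmi":        [27.0, 23.5, 29.1, 22.0, 26.3, 24.8],
})

model = smf.ols("ldl ~ vegetarian + age + C(sex) + bmi", data=df).fit()
print(model.params["vegetarian"])        # adjusted LDL difference, vegetarians vs meat-eaters
```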

“Our findings offer real food for thought”, says Dr. Carlos Celis-Morales from the University of Glasgow, UK, who led the research. “As well as not eating red and processed meat which have been linked to heart diseases and some cancers, people who follow a vegetarian diet tend to consume more vegetables, fruits, and nuts which contain more nutrients, fibre, and other potentially beneficial compounds. These nutritional differences may help explain why vegetarians appear to have lower levels of disease biomarkers that can lead to cell damage and chronic disease.”

The authors point out that although their study was large, it was observational, so no conclusions can be drawn about direct cause and effect. They also note several limitations, including that they only tested biomarker samples once for each participant, and it is possible that biomarkers might fluctuate depending on factors unrelated to diet, such as existing diseases and unmeasured lifestyle factors. They also note that they were reliant on participants to report their dietary intake using food frequency questionnaires, which is not always reliable.

More information: This story is based on poster presentation EP3-33 at the European Congress on Obesity (ECO).

Provided by European Association for the Study of Obesity

Source: Medical Xpress
