What are stablecoins? A blockchain expert explains

Magnifying glass over the Tether logo
Credit: Marco Verch/Flickr, CC0 Public Domain

Stablecoins are a type of cryptocurrency linked to an asset like the U.S. dollar that doesn’t change much in value.

The majority of the dozens of stablecoins that currently exist use the dollar as their benchmark asset, but many are also pegged to other government-issued fiat currencies, such as the euro and the yen. As a result, the price of stablecoins fluctuates very little, unlike high-profile cryptocurrencies such as bitcoin and ethereum, which are prone to sudden ups and downs.

The first stablecoin, created in 2014, was Tether, which many other stablecoins are modeled after. Users receive one token for every dollar they deposit. In theory, the tokens can then be converted back into the original currency at any time, also at a one-for-one exchange rate.

As of July 28, 2021, there were about US$62 billion in Tether outstanding, or a bit more than half of the $117 billion market capitalization of all stablecoins worldwide. The next-largest is known as USD Coin, which has a market cap of about $27 billion.

Why stablecoins matter

Originally, stablecoins were primarily used to buy other cryptocurrencies, like bitcoin, because many cryptocurrency exchanges didn’t have access to traditional banking. They are more useful than country-issued currencies because you can use them 24 hours a day, seven days a week, anywhere in the world – without relying on banks. Money transfers take seconds to complete.

Another useful feature of stablecoins is that they can work with so-called smart contracts on blockchains, which, unlike conventional contracts, require no legal authority to be executed. The code in the software automatically dictates the terms of the agreement and how and when money will be transferred. This makes stablecoins programmable in ways that dollars can’t be.
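To make “programmable” concrete, here is a minimal, hypothetical sketch in Python of the escrow-style logic a stablecoin smart contract might encode: tokens are held and released automatically when a coded condition is met, with no bank or court involved. It is a toy simulation of the idea only, not code for any real blockchain, token, or contract platform.

```python
# Toy simulation of escrow-style "programmable money": a payment is released
# automatically once a coded condition is met, with no legal intermediary.
# Illustrative only -- not real smart-contract code for any actual stablecoin.
from dataclasses import dataclass, field


@dataclass
class StablecoinLedger:
    balances: dict = field(default_factory=dict)

    def transfer(self, sender: str, recipient: str, amount: int) -> None:
        if self.balances.get(sender, 0) < amount:
            raise ValueError("insufficient balance")
        self.balances[sender] -= amount
        self.balances[recipient] = self.balances.get(recipient, 0) + amount


@dataclass
class Escrow:
    """Holds tokens and releases them when `condition_met` is flipped to True."""
    ledger: StablecoinLedger
    buyer: str
    seller: str
    amount: int
    condition_met: bool = False       # e.g. an oracle confirms goods were delivered

    def fund(self) -> None:
        self.ledger.transfer(self.buyer, "escrow", self.amount)

    def settle(self) -> None:
        # The "contract terms" are just code: pay the seller if the condition
        # was met, otherwise refund the buyer. Nothing else is possible.
        recipient = self.seller if self.condition_met else self.buyer
        self.ledger.transfer("escrow", recipient, self.amount)


ledger = StablecoinLedger({"alice": 100, "bob": 0})
deal = Escrow(ledger, buyer="alice", seller="bob", amount=40)
deal.fund()
deal.condition_met = True             # delivery confirmed
deal.settle()
print(ledger.balances)                # {'alice': 60, 'bob': 40, 'escrow': 0}
```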

Smart contracts have given rise to the use of stablecoins not only in seamless trading but also lending, payments, insurance, prediction markets and decentralized autonomous organizations – businesses that operate with limited human intervention.

Collectively, these software-based financial services are known as decentralized finance, or DeFi.

Proponents hold that moving money via stablecoins is faster, cheaper and easier to integrate into software compared with fiat currency.

Others say the lack of regulation creates big risks for the financial system. In a recent paper, economists Gary B. Gorton and Jeffery Zhang draw an analogy to the mid-19th century, when banks issued their own private currencies. They argue stablecoins could lead to the same problems seen in that era, when runs were frequent because people couldn’t agree on the value of privately issued currencies.

Worried that stablecoins could pose risks to the financial system, regulators have also taken greater interest in them recently.

Written by Stephen McKeon, Associate Professor of Finance, University of Oregon

Originally published on The Conversation

Advantages of Intranasal COVID-19 Vaccinations Over Injections

Nasal Vaccination

Of the nearly 100 SARS-CoV-2 vaccines currently undergoing clinical trials, only seven are delivered intranasally — despite this vaccine type’s long success in providing protection from influenza.

In a Perspective, Frances Lund and Troy Randall argue that intranasal vaccines could be beneficial in the continued fight against COVID-19, especially considering respiratory viruses like SARS-CoV-2 predominantly enter the nasal passage first.

Currently authorized COVID-19 vaccines are delivered via intramuscular injection, where they elicit systemic immune responses and central immune memory. While several versions are currently being administered worldwide, many more are in development. However, according to the authors, given the respiratory propensity of the virus, it is surprising that so few intranasal vaccines, which deliver their antigens directly to the site of infection, are being considered.

Here, Lund and Randall discuss the potential of intranasal COVID-19 vaccines, highlighting their advantages, drawbacks and rationale for use over intramuscular options. In addition to being needle-free, intranasal vaccines provide two additional layers of protection compared to intramuscular vaccines: vaccine-elicited immunoglobulin A (IgA) and resident memory B and T cells in the nasal passages and upper airways, which together provide a barrier to infection, impede viral replication and reduce viral shedding.

Lund and Randall note that effective vaccination strategies need not be restricted to a single delivery system and suggest that an ideal vaccination strategy may consist of an intramuscular vaccine combined with an intranasal booster.

For more on this perspective, read Scent of a Vaccine: Many Advantages to Intranasal COVID-19 Vaccination.

Reference: “Scent of a vaccine” by Frances E. Lund and Troy D. Randall, 23 July 2021, Science.
DOI: 10.1126/science.abg9857

Provided by:  AMERICAN ASSOCIATION FOR THE ADVANCEMENT OF SCIENCE (AAAS)

Source: SciTechDaily

Science Made Simple: What Is Bioenergy Research?

Abstract Bioenergy Biofuel Concept

Bioenergy research studies how to use crops and other agricultural materials to make biofuels and other bioproducts. Biomass energy would improve energy security. It would reduce the use of toxic chemicals. It would bring jobs to rural areas and improve our trade balance. To achieve these benefits, bioenergy research integrates many disciplines that include agronomy, biology, chemistry, engineering, and economics. These disciplines work together to advance research on the sustainable production, collection, and conversion of biomass.

Scientists use insights from studies of plants and microorganisms as the basis for bioenergy development. These studies are based on genomics, which studies the structure, function, evolution, and mapping of the genes in organisms. Scientists use this knowledge to develop plant species with modified traits, such as altered cell walls that make them easier to break down, making them useful as raw material for bioenergy production. Scientists can also modify the chemical reactions in a microorganism. These alterations allow microorganisms to convert compounds derived from plants into fuels and chemicals.

Bioenergy Crop Research
To improve the sustainability of crops and other agricultural material used for energy production, researchers are studying the associations of crop roots with fungi to improve the uptake of nutrients from the soil. Credit: Photo courtesy of the DOE Great Lakes Bioenergy Research Center

Bioenergy Research Facts

  • Sustainability research conducts long-term studies of bioenergy crop production systems and analyses for biomass supply.
  • Feedstock development research designs dedicated bioenergy crops and engineers plants for efficient conversion into fuels and products.
  • Plant deconstruction research covers processes that help degrade and separate biomass to facilitate conversion to bioproducts.
  • Conversion research focuses on developing new microorganisms that convert biomass materials into fuels, biomass fuels that easily integrate with existing gasoline and other conventional fuel infrastructure, and high-throughput biology tools to scale up biomass conversion.

DOE Office of Science & Bioenergy Research

DOE’s Office of Science seeks a basic understanding of plant and microbial biology to unlock Nature’s potential to produce renewable fuels and chemicals. Scientists must identify promising plant and microbial species as well as study how to promote the sustainable growth of bioenergy crops. They need to research modifying plants and microorganisms to support beneficial traits. In addition, they need to integrate these efforts to produce biofuel and bioproducts. These efforts are in progress in the DOE Bioenergy Research Centers. These four centers are working to lay the scientific groundwork for a new bio-based economy.  Their goal is to coordinate with applied researchers to help develop a range of new products and fuels derived directly from renewable, nonfood biomass.

Provided by: U.S. DEPARTMENT OF ENERGY

Source: SciTechDaily

The thinnest CD-RW: Atomic-scale data storage possible

disk
Credit: Pixabay/CC0 Public Domain

Using a focused laser beam, scientists can manipulate the properties of nanomaterials, ‘writing’ information onto monolayer materials. With this approach, they have demonstrated the thinnest light disk, at the atomic scale.

The bottleneck in atomic-scale data storage may be broken by a simple technique, thanks to recent innovative studies conducted by scientists from Nanjing Normal University (NJNU) and Southeast University (SEU).

Through a simple, efficient and low-cost technique involving a focused laser beam and ozone treatment, the NJNU and SEU research teams, led by Prof. Hongwei Liu, Prof. Junpeng Lu and Prof. Zhenhua Ni, demonstrated that the photoluminescence (PL) emission of WS2 monolayers can be controlled and modified; consequently, the material works as the thinnest light disk, with rewritable data storage and encryption capability.

“In our childhood, most of us had the experience of focusing sunlight onto a piece of paper with a magnifying glass and trying to ignite it. The scorched spot on the paper is, in a sense, a record of data. Instead of focusing sunlight, we focus a laser beam on modified atomic-level materials and study the effects of the focused beam on the PL emission of the materials,” said Prof. Lu.

Data storage and encryption: information ‘drawn’ on ozone treated WS2 films

Owing to its direct visibility, PL is usually considered an ideal technology for encrypted data storage. A straightforward and effective encrypted data-storage method should offer: (i) direct writing (fast write-in speed); (ii) a high security level; (iii) large data-storage capacity; (iv) visual decryption and read-out; and (v) erasability.

To address these technological challenges, the researchers demonstrated the thinnest light disk with encryption functionality.

Direct-write and erasable encryption are realized on WS2 monolayers. Information is written and read out by directly controlling the fluorescence contrast of the monolayers, while ozone treatment and focused laser-beam scanning are used to manipulate the PL emission on demand and realize encryption.

With this simple and low-cost approach, the scientists were able to use the focused laser beam to selectively ‘write’ information onto any region of the film, storing encrypted data. In addition, the written data are erasable, making the monolayer light disk reusable.

Interestingly, the evolution of the PL emission with different writing laser powers can be used to assign different gray levels. With 16 gray levels, a typical triangular WS2 monolayer with a side length of 60 μm can store ~1 KB of data. Owing to the high spatial resolution and power sensitivity, the storage capacity within a 1 nm thickness could reach ~62.5 MB/cm², and the writing speed can reach ~6.25 MB/s. This technology could extend optical encryption into the low-dimensional regime, offering a new information-security option for exchanging data.
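As a rough sanity check on those figures, the back-of-envelope sketch below reproduces them from two assumptions that are not stated in the article: each laser-written spot stores 4 bits (the 16 gray levels), and spots sit roughly 0.9 μm apart, on the order of a diffraction-limited visible laser spot.

```python
# Back-of-envelope check of the quoted storage figures.
# Assumptions (ours, not the article's): each laser-written spot stores
# 4 bits (16 gray levels) and spots sit on a square grid with ~0.9 um pitch,
# roughly a diffraction-limited visible laser spot.
import math

BITS_PER_SPOT = math.log2(16)          # 16 gray levels -> 4 bits per spot
SPOT_PITCH_UM = 0.9                    # assumed spot-to-spot spacing, micrometres

bits_per_um2 = BITS_PER_SPOT / SPOT_PITCH_UM**2

# Areal density over 1 cm^2 (1 cm^2 = 1e8 um^2), reported in MB/cm^2
mb_per_cm2 = bits_per_um2 * 1e8 / 8 / 1e6
print(f"areal density ~ {mb_per_cm2:.0f} MB/cm^2")    # ~62 MB/cm^2

# Capacity of an equilateral triangular monolayer with 60 um sides
area_um2 = math.sqrt(3) / 4 * 60**2
kb = bits_per_um2 * area_um2 / 8 / 1024
print(f"60 um triangle ~ {kb:.1f} KB")                 # ~1 KB
```

Under those assumed numbers, the areal density comes out near the quoted ~62.5 MB/cm² and a 60 μm triangular flake holds roughly 1 KB, consistent with the figures above.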

This innovation was first published online in the journal Advanced Functional Materials on 24 June 2021.

The fast-growing information field demands higher security and larger storage capacity. To develop light disks that meet industry standards, the research teams from NJNU and SEU will extend the versatile focused-laser-beam technique to wafer-scale monolayer materials. In addition, they will look into further improving the storage capacity of the light disk by stacking layers in the normal direction.

More information: Weiwei Zhao et al, The Thinnest Light Disk: Rewritable Data Storage and Encryption on WS2 Monolayers, Advanced Functional Materials (2021). DOI: 10.1002/adfm.202103140

Provided by: Nanjing Normal University

Source: PHYS.ORG

Harvard Researchers Discover a New Real-Life Spidey Sense

Jumping Spider Close Up

Harvard study shows jumping spiders can distinguish living objects from non-living objects based on their movement.

Add this to the list of real-life spidey senses: Harvard researchers have shown that jumping spiders are able to tell the difference between animate objects and inanimate objects — an ability previously known only in vertebrates, including humans.

Using a specialized treadmill system and a point-light display animation, the team of scientists found that these spiders are able to recognize biological motion. This type of motion refers to the visual movements that come from living organisms when they are moving. The visual cue is how people, even babies, can tell someone is another person just by the way their bodies move. Many animals can do this, too.

The ability, which is critical for survival, is evolutionarily ancient since it is so widespread across vertebrates. The study from the Harvard team is believed to be the first demonstration of biological motion recognition in an invertebrate. The findings pose crucial questions about the evolutionary history of the ability and complex visual processing in non-vertebrates.

“[It] opens the possibility that such mechanisms might be widespread across the animal kingdom and not necessarily related to sociality,” the researchers wrote in the paper, which was published in PLOS Biology on July 15, 2021.

The study was authored by a team of researchers working in the lab of Paul Shamble, a John Harvard Distinguished Science Fellow. Massimo De Agrò (previously a postdoctoral researcher in the lab and now at the Animal Comparative Economics laboratory at the University of Regensburg, Germany) led the project, along with co-authors Daniela C. Rößler (now a Zukunftskolleg Postdoctoral Fellow at the University of Konstanz and the Max Planck Institute of Animal Behavior) and Kris Kim (a researcher in the Shamble Lab).

The researchers chose jumping spiders to test biological motion cues because the animals are among the most visually adept of all arthropods: with eight eyes, vision plays a central role in a wide range of their behaviors.

They placed the jumping spiders, a species called Menemerus semilimbatus, into a forced-choice experiment. They suspended each spider above a spherical treadmill so its legs could make contact with it. The spider was held in a fixed position so only its legs could move, transferring its intended direction to the sphere, which spun freely on a constant stream of compressed air shooting up from below.

(Friendly disclaimer: No spiders were harmed during the experiment and all were freed in the same place they were captured afterward.)

Once in position, the spiders were presented with two animations as stimuli. The animations were point-light displays, each consisting of a dozen or so small lights (or points) corresponding to markers attached to the key joints of another spider whose movements had been recorded. The body itself is not visible, but the digital points give a body-plan outline and the impression of a living organism. In humans, for example, it takes only about eleven dots on the main joints of the body for observers to correctly identify the figure as another person.
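For readers unfamiliar with the format, the sketch below shows how little a point-light display actually contains: just per-frame 2D coordinates of a handful of markers, rendered as bare dots on a dark background. The marker layout and motion here are invented placeholders for illustration, not the spider kinematics recorded in the study.

```python
# Minimal sketch of a point-light display: the stimulus is just per-frame
# 2D coordinates of "joint" markers drawn as dots, with no body outline.
# The coordinates below are invented placeholders, not real spider kinematics.
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.animation import FuncAnimation

n_points, n_frames = 12, 60                      # ~a dozen markers, as in the article
rng = np.random.default_rng(0)
base = rng.uniform(-1, 1, size=(n_points, 2))    # resting marker layout (placeholder)

def frame_coords(t):
    # A small coherent oscillation stands in for recorded joint motion.
    phase = 2 * np.pi * t / n_frames
    return base + 0.1 * np.column_stack([np.sin(phase + base[:, 1]),
                                         np.cos(phase + base[:, 0])])

fig, ax = plt.subplots()
dots = ax.scatter(*frame_coords(0).T, c="white")
ax.set_facecolor("black")
ax.set_xlim(-1.5, 1.5)
ax.set_ylim(-1.5, 1.5)

anim = FuncAnimation(fig, lambda t: dots.set_offsets(frame_coords(t)),
                     frames=n_frames, interval=50)
plt.show()
```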

For the spiders, the displays followed the motion of another spider walking. Most of the displays gave the impression of a living animal. Some were less realistic than others, and one, called a random display, did not give the impression of anything living.

The researchers then observed how the spiders reacted and which light display they turned toward on the treadmill. They found the spiders reacted to the different point-light displays by pivoting and facing them directly, which indicated that the spiders were able to recognize biological motion.

Curiously, the team found the spiders preferred rotating towards the more artificial displays, and always toward the random one when it was part of the choice. The researchers initially thought the spiders would turn more toward the displays simulating another spider, and possible danger, but the behavior made sense in the context of jumping spiders and how their secondary set of eyes works to decode information.

“The secondary eyes are looking at this point-light display of biological motion and it can already understand it, whereas the other random motion is weird and they don’t understand what’s there,” De Agrò said.

The researchers hope to look into biological motion recognition in other invertebrates such as other insects or mollusks. The findings could lead to a greater understanding of how these creatures perceive the world, De Agrò said.

For more on this research, read How Spiders Can Distinguish Living From Non-Living Objects in Their Peripheral Vision.

Reference: “Perception of biological motion by jumping spiders” by Massimo De Agrò, Daniela C. Rößler, Kris Kim, and Paul S. Shamble, 15 July 2021, PLOS Biology.
DOI: 10.1371/journal.pbio.3001172

Provided by:  HARVARD UNIVERSITY

Source: SciTechDaily

What we know about the SARS-CoV-2 Delta variant

What we know about the SARS-CoV-2 Delta variant
The R0 (reproduction number) of SARS-CoV-2 variants and other diseases. The higher the R0 number, the more contagious the disease is. Credit: Imperial College London, Lancet, Australian Government

The Delta variant is likely to become the most dominant strain globally. What does that mean for current and future variants? Natural selection has shaped the evolution of all living things on our planet, including viruses. As mutations emerge in viruses, some have little impact, while others outcompete rival variants and persist, such as the SARS-CoV-2 Delta variant, classified as a variant of concern (VOC) by the World Health Organization (WHO).

Last week, Dr. Poonam Khetrapal Singh, WHO Regional Director for South-East Asia, said, “The Delta variant has spread to over a hundred countries and is likely to soon become the most dominant COVID-19 strain globally. Among the variants of concern, Delta spreads most rapidly.”

Researchers have indicated the Delta variant is the most transmissible variant yet—as much as 60 percent more contagious than the Alpha variant. However, there is limited research in terms of whether or not the Delta variant causes more severe illness than other variants.
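To see why a transmissibility gap of that size matters, the toy calculation below compounds it over a few transmission generations. The R0 values are illustrative assumptions only (and ignore immunity and public health measures); they are not figures from the article.

```python
# Illustrative arithmetic only: how a 60% higher reproduction number compounds
# over transmission generations in a fully susceptible population with no
# interventions. The R0 values are assumptions for the example, not study figures.
ALPHA_R0 = 4.0               # assumed baseline for the Alpha variant
DELTA_R0 = ALPHA_R0 * 1.6    # "as much as 60 percent more contagious"

for generation in (1, 3, 5):
    alpha_cases = ALPHA_R0 ** generation
    delta_cases = DELTA_R0 ** generation
    print(f"after {generation} generations: "
          f"Alpha ~{alpha_cases:.0f} cases, Delta ~{delta_cases:.0f} cases "
          f"({delta_cases / alpha_cases:.1f}x)")
```

The point of the sketch is simply that a fixed percentage edge in contagiousness multiplies at every generation, so Delta's advantage grows rapidly over successive rounds of transmission.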

The characterisation of variants by the WHO came about in late 2020 as a result of variants that posed an increased risk to public health. These definitions assist in prioritizing global monitoring, research, and ultimately informing the ongoing response to the COVID-19 pandemic.

What is a variant of concern?

Currently, the Delta variant joins three other variants in this category—Alpha, which was first detected in the UK in September 2020; Beta, the earliest documented sample recorded in South Africa in May 2020; and Gamma, which had samples first documented in Brazil in November 2020.

To be designated a VOC, a variant must meet the definition of a variant of interest (VOI).

This includes a variant:

  • with genetic changes that are predicted or known to affect virus characteristics such as transmissibility, disease severity, immune escape, diagnostic or therapeutic escape; AND
  • identified to cause significant community transmission or multiple COVID-19 clusters, in multiple countries with increasing relative prevalence alongside increasing number of cases over time, or other apparent epidemiological impacts to suggest an emerging risk to global public health.

Currently designated variants include Eta, Iota, Kappa and Lambda.

To be ‘promoted’ to a VOC, the variant needs to have been shown to be associated with one or more of the following changes at a degree of global public health significance:

  • Increase in transmissibility or detrimental change in COVID-19 epidemiology; OR
  • Increase in virulence or change in clinical disease presentation; OR
  • Decrease in effectiveness of public health and social measures or available diagnostics, vaccines, therapeutics.

Will Delta outcompete the other variants of concern?

While the Delta variant continues to spread globally, where does this leave other variants of concern such as Alpha, Beta and Gamma? Will Delta eventually outcompete the others?

Dr. Francesca Di Giallonardo, a virologist at The Kirby Institute at UNSW Sydney suggests this could potentially be the case.

“Most probably yes, as this is a common process in natural selection and immune escape. However, the timeline and characteristics of variant replacement may vary between different geographic regions, particularly in those isolated by border closures.

“Prediction of infectivity of new variants is not trivial. In general, viruses increase their transmissibility. That means that the most ‘successful in transmission’ variant outcompetes other variants. However, it’s almost impossible to predict which variant will win the race and when we have reached the point of most transmissible,” said Dr. Di Giallonardo.

According to Dr. Di Giallonardo, increased viral fitness is characterized by natural selection events. This means if variant B has replaced variant A, it must be fitter by definition, as B has outcompeted A. Most likely, variant B is antigenically distinct, which means it has escaped immune pressure.

“Such variants will keep emerging as immune selection is increased due to more people being vaccinated or infected,” said Dr. Di Giallonardo.

Are fitter variants likely to cause severe disease?

If a variant outcompetes other variants, does that suggest it can cause more severe disease in humans or is that not the case? Dr. Di Giallonardo said this assumption was not necessarily correct.

“New variants are fitter such that their replication and transmission capacity is better compared to the previous variants. There are numerous emerging variants that are region-specific and are seemingly not causing more severe disease.

“However, constant monitoring of such variants is crucial for identifying those that do indeed cause more severe disease. Thus, the WHO and CDC (Centers for Disease Control and Prevention) classify new emerging variants with a potential for increased severity as variants of concern and variants of interest.”

What is our best line of defense?

“Vaccinate, vaccinate, and vaccinate,” said Dr. Di Giallonardo. “We have a plethora of global data on the efficacy of the different vaccines. We know how effective they are in preventing severe disease and reducing transmission.”

Virologist Dr. Chantelle Ahlenstiel of the Kirby Institute said that, for mRNA vaccines such as Pfizer and Moderna, the sequences in the current vaccines — which tell the body how to make a specific type of virus spike protein — can easily be changed to match the spike protein of a newly emerging variant, and could therefore provide protection against those variants too. Additionally, mRNA vaccines have the potential to be manufactured rapidly and inexpensively.

Surveillance is also critical for understanding which variants are circulating, as early identification will allow for rapid evaluation of vaccine efficacy.

“Surveillance means collecting data on the number of infections present for different virus variants, their geographic spread and the associated disease severity. Such data is essential to better understand how variants spread across the states, country, and globally. We have more data for COVID-19 than for any other viral disease, thanks to systems which were already in place such as nextstrain.org and GISAID (Global Initiative on Sharing Avian Influenza Data),” said Dr. Di Giallonardo.

“Surveillance requires a tremendous amount of work and the global efforts have been great.”

Provided by University of New South Wales

Source: Medical Xpress

What should you eat after you’ve been on antibiotics? And can probiotics and prebiotics get your gut back to normal?

Woman holds a bowl of yoghurt and fruit and puts a spoon to her mouth.

Antibiotics treat infections caused by bacteria. But they can also destroy the good bacteria in your gut. For some people, this results in an upset stomach and diarrhoea.

One UK review of the research, which looked at changes in gut bacteria after antibiotics commonly prescribed for respiratory and urinary tract infections, found that after treatment the number and diversity of bacteria types rapidly decline.

It also found some types of “bad” microorganisms increased while some “good” ones decreased.

For most people, once antibiotic treatment was stopped, the gut bacteria recover to some degree. But other studies suggest some antibiotics can have long-lasting effects on the balance of microorganisms.

It’s important to use antibiotics only when needed, and definitely not for viral infections, because antibiotics can’t kill viruses such as the common cold or COVID-19.

So what should you eat after a course of antibiotics? You might have heard of probiotics and prebiotics, but what are they, and what evidence is there to show they’re beneficial?

Probiotics contain ‘good gut bacteria’

Probiotics are foods, typically yoghurts and yoghurt drinks, that contain “good gut bacteria”: live microorganisms that can recolonise the gut or improve your gut health.

To be called a probiotic, they must be able to resist stomach acid and digestive processes, and then be able to adhere to the gut walls and grow, without causing any issues for the gut wall. They must also be tested for safety and efficacy in controlled trials.

To be called a probiotic, the dose of microorganisms needs to be sufficient to help restore the “good” bacteria, by elbowing out the “bad bacteria”.

Most yoghurts contain “good bacteria”, but not all of these bacteria can survive the acidity of the stomach, or they won’t grow in the bowel, so there is no probiotic benefit.

For probiotics to exert these beneficial effects, they not only have to make it to the large bowel, but once there they need the right fuel to help them grow well. That’s where prebiotics come into play – but more on them shortly.

What does the science say about probiotics?

Probiotics are widely promoted as being good for your overall health. The science on that has been mixed, but it does suggest people who are likely to get diarrhoea after antibiotics may benefit from consuming them.

One review of the evidence found probiotics may be useful for those at high risk of antibiotic-associated diarrhoea, such as the elderly and people in hospital.

Woman in supermarket looks at the packaging of a yoghurt container.
Most yoghurts contain good bacteria, but these can’t always survive the acidity of the stomach. Shutterstock

The review found side effects were common when taking antibiotics and included taste disturbances, nausea, abdominal cramping, soft stools, fever and flatulence.

But people taking probiotics reported fewer side effects, suggesting they may be helpful in countering some of the side effects.

So what are prebiotics?

Prebiotics are compounds that help beneficial gut microorganisms grow and survive.

Prebiotic foods contain complex carbohydrates that can’t be digested and dietary fibres that resist digestive processes in the stomach and small intestine.

They pass undigested into the large bowel where they are fermented by the healthy “good” bacteria.

To be called a prebiotic, they need to undergo the processes above, and be shown in clinical trials to selectively improve the microorganism composition in the gut.

Not all dietary fibres are prebiotic. Common ones include complex carbohydrates called fructo-oligosaccharides, inulin and resistant starch.

You can find foods at the supermarket with added prebiotics, but non-digestible carbohydrates occur naturally in many everyday foods, including:

  • grains: barley, rye bread, rye crackers, pasta, gnocchi, couscous, wheat bran, wheat bread, oats
  • legumes: chickpeas, lentils, red kidney beans, baked beans, soybeans
  • vegetables: artichokes, asparagus, beetroot, chicory, fennel bulb, garlic, green peas, leek, onion, shallots, spring onion, snow peas, sweetcorn, savoy cabbage
  • fruit: nectarines, white peaches, persimmon, tamarillo, watermelon, rambutan, grapefruit, pomegranate, dates, figs
  • nuts: cashews, pistachios.
Large bowl of mixed bean salad.
Prebiotics can be found in a range of foods, including legumes. Shutterstock

Additional sources of resistant starch include under-ripe bananas, cooked and cooled rice, cornflour, cooked and cooled potatoes.

For babies, breast milk is naturally rich in oligosaccharides.

So who should have them?

Prebiotic foods are good for everyone, contain a range of nutrients and help promote a healthy bacterial gut environment.

The benefits of probiotics for a range of health conditions are unclear – they’re likely to be small, and depend on what is being taken and the underlying health issues.

But people at high risk of diarrhoea after antibiotics may benefit from consuming probiotic – as well as prebiotic – foods daily.

There is also emerging evidence that combining specific probiotics and prebiotics can increase the beneficial effects of both. Both the pro- and prebiotics could be added to the one food, termed a “synbiotic”, or they could be from separate sources but eaten together.

When it comes to antibiotics, the bottom line is only take them when prescribed for bacterial infections. Take them according to instructions from the manufacturer, your pharmacist and your doctor.

Written by: Clare Collins, Laureate Professor in Nutrition and Dietetics, University of Newcastle

Originally published on The Conversation

Higher Levels of Omega-3 in the Blood Increase Life Expectancy by Almost Five Years

Omega-3 Food Sources

A 1% increase in this substance in the blood is associated with a change in mortality risk similar to that of quitting smoking.

Levels of omega-3 fatty acids in the blood are as good a predictor of mortality from any cause as smoking, according to a study involving the Hospital del Mar Medical Research Institute (IMIM), in collaboration with The Fatty Acid Research Institute in the United States and several universities in the United States and Canada. The study, published in The American Journal of Clinical Nutrition, used data from a long-term study group, the Framingham Offspring Cohort, which has been monitoring residents of this Massachusetts town, in the United States, since 1971.

Researchers have found that omega-3 levels in blood erythrocytes (red blood cells) are very good mortality risk predictors. The study concludes that “Having higher levels of these acids in the blood, as a result of regularly including oily fish in the diet, increases life expectancy by almost five years,” as Dr. Aleix Sala-Vila, a postdoctoral researcher in the IMIM’s Cardiovascular Risk and Nutrition Research Group and author of the study, points out. In contrast, “Being a regular smoker takes 4.7 years off your life expectancy, the same as you gain if you have high levels of omega-3 acids in your blood,” he adds.

2,200 people monitored over eleven years

The study analyzed data on blood fatty acid levels in 2,240 people over the age of 65, who were monitored for an average of eleven years. The aim was to validate which fatty acids function as good predictors of mortality, beyond the already known factors. The results indicate that four types of fatty acids, including omega-3, fulfill this role. It is interesting that two of them are saturated fatty acids, traditionally associated with cardiovascular risk, but which, in this case, indicate longer life expectancy. “This reaffirms what we have been seeing lately,” says Dr. Sala-Vila, “not all saturated fatty acids are necessarily bad.” Indeed, their levels in the blood cannot be modified by diet, as happens with omega-3 fatty acids.

These results may contribute to the personalization of dietary recommendations for food intake, based on the blood concentrations of the different types of fatty acids. “What we have found is not insignificant. It reinforces the idea that small changes in diet in the right direction can have a much more powerful effect than we think, and it is never too late or too early to make these changes,” remarks Dr. Sala-Vila.  

The researchers will now try to analyze the same indicators in similar population groups, but of European origin, to find out if the results obtained can also be applied outside the United States. The American Heart Association recommends eating oily fish such as salmon, anchovies or sardines twice a week because of the health benefits of omega-3 acids.

Reference: “Using an erythrocyte fatty acid fingerprint to predict risk of all-cause mortality: the Framingham Offspring Cohort” by Michael I McBurney, Nathan L Tintle, Ramachandran S Vasan, Aleix Sala-Vila and William S Harris, 16 June 2021, The American Journal of Clinical Nutrition.
DOI: 10.1093/ajcn/nqab195

Provided by: IMIM (HOSPITAL DEL MAR MEDICAL RESEARCH INSTITUTE)

Source: SciTechDaily

Less Popular Blood Pressure Medication Is the (Slightly) Safer Choice

Blood Pressure Device

Two types of drugs that are recommended as a first treatment for patients with high blood pressure were found equally effective in improving cardiovascular outcomes, but the more popular type causes slightly more side effects, finds a multinational observational study led by researchers at Columbia University Vagelos College of Physicians and Surgeons.

The study, which analyzed claims and electronic health data from millions of patients worldwide, is the largest to compare the safety and efficacy of angiotensin-converting enzyme (ACE) inhibitors and angiotensin receptor blockers (ARBs), two commonly prescribed antihypertensive drugs.

“Physicians in the United States and Europe overwhelmingly prescribe ACE inhibitors, simply because the drugs have been around longer and tend to be less expensive than ARBs,” says George Hripcsak, MD, the Vivian Beaumont Allen Professor and chair of biomedical informatics at Columbia University Vagelos College of Physicians and Surgeons and senior author of the study.

“But our study shows that ARBs are associated with fewer side effects than ACE inhibitors. The study focused on first-time users of these drugs. If you’re just starting drug therapy for hypertension, you might consider trying an ARB first. If you’re already taking an ACE inhibitor and you’re not having any side effects, there is nothing that we found that would indicate a need for a change.”

The study was published online in Hypertension.

Narrowing Down Choices

Once a physician decides to prescribe medication to control a patient’s high blood pressure, the next decision — which one to choose — is complicated.

“U.S. and European hypertension guidelines list 30 medications from five different drug classes as possible choices, yet there are very few head-to-head studies to help physicians determine which ones are better,” Hripcsak says. “In our research, we are trying to fill in this information gap with real-world observational data.”

ACE inhibitors and ARBs are among the choices, and they have a similar mechanism of action. Both reduce the risk of stroke and heart attacks, though it’s known that ACE inhibitors are associated with increased risk of cough and angioedema (severe swelling in the face and airways).

“We wanted to see if there were any surprises — were both drug classes equally effective, and were ARBs producing any unexpected side effects when used in the real world?” Hripcsak says. “We’re unlikely to see head-to-head clinical trials comparing the two since we are reasonably sure that both are effective.”

Electronic Health Records Provide Answer

The researchers instead turned to large databases to answer their questions. They analyzed insurance claims and electronic health records from approximately 3 million patients in Europe, Korea, and the United States who were starting antihypertensive treatment with either an ACE inhibitor or an ARB.

Data from electronic health records and insurance claims are challenging to use in research. They can be inaccurate, incomplete, and contain information that biases the results. So the researchers employed a variety of cutting-edge mathematical techniques developed by the Observational Health Data Sciences and Informatics (OHDSI) collaborative network to dramatically reduce bias and balance the two treatment groups as if they had been enrolled in a prospective study.
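One widely used way to balance treatment groups in observational data is propensity-score weighting. The sketch below is a generic illustration of that idea with invented covariates and toy data; it is not the study's actual OHDSI analysis pipeline.

```python
# Generic illustration of balancing two treatment groups in observational data
# with inverse-probability-of-treatment weighting (a propensity-score method).
# NOT the study's actual OHDSI pipeline; covariates and data are invented.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n = 10_000
df = pd.DataFrame({
    "age": rng.normal(60, 10, n),
    "diabetes": rng.integers(0, 2, n),
})
# Treatment assignment depends on the covariates (confounding by indication).
p_arb = 1 / (1 + np.exp(-(-2 + 0.02 * df["age"] + 0.5 * df["diabetes"])))
df["arb"] = rng.random(n) < p_arb          # True = ARB, False = ACE inhibitor (toy labels)

# Propensity score: probability of receiving an ARB given the covariates.
ps_model = LogisticRegression().fit(df[["age", "diabetes"]], df["arb"])
ps = ps_model.predict_proba(df[["age", "diabetes"]])[:, 1]

# Inverse-probability weights make the weighted covariate distributions similar
# in both groups, mimicking a randomized comparison.
df["weight"] = np.where(df["arb"], 1 / ps, 1 / (1 - ps))

for col in ["age", "diabetes"]:
    means = [np.average(df.loc[df["arb"] == g, col],
                        weights=df.loc[df["arb"] == g, "weight"]) for g in (1, 0)]
    print(f"{col}: weighted mean ARB={means[0]:.2f}, ACEi={means[1]:.2f}")
```

After weighting, the two groups have near-identical covariate averages, which is the sense in which such methods let an observational comparison approximate a prospective trial.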

Using this approach, the researchers tracked four cardiovascular outcomes — heart attack, heart failure, stroke, and sudden cardiac death — and 51 adverse events in patients after they started antihypertensive treatment.

The researchers found that the vast majority of patients — 2.3 million — were prescribed an ACE inhibitor. There were no significant differences between the two drug classes in reducing the risk of major cardiovascular complications in people with hypertension. Patients taking ACE inhibitors had the expected higher risk of cough and angioedema, and the study also found they had a slightly higher risk of pancreatitis and gastrointestinal bleeding.

“Our study largely confirmed that both antihypertensive drug classes are similarly effective, though ARBs may be a little safer than ACE inhibitors,” Hripcsak says. “This provides that extra bit of evidence that may make physicians feel more comfortable about prescribing ARBs versus ACE inhibitors when initiating monotherapy for patients with hypertension. And it shows that large-scale observational studies such as this can offer important insight in choosing among different treatment options in the absence of large randomized clinical trials.”

Reference: “Comparative first-line effectiveness and safety of angiotensin converting enzyme inhibitors and angiotensin receptor blockers: a multinational cohort study” 26 July 2021, Hypertension.
DOI: 10.1161/HYPERTENSIONAHA.120.16667

Additional authors are RuiJun Chen (Geisinger Health), Marc Suchard (University of California Los Angeles), Harlan Krumholz (Yale University), Martijn Schuemie (Janssen Research and Development), Steven Shea (Columbia), Jon Duke (Georgia Tech College of Computing), Nicole Pratt (University of South Australia), Christian Reich (OHDSI), David Madigan (Northeastern University), Seng Chan You (Ajou University School of Medicine), and Patrick Ryan (Janssen).

The study was funded by the National Institutes of Health, National Science Foundation, and the Ministries of Health & Welfare and of Trade, Industry & Energy, Republic of Korea.

Provided by: COLUMBIA UNIVERSITY IRVING MEDICAL CENTER

Source: SciTechDaily

Extending Human Lifespans: Using Artificial Intelligence To Find Anti-Aging Chemical Compounds

Artificial Intelligence Artist's Concept

The University of Surrey has built an artificial intelligence (AI) model that identifies chemical compounds that promote healthy aging — paving the way towards pharmaceutical innovations that extend a person’s lifespan.

In a paper published in the journal Scientific Reports, a team of chemists from Surrey built a machine learning model based on information from the DrugAge database to predict whether a compound can extend the life of Caenorhabditis elegans — a translucent worm that shares a similar metabolism with humans. The worm’s shorter lifespan gave the researchers the opportunity to see the impact of the chemical compounds.

The AI singled out three types of compounds that have an 80 percent chance of increasing the lifespan of C. elegans:

  • flavonoids (antioxidant pigments found in plants that promote cardiovascular health),
  • fatty acids (such as omega-3), and
  • organooxygens (compounds that contain carbon-to-oxygen bonds, such as alcohols).
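The paper's title identifies the model as a random-forest classifier, so a minimal sketch of that kind of classifier is shown below. The 80 percent probability cutoff mirrors the figure quoted above, but the molecular descriptors and data are invented placeholders, not the authors' DrugAge feature set or training procedure.

```python
# Minimal sketch of a random-forest classifier of the kind named in the paper's
# title, trained to flag lifespan-extending compounds. Feature names and random
# data are placeholders, not the DrugAge descriptors the authors actually used.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 500
X = pd.DataFrame({
    "mol_weight": rng.normal(350, 80, n),   # hypothetical molecular descriptors
    "logp": rng.normal(2.0, 1.5, n),
    "n_rings": rng.integers(0, 6, n),
})
# Toy label: 1 = extended C. elegans lifespan in DrugAge, 0 = did not.
y = (X["logp"] + 0.01 * X["mol_weight"] + rng.normal(0, 1, n) > 5.5).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_train, y_train)

# Predicted probability that a held-out compound is lifespan-extending;
# compounds above an 80% probability would be treated as strong candidates.
proba = model.predict_proba(X_test)[:, 1]
print("candidates above 80% probability:", int((proba > 0.8).sum()))
print("test accuracy:", round(model.score(X_test, y_test), 2))
```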

Sofia Kapsiani, co-author of the study and final year undergraduate student at the University of Surrey, said:

“Ageing is increasingly being recognized as a set of diseases in modern medicine, and we can apply the tools of the digital world, such as AI, to help slow down or protect against aging and age-related diseases. Our study demonstrates the revolutionary ability of AI to aid the identification of compounds with anti-aging properties.”

Dr. Brendan Howlin, lead author of the study and Senior Lecturer in Computational Chemistry at the University of Surrey, said:

“This research shows the power and potential of AI, which is a specialty of the University of Surrey, to drive significant benefits in human health.”

Reference: “Random forest classification for predicting lifespan-extending chemical compounds” by Sofia Kapsiani and Brendan J. Howlin, 5 July 2021, Scientific Reports.
DOI: 10.1038/s41598-021-93070-6

Provided by: UNIVERSITY OF SURREY

Source: SciTechDaily
