NEW ANALYSIS LINKS TREE HEIGHT TO CLIMATE

From the FMS Global News Desk of Jeanne Hambleton Released: 14-Aug-2014 Citations: Ecology. Source Newsroom: University of Wisconsin-Madison

Newswise — MADISON, Wis. — What limits the height of trees? Is it the fraction of their photosynthetic energy they devote to productive new leaves? Or is it their ability to hoist water hundreds of feet into the air, supplying the green, solar-powered sugar factories in those leaves?

Both factors — resource allocation and hydraulic limitation — might play a role, and a scientific debate has arisen as to which factor (or what combination) actually sets maximum tree height, and how their relative importance varies in different parts of the world.

In research to be published in the journal Ecology — and currently posted online as a preprint — Thomas Givnish, a professor of botany at the University of Wisconsin-Madison, attempts to resolve this debate by studying how tree height, resource allocation and physiology vary with climate in Victoria state, located in southeastern Australia. There, Eucalyptus species exhibit almost the entire global range in height among flowering trees, from 4 feet to more than 300 feet.

“Since Galileo’s time,” Givnish says, “people have wondered what determines maximum tree height: ‘Where are the tallest trees, and why are they so tall?’ Our study talks about the kind of constraints that could limit maximum tree height, and how those constraints and maximum height vary with climate.”

One of the species under study, Eucalyptus regnans — called mountain ash in Australia, but distinct from the smaller and unrelated mountain ash found in the U.S. — is the tallest flowering tree in the world. In Tasmania, an especially rainy part of southern Australia, the tallest living E. regnans is 330 feet tall. (The tallest tree in the world is a coastal redwood in northern California that soars 380 feet above the ground.)

Southern Victoria, Tasmania and northern California all share high rainfall, high humidity and low evaporation rates, underlining the importance of moisture supply to ultra-tall trees. But the new study by Givnish, Graham Farquhar of the Australian National University and others shows that rainfall alone cannot explain maximum tree height.

A second factor, evaporative demand, helps determine how far a given amount of rainfall will go toward meeting a tree’s demands. Warm, dry and sunny conditions cause faster evaporation from leaves, and Givnish and his colleagues found a tight relationship between maximum tree height in old stands in Australia and the ratio of annual rainfall to evaporation. As that ratio increased, so did maximum tree height.
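To make the moisture-index idea concrete, here is a minimal sketch in Python. The site names and numbers are invented for illustration; the study's actual analysis related measured stand heights to measured rainfall and evaporation.

```python
# Illustrative only: invented sites and values, not the study's data.
def moisture_index(annual_rainfall_mm: float, annual_evaporation_mm: float) -> float:
    """Ratio of moisture supply (rainfall) to evaporative demand (evaporation)."""
    return annual_rainfall_mm / annual_evaporation_mm

# Hypothetical sites along a rainfall gradient:
sites = {
    "semi-arid woodland": (350, 1800),
    "foothill forest": (900, 1200),
    "wet mountain forest": (1800, 900),
}

for name, (rain_mm, evap_mm) in sites.items():
    print(f"{name}: moisture index = {moisture_index(rain_mm, evap_mm):.2f}")
# On the study's account, maximum tree height rises as this index rises.
```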

Other factors — like soil fertility, the frequency of wildfires and length of the growing season — also affect tree height. Tall, fast-growing trees access more sunlight and can capture more energy through photosynthesis. They are more obvious to pollinators, and have potential to outcompete other species.

“Infrastructure” — things like wood and roots that are essential to growth but do not contribute to the production of energy through photosynthesis — affects resource allocation, and can explain the importance of the ratio of moisture supply to evaporative demand.

“In moist areas, trees can allocate less to building roots,” Givnish says. “Other things being equal, having lower overhead should allow them to achieve greater height.

“And plants in moist areas can achieve higher rates of photosynthesis, because they can open the stomata on their leaves that exchange gases with the atmosphere. When these trees intake more carbon dioxide, they can achieve greater height before their overhead exceeds their photosynthetic income.”

The constraints on tree height imposed by resource allocation and hydraulics should both increase in drier areas. But Givnish and his team wanted to know the importance of each constraint.

The scientists examined the issue by measuring the isotopic composition of carbon in the wood along the intense rainfall gradient in their study zone. If hydraulic limitation alone were to set maximum tree height, the carbon isotope composition should not vary because all trees should grow up to the point at which hydraulics retards photosynthesis. The isotopic composition should also remain stable if resource allocation alone sets maximum height, because resource allocation does not directly affect the stomata.

But if both factors limit tree height, the heavier carbon isotopes should accumulate in moister areas where faster photosynthesis (enhanced by wide-open stomata) can balance the costs of building more wood in taller trees. Givnish, Farquhar and their colleagues found exactly that, implying that hydraulic limitation more strongly constrains maximum tree height under drier conditions, while resource allocation more strongly constrains height under moist conditions.

Most studies of tree height have focused on finding the tallest trees and explaining why they live where they do, Givnish says.

“This study was the first to ask, ‘How does the maximum tree height vary with the environment, and why?’”

WIRELESS SENSORS AND FLYING ROBOTS: A WAY TO MONITOR DETERIORATING BRIDGES

From the FMS Global News Desk of Jeanne Hambleton Released: 15-Aug-2014 Source Newsroom: Tufts University

Newswise — MEDFORD/SOMERVILLE, Mass. – As a recent report from the Obama administration warns that one in four bridges in the United States needs significant repair or cannot handle automobile traffic, Tufts University engineers are employing wireless sensors and flying robots that could help authorities monitor the condition of bridges in real time.

Today, bridges are inspected visually by teams of engineers who dangle beneath the bridge on cables or look up at the bridge from an elevated work platform. It is a slow, dangerous, expensive process and even the most experienced engineers can overlook cracks in the structure or other critical deficiencies.

A New Monitoring System for Bridges

In the detection system being developed by Babak Moaveni, an assistant professor of civil and environmental engineering at Tufts School of Engineering, smart sensors are attached permanently to bridge beams and joints. Each sensor can continuously record vibrations and process the recorded signal. Changes in the vibration response can signify damage, he says.

Moaveni, who received a grant from the National Science Foundation (NSF) for his research, is collaborating with Tufts Assistant Professor of Electrical and Computer Engineering Usman Khan to develop a wireless system that would use autonomous flying robots (quad-copters) to hover near the sensors and collect data while taking visual images of bridge conditions. The drone-like robots would transmit data to a central collection point for analysis. Khan received a $400,000 Early Career Award from the NSF earlier this year to explore this technology, which requires addressing significant navigational and communications challenges before it could be a reliable inspection tool.

The recent Obama administration report that analyzed the condition of the transportation infrastructure points out that 25 percent of the approximately 600,000 bridges across the country are in such a poor state that they are incapable of handling daily automobile traffic. In Massachusetts, more than 50 percent of the 5,136 bridges in use are deficient, the report says.

Moaveni and Khan’s work could help monitor bridges and identify those that are at risk more accurately than current methods. Once installed, the sensors would provide information about the condition of bridges that cannot be obtained by visual inspection alone and would allow authorities to identify and focus on bridges that need immediate attention.

Moaveni installed a network of 10 wired sensors in 2009 on a 145-foot long footbridge on Tufts’ Medford/Somerville campus. In 2011, Moaveni added nearly 5,000 pounds of concrete weights on the bridge deck to simulate the effects of damage on the bridge—a load well within the bridge’s limits. Connected by cables, the sensors recorded readings on vibration levels as pedestrians walked across the span before and after installation of the concrete blocks. From the changes in vibration measurements, Moaveni and his research team could successfully identify the simulated damage on the bridge, validating his vibration-based monitoring framework.

A major goal of his research, Moaveni says, is to develop computer algorithms that can automatically detect damage in a bridge from the changes in its vibration measurements. His work is ongoing.
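The underlying signal-processing idea can be sketched briefly. Damage (or added mass, as in the footbridge test) shifts a structure's natural frequencies, so one simple detector compares the dominant frequency of new accelerometer data against a healthy baseline. This is a toy illustration of the general vibration-based approach, not Moaveni's actual algorithm, and the tolerance threshold is an invented parameter.

```python
# Toy vibration-based damage detector: not the Tufts team's algorithm.
import numpy as np

def dominant_frequency(signal: np.ndarray, sample_rate_hz: float) -> float:
    """Frequency (Hz) with the largest spectral magnitude."""
    spectrum = np.abs(np.fft.rfft(signal - signal.mean()))
    freqs = np.fft.rfftfreq(signal.size, d=1.0 / sample_rate_hz)
    return float(freqs[spectrum.argmax()])

def damage_suspected(baseline: np.ndarray, current: np.ndarray,
                     sample_rate_hz: float, tolerance_hz: float = 0.1) -> bool:
    """Flag a shift in the dominant natural frequency beyond a tolerance."""
    shift = abs(dominant_frequency(baseline, sample_rate_hz)
                - dominant_frequency(current, sample_rate_hz))
    return shift > tolerance_hz

# Synthetic demo: added mass lowers the natural frequency from 2.0 Hz to 1.8 Hz.
fs = 100.0
t = np.arange(0, 60, 1 / fs)
healthy = np.sin(2 * np.pi * 2.0 * t)
loaded = np.sin(2 * np.pi * 1.8 * t)
print(damage_suspected(healthy, loaded, fs))  # True
```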

“Right now, if a bridge has severe damage, we are pretty confident we can detect that accurately. The challenge is building the system so it picks up small, less obvious anomalies.”

Tufts University School of Engineering

Located on Tufts’ Medford/Somerville campus, the School of Engineering offers a rigorous engineering education in a unique environment that blends the intellectual and technological resources of a world-class research university with the strengths of a top-ranked liberal arts college.

Close partnerships with Tufts’ excellent undergraduate, graduate and professional schools, coupled with a long tradition of collaboration, provide a strong platform for interdisciplinary education and scholarship.

The School of Engineering’s mission is to educate engineers committed to the innovative and ethical application of science and technology in addressing the most pressing societal needs, to develop and nurture twenty-first century leadership qualities in its students, faculty, and alumni, and to create and disseminate transformational new knowledge and technologies that further the well-being and sustainability of society in such cross-cutting areas as human health, environmental sustainability, alternative energy, and the human-technology interface.

SALT CONTRIBUTES TO 1,650,000 DEATHS GLOBALLY EACH YEAR

From the FMS Global News Desk of Jeanne Hambleton Posted on August 13, 2014 By Stone Hearth News Eureka Alert

BOSTON — More than 1.6 million cardiovascular-related deaths per year can be attributed to sodium consumption above the World Health Organization’s recommendation of 2.0g (2,000mg) per day, researchers have found in a new analysis evaluating populations across 187 countries. The findings were published in the August 14 issue of The New England Journal of Medicine.

“High sodium intake is known to increase blood pressure, a major risk factor for cardiovascular diseases including heart disease and stroke,” said first and corresponding author Dariush Mozaffarian, M.D., Dr.P.H., dean of the Friedman School of Nutrition Science and Policy at Tufts University, who led the research while at the Harvard School of Public Health. “However, the effects of excess sodium intake on cardiovascular diseases globally by age, sex, and nation had not been well established.”

The researchers collected and analyzed existing data from 205 surveys of sodium intake in countries representing nearly three-quarters of the world’s adult population, in combination with other global nutrition data, to calculate sodium intakes worldwide by country, age, and sex. Effects of sodium on blood pressure and of blood pressure on cardiovascular diseases were determined separately in new pooled meta-analyses, including differences by age and race. These findings were combined with current rates of cardiovascular diseases around the world to estimate the numbers of cardiovascular deaths attributable to sodium consumption above 2.0g per day.
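In outline, the attribution step is a population attributable fraction (PAF) calculation. The sketch below collapses the study's age-, sex- and country-specific modelling into a single hypothetical relative risk per gram of sodium above the 2.0g reference; the inputs are invented, so the output is only an order-of-magnitude illustration, not the study's estimate.

```python
# Order-of-magnitude sketch of the attribution logic; inputs are invented.
REFERENCE_G_PER_DAY = 2.0

def attributable_deaths(mean_intake_g: float, rr_per_gram: float,
                        total_cvd_deaths: float) -> float:
    """Deaths attributable to sodium intake above the reference, via a
    population attributable fraction: PAF = (RR - 1) / RR."""
    excess_g = max(mean_intake_g - REFERENCE_G_PER_DAY, 0.0)
    rr = rr_per_gram ** excess_g        # relative risk at this mean intake
    paf = (rr - 1.0) / rr               # attributable fraction of deaths
    return paf * total_cvd_deaths

# Hypothetical inputs: global mean intake of 3.95 g/day, an invented relative
# risk of 1.06 per excess gram, and roughly 17 million CVD deaths per year.
print(f"{attributable_deaths(3.95, 1.06, 17_000_000):,.0f}")
```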

The researchers found the average level of global sodium consumption in 2010 to be 3.95g per day, nearly double the 2.0g recommended by the World Health Organization. All regions of the world were above recommended levels, with regional averages ranging from 2.18g per day in sub-Saharan Africa to 5.51g per day in Central Asia. In their meta-analysis of controlled intervention studies, the researchers found that reduced sodium intake lowered blood pressure in all adults, with the largest effects identified among older individuals, blacks, and those with pre-existing high blood pressure.

“These 1.65 million deaths represent nearly one in 10 of all deaths from cardiovascular causes worldwide. No world region and few countries were spared,” added Mozaffarian, who chairs the Global Burden of Diseases, Nutrition, and Chronic Disease Expert Group, an international team of more than 100 scientists studying the effects of nutrition on health and who contributed to this effort.

“These new findings inform the need for strong policies to reduce dietary sodium in the United States and across the world.”

In the United States, average daily sodium intake was 3.6g, 80 percent higher than the amount recommended by the World Health Organization. [The federal government's Dietary Guidelines for Americans recommend limiting intake of sodium to no more than 2,300mg (2.3g) per day.] The researchers found that nearly 58,000 cardiovascular deaths each year in the United States could be attributed to daily sodium consumption greater than 2.0g. Sodium intake and corresponding health burdens were even higher in many developing countries.

“We found that four out of five global deaths attributable to higher than recommended sodium intakes occurred in middle- and low-income countries,” added John Powles, M.B., B.S., last author and honorary senior visiting fellow in the department of public health and primary care at the University of Cambridge.

“Programs to reduce sodium intake could provide a practical and cost effective means for reducing premature deaths in adults around the world.”

The authors acknowledge that their results utilize estimates based on urine samples, which may underestimate true sodium intakes. Additionally, some countries lacked data on sodium consumption, which was estimated based on other nutritional information; and, because the study focuses on cardiovascular deaths, the findings may not reflect the full health impact of sodium intake, which is also linked to higher risk of nonfatal cardiovascular diseases, kidney disease and stomach cancer, the second most-deadly cancer worldwide.

This research was supported by a grant from the Bill and Melinda Gates Foundation.

Mozaffarian, D; Fahimi, S; Singh, G; Micha, R; Khatibzadeh, S; Engell, R; Lim, S; Danaei, G; Ezzati, M; and Powles, J. “Global sodium consumption and death from cardiovascular causes.” N Engl J Med 2014; 371(7): 624-634. DOI: 10.1056/NEJMoa130412

About the Friedman School of Nutrition Science and Policy

The Gerald J. and Dorothy R. Friedman School of Nutrition Science and Policy at Tufts University is the only independent school of nutrition in the United States. The school’s eight degree programs – which focus on questions relating to nutrition and chronic diseases, molecular nutrition, agriculture and sustainability, food security, humanitarian assistance, public health nutrition, and food policy and economics – are renowned for the application of scientific research to national and international policy.

Back tomorrow – Jeanne


PATIENTS SELF-PRESCRIBE ‘CANCER-PREVENTING’ ASPIRIN AS PHARMACY SALES SOAR


From the FMS Global News Desk of Jeanne Hambleton      3 August  2014  By Caroline Price Pulse Today

BREAKING NEWS

 

Exclusive: Pharmacies have reported big hikes in aspirin sales in the past week after UK academics called for people in late middle age to start taking daily doses of the drug to prevent stomach and bowel cancer, Pulse has learnt.

Health retailer Superdrug reported a surge in the amount of low-dose aspirin sold last week in its stores, recording a 229% increase in sales on the preceding week.

The figures came as UK experts claimed the benefits of taking a low dose of aspirin daily to prevent stomach and bowel cancer outweigh any risks for most people aged 50-65.

The researchers had warned there were still some doubts regarding the evidence – in particular over what dose should be taken and for how long – and advised people to consult their GP before choosing to self-prescribe aspirin.

But one Superdrug store in Bolton last week reported a massive 500% increase in sales after the announcement, a finding reflected by a big jump in sales of 75mg aspirin across Superdrug stores nationally compared with the previous week – and a 400% increase in the London region.

A spokesperson for Superdrug told Pulse: ‘Aspirin sales were up 229% nationally week on week, on aspirin 75mg last week in comparison to the week before. In London sales were up 400% week-on-week.’

Elsewhere independent chain LloydsPharmacy told Pulse they had noticed a smaller but still marked increase in sales nationally, with a 27% increase in the volume of sales compared with the same week last year, and a 36% increase in volume compared with the preceding week.

Boots declined to share information on its aspirin sales while Day Lewis, Morrisons and Whitworth said they had not seen a big change in the overall pattern of sales.

GP leaders stressed there is still not enough evidence to recommend anyone takes aspirin routinely for cancer prevention – but said it was more appropriate for the public to consult their local pharmacist about the pros and cons, rather than visiting over-stretched GPs.

Dr Andrew Green, chair of the GPC clinical and prescribing subcommittee, said: ‘I would be encouraging people to have a chat with their pharmacist about it rather than their GP. Whether someone should be taking aspirin or not is well within the pharmacists’ competence.’

He added: ‘The advice from a GP I would suggest is at the moment is we don’t have enough evidence to recommend it for everybody. If a patient wants to disregard that and take it then they should still get some advice – but the pharmacist can advise them if there is anything in their past medical history or their current prescriptions that means they shouldn’t take aspirin.’

Dr Richard West agreed that while people should get advice before deciding to take aspirin, consulting a GP may not be necessary.

Dr West said: ‘It’s a difficult balance – there are undoubtedly some risks from taking it and therefore it is worth discussing it with an appropriate healthcare professional beforehand.

‘However, as we know general practice is under a lot of pressure at the moment and therefore if a pharmacist felt capable of giving that advice then I think that would be perfectly appropriate.’

A spokesperson for the Royal Pharmaceutical Society said: ‘Pharmacists are well practised in dealing with requests for treatments following a big media story. The links between cancer prevention and aspirin are not new but as yet haven’t led to a change in indication or licence of aspirin.

‘Although aspirin is often portrayed as a wonder drug, it can cause serious harms, especially in people with pre-existing conditions such as stomach ulcers.’

 

ASPIRIN FOR PRIMARY PREVENTION IN DIABETES ‘SHOULD BE RESTRICTED’

From the FMS Global News Desk of Jeanne Hambleton 9 May 2013             By Caroline Price Pulse Today

 

Daily low-dose aspirin treatment does not prevent cardiovascular events or death in people with type 2 diabetes and no previous cardiovascular disease (CVD), and may even increase the risk of coronary heart disease (CHD) in female patients, shows a large cohort study.

The study

Researchers analysed the outcomes of 18,646 men and women with type 2 diabetes and no CVD history, aged between 30 and 80 years, over an average of four years beginning in 2006, using data from the Swedish National Diabetes Registry. In all, 4,608 patients received low-dose (75 mg/day) aspirin treatment while 14,038 patients received no aspirin treatment, giving 69,743 aspirin person-years and 102,754 non-aspirin person-years of follow-up.

The findings

Aspirin treatment was not associated with any benefit in terms of cardiovascular outcomes or mortality, after propensity score and multivariable adjustment. Aspirin-treated and non-aspirin-treated groups had similar risks across the outcomes: non-fatal or fatal CVD, fatal CVD, fatal CHD, non-fatal or fatal stroke, fatal stroke and total mortality.

Patients who received aspirin had a significant 19% increased risk of non-fatal or fatal CHD; further analysis stratifying the group by gender showed this was driven by a significant 41% increased risk in women, while there was no increased risk in men. Women also had a 28% increased risk of fatal or non-fatal CVD.

There was also a borderline significant 41% increase in risk of non-fatal or fatal total haemorrhage with aspirin, but this association became weaker when broken down by gender.

Risks of cerebral or ventricular bleeding did not differ between groups, but aspirin use was associated with a significant 64% increased risk of ventricular (gastric) ulcer, driven by a 2.3-fold increase in women, while no increased risk was found in men.

Furthermore, the effects of aspirin on these endpoints were similar in patients with high estimated CV risk (five-year risk 15% or higher) and those with low estimated CV risk (five-year risk below 15%).

What this means for GPs

The results support current guidance from the European Society of Cardiology and the European Association for the Study of Diabetes, which does not recommend primary prevention with aspirin in patients with diabetes, but conflict with the NICE type 2 diabetes guidelines, which recommend primary prevention with 75 mg/day aspirin in patients aged 50 years or older if their blood pressure is below 145/90 mmHg, and in patients younger than 50 who have another significant cardiovascular risk factor.

The authors conclude: ‘The present study shows no association between aspirin use and beneficial effects on risks of CVD or mortality in patients with diabetes and no previous CVD and supports the trend towards a more restrictive use of aspirin in these patients, also underlined by the increased risk of ventricular ulcer associated with aspirin.’

 

GPS TOLD TO REVIEW ASPIRIN USE IN PATIENTS WITH ATRIAL FIBRILLATION

From the FMS Global News Desk of Jeanne Hambleton 18 June 2014        By Caroline Price Pulse Today

 

GPs are to be tasked with reviewing all their patients with atrial fibrillation who are taking aspirin, under final NICE guidance published today that recommends anticoagulant therapy as the only option for stroke prevention in these patients.

The new guidance means GPs will need to start advising patients with atrial fibrillation who are on aspirin to stop taking it, and encourage them to take warfarin or one of the newer oral anticoagulants.

NICE said just over a fifth of the UK population with atrial fibrillation – around 200,000 patients – are currently on aspirin, many of whom should be able to be switched onto anticoagulation therapy of some sort.

GP leaders have warned that practices do not have the capacity to proactively call in patients, and suggested that changing management of this number of patients could only be achieved through incentive schemes such as enhanced services or the QOF.

But NICE advisors and CCG cardiology leads have claimed that GPs can do the reviews opportunistically over the coming year.

The final publication comes after it emerged the GPC had raised serious concerns over the complexity of the draft guidance – and warned CCGs would need to consider developing enhanced services to support GPs in delivering it.

Dr Andrew Green, chair of the GPC’s clinical and prescribing subcommittee, told Pulse GPs should feel they can refer patients on if they are not able to deal with all the changes as part of annual reviews.

Dr Green said: ‘I would expect GPs as part of their normal work to consider whether [atrial fibrillation] patients not on anticoagulation should be, in the light of the new guidance. If they should be, then the choice is between anticoagulation with warfarin or one of the newer agents, and if GPs do not feel they have the expertise or resources to do this properly, they have a duty to refer to someone who can.’

He added: ‘Commissioners need to predict this activity and may want to commission a service specifically for this which is more cost-effective than a traditional out-patient referral.’

Local GP leaders told Pulse practices would not take a systematic approach to reviewing and updating patients’ medications unless the work was specifically funded.

Dr Peter Scott, a GP in Solihull and chair of the GPC in West Midlands, said: ‘It’s not going to happen unless it’s resourced and incentivised as part of a DES or LES, or through the QOF – until then I don’t think a systematic approach to this will happen.’

But Dr Matthew Fay, a GP in Shipley, Yorkshire, and member of the NICE guidelines development group, acknowledged the workload concerns and said GPs should be advised to review patients opportunistically.

Dr Fay said: ‘I think it’s perfectly acceptable [to review patients opportunistically]. A lot of these patients who are at risk in this situation we will be reviewing because of their hypertension and other comorbidities, and those patients on aspirin should have that discussed at the next presentation.’

He added: ‘I think anticoagulation is an intimidating topic for clinicians – both in primary and secondary care. I would suggest that in each practice one clinician is involved with the management of the anticoagulated patients – whether that’s keeping a check on them during the warfarin clinic or being the person who initiates the novel oral anticoagulants.

‘If GPs feel uncomfortable with [managing anticoagulation] then they should be approaching the CCG executive to say, “we need a service to provide expert support for this”. The CCG may choose to come up with an enhanced service – but then whoever is providing the service needs to make sure they are well versed in use of the latest anticoagulants.’

The new guidance says GPs must use the CHA2DS2-VASc score to assess patients’ stroke risk and advise any patients with a score of at least one (men) or two (women) to go onto anticoagulation therapy with warfarin, or another vitamin K antagonist, or with one of the novel oral anticoagulants (NOACs) dabigatran, apixaban or rivaroxaban.
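For reference, the CHA2DS2-VASc score itself is a simple additive checklist. Below is a sketch using the standard published point weights; how such a calculation would be wired into a practice records system is left aside.

```python
# CHA2DS2-VASc with the standard published point weights.
def cha2ds2_vasc(age: int, female: bool, heart_failure: bool,
                 hypertension: bool, diabetes: bool,
                 prior_stroke_tia: bool, vascular_disease: bool) -> int:
    score = 0
    score += 1 if heart_failure else 0      # C: congestive heart failure
    score += 1 if hypertension else 0       # H: hypertension
    score += 2 if age >= 75 else (1 if age >= 65 else 0)  # A2 / A: age bands
    score += 1 if diabetes else 0           # D: diabetes
    score += 2 if prior_stroke_tia else 0   # S2: stroke/TIA/thromboembolism
    score += 1 if vascular_disease else 0   # V: vascular disease
    score += 1 if female else 0             # Sc: sex category
    return score

# A 72-year-old woman with hypertension scores 3, so under the guidance
# (threshold of 2 for women) anticoagulation would be advised.
print(cha2ds2_vasc(age=72, female=True, heart_failure=False,
                   hypertension=True, diabetes=False,
                   prior_stroke_tia=False, vascular_disease=False))  # 3
```

The extra point women receive is for sex alone, which is why the guidance's treatment thresholds differ by one between men and women.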

It adds that aspirin should no longer be prescribed solely for stroke prevention to patients with atrial fibrillation.

The HAS-BLED score should be used to assess patients’ risk of bleeding as part of the decision over which anticoagulant to choose.

In the only major revision to the draft guidance, aspirin is no longer to be considered even as part of dual antiplatelet therapy for patients at particularly high bleeding risk, as this combination has now also been ruled out.

 

BENEFITS OF ASPIRIN A DAY FOR CANCER PREVENTION IN MIDDLE-AGED PEOPLE ‘OUTWEIGH HARMS’

This story was published a few days ago.

OWLS PROVIDE CLUES ON HOW HUMANS FOCUS ATTENTION

From the FMS Global News Desk of Jeanne Hambleton Embargoed: 11-Sep-2014 Source: Johns Hopkins University Citations: Neuron, online 11-Sep-2014

 

Newswise — Imagine a quarterback on the gridiron getting ready to pass the ball to a receiver. Suddenly, in charges a growling linebacker aiming to take him down. At what point does the quarterback abandon the throw and trigger evasive maneuvers?

A 1-pound owl might have some answers.

A Johns Hopkins University neuroscientist who works with barn owls is publishing his study about attention that reveals the rules and mechanisms for how the brain makes such decisions.

Shreesh Mysore, lead author of the paper to be published online Sept. 11 in the journal Neuron, and an assistant professor in the university’s Department of Psychological and Brain Sciences, said the scenarios represent two options: the “top-down” control of attention in which you choose what to focus on, and “bottom-up” control of attention in which physical stimuli in the world capture your attention by virtue of their properties.

It is competition among options – focus on the wide receiver or dodge the linebacker – that determines in a sliver of a second which situation requires immediate attention.

Mysore completed his research while he was a postdoctoral scholar at Stanford University. Eric Knudsen, a professor at the Stanford School of Medicine, is a senior author of the study. Mysore says the findings from this research provide insights into how the brain might choose what to focus on.

“The idea is that there is constant interplay and competition between these two kinds of influences,” said Mysore. “At any given moment, your brain has to pick from a vast variety of information, both top-down and bottom-up, and the brain runs a competition and picks a winner. To anthropomorphize this process, the brain basically says, ‘At this instant, I’m going to select this location in the world to direct my attention.’”

Learning more about how attention is controlled can help in the future treatment of such disorders as attention deficit disorder, autism and schizophrenia.

As for barn owls, their acute hearing and laser-like gaze make them great test cases in research exploring spatial attention.

Using a visual projector and a pair of specialized earphones, the owls were presented with a series of computer-controlled images of dots and noise bursts. Electrodes slightly thicker than an average human hair were inserted into a portion of the owls’ brain called the optic tectum.

The tectum is a key hub in the midbrain of all vertebrate animals and is important for the control of spatial attention. Cells in different layers of the tectum have different jobs, with the surface layers being more vision-dominated, and the deeper layers processing information from multiple senses and driving body movement. All layers contain a map-like representation of the outside world.

After determining that brain cells in the tectum fired when the images and sounds appeared, the researchers then used two stimuli to measure which was more likely to dominate in the brain’s representation of the world.

“If you want to measure competition, you have to have things that are competing against each other,” said Mysore.

“So we used two stimuli, either two images or an image and a sound. Because objects with stronger physical properties tend to capture your attention behaviorally, we were looking for some signature of a ‘switch’ in the brain’s activity when one stimulus became just stronger than the other.”

This physical strength of a stimulus, called salience, was varied in the experiments by having visual dots loom at different speeds or by changing the loudness of sounds. When the owls were exposed to just these bottom-up, sensory stimuli, the researchers found an abrupt, switch-like change in the activity of tectal neurons, consistent with a previous study by Mysore.

“With these results as a basis, we went after the main goal of this study, which was to examine how top-down information influences competition and selection,” he said.

In an interesting twist, the study’s authors controlled where the owls intended to apply their focus. By delivering a tiny amount of current to a specific part of a region in the forebrain of owls called the gaze field, they essentially caused the animal to “want to” pay attention to a specific location in the world.

The authors found that this intention to pay attention to one particular stimulus had a powerful effect in that it nearly tripled the ability of the brain to determine which among all competing stimuli was the strongest. This ability could potentially make the animal that much better at correctly deciding which information is the most important at any instant.

In addition, using a computer model of the neurons in the tectum, they were able to provide an explanation for how top-down information may fine-tune the ability of the brain to make decisions about where to pay attention.

Mysore said that while much is known about how the brain processes sensory information, not as much is understood about how the brain performs stimulus competition to decide where to focus. This study provides important clues.

“One of my longstanding interests is to understand how specific circuits in the brain produce specific behaviors,” he said. “My hope is that by understanding neural computations and circuits underlying behavior in normal animals, we will be able to contribute meaningfully to the understanding of what has gone awry in disease states. This, to me, is one of the best approaches to developing effective therapeutics for psychiatric disorders.”

 

CAN YOUR BLOOD TYPE AFFECT YOUR MEMORY?

From FMS Global News Desk of Jeanne Hambleton Embargoed: 10-Sep-2014 Source: American Academy of Neurology (AAN) Citations: Neurology

 

Newswise — MINNEAPOLIS – People with blood type AB may be more likely to develop memory loss in later years than people with other blood types, according to a study published in the September 10, 2014, online issue of Neurology®, the medical journal of the American Academy of Neurology.

AB is the least common blood type, found in about 4 percent of the U.S. population. The study found that people with AB blood were 82 percent more likely to develop the thinking and memory problems that can lead to dementia than people with other blood types.

Previous studies have shown that people with type O blood have a lower risk of heart disease and stroke, factors that can increase the risk of memory loss and dementia.

The study was part of a larger study (the REasons for Geographic And Racial Differences in Stroke, or REGARDS, Study) of more than 30,000 people followed for an average of 3.4 years. Among participants who had no memory or thinking problems at the beginning, the study identified 495 who developed thinking and memory problems, or cognitive impairment, during follow-up.

They were compared to 587 people with no cognitive problems.

People with AB blood type made up 6 percent of the group who developed cognitive impairment, which is higher than the 4 percent found in the population.

“Our study looks at blood type and risk of cognitive impairment, but several studies have shown that factors such as high blood pressure, high cholesterol and diabetes increase the risk of cognitive impairment and dementia,” said study author Mary Cushman, MD, MSc, of the University of Vermont College of Medicine in Burlington.

“Blood type is also related to other vascular conditions like stroke, so the findings highlight the connections between vascular issues and brain health. More research is needed to confirm these results.”

Researchers also looked at blood levels of factor VIII, a protein that helps blood to clot. High levels of factor VIII are related to higher risk of cognitive impairment and dementia. People in this study with higher levels of factor VIII were 24 percent more likely to develop thinking and memory problems than people with lower levels of the protein.

People with AB blood had a higher average level of factor VIII than people with other blood types.

The study was supported by the National Institute of Neurological Disorders and Stroke, National Institutes of Health, U.S. Department of Health and Human Services and National Heart, Lung, and Blood Institute.

The American Academy of Neurology, an association of 28,000 neurologists and neuroscience professionals, is dedicated to promoting the highest quality patient-centered neurologic care. A neurologist is a doctor with specialized training in diagnosing, treating and managing disorders of the brain and nervous system such as Alzheimer’s disease, stroke, migraine, multiple sclerosis, brain injury, Parkinson’s disease and epilepsy.

 

YOGIC BREATHING SHOWS PROMISE IN REDUCING SYMPTOMS OF POST-TRAUMATIC STRESS DISORDER

From the FMS Global News Desk of Jeanne Hambleton Released: 11-Sep-2014 Source: University of Wisconsin-Madison Citations: Journal of Traumatic Stress

 

Newswise — MADISON, Wis. — One of the greatest casualties of war is its lasting effect on the minds of soldiers. This presents a daunting public health problem: More than 20 percent of veterans returning from the wars in Iraq and Afghanistan have post-traumatic stress disorder, according to a 2012 report by RAND Corp.

A new study from the Center for Investigating Healthy Minds at the Waisman Center of the University of Wisconsin-Madison offers hope for those suffering from the disorder. Researchers there have shown that a breathing-based meditation practice called Sudarshan Kriya Yoga can be an effective treatment for PTSD.

Individuals with PTSD suffer from intrusive memories, heightened anxiety, and personality changes. The hallmark of the disorder is hyperarousal, which can be defined as overreacting to innocuous stimuli, and is often described as feeling “jumpy,” or easily startled and constantly on guard.

Hyperarousal is one aspect of the autonomic nervous system, the system that controls the beating of the heart and other body functions, and governs one’s ability to respond to his or her environment. Scientists believe hyperarousal is at the core of PTSD and the driving force behind some of its symptoms.

Standard treatment interventions for PTSD offer mixed results. Some individuals are prescribed antidepressants and do well while others do not; others are treated with psychotherapy and still experience residual effects of the disorder.

Sudarshan Kriya Yoga is a practice of controlled breathing that directly affects the autonomic nervous system. While the practice has proven effective in balancing the autonomic nervous system and reducing symptoms of PTSD in tsunami survivors, it has not been well studied until now.

The CIHM team was interested in Sudarshan Kriya Yoga because of its focus on manipulating the breath, which in turn may have consequences for the autonomic nervous system and, specifically, hyperarousal. Theirs is the first randomized, controlled, longitudinal study to show that the practice of controlled breathing can benefit people with PTSD.

“This was a preliminary attempt to begin to gather some information on whether this practice of yogic breathing actually reduces symptoms of PTSD,” says Richard J. Davidson, founder of CIHM and one of the authors of the study.

“Secondly, we wanted to find out whether the reduction in symptoms was associated with biological measures that may be important in hyperarousal.”

These tests included measuring eye-blink startle magnitude and respiration rates in response to stimuli such as a noise burst in the laboratory. Respiration is one of the functions controlled by the autonomic nervous system; the eye-blink startle rate is an involuntary response that can be used to measure one component of hyperarousal. These two measurements reflect aspects of mental health because they affect how an individual regulates emotion.

The CIHM study included 21 soldiers: an active group of 11 and a control group of 10. Those who received the one-week training in yogic breathing showed lower anxiety, reduced respiration rates and fewer PTSD symptoms.

Davidson would like to further the research by including more participants, with the end goal of enabling physicians to prescribe treatment based on the cognitive and emotional style of the individual patient.

“A clinician could use a ‘tool box’ of psychological assessments to determine the cognitive and emotional style of the patient, and thereby determine a treatment that would be most effective for that individual,” he says.

“Right now, a large fraction of individuals who are given any one type of therapy are not improving on that therapy. The only way we can improve that is if we determine which kinds of people will benefit most from different types of treatments.”

That assessment is critical. At least 22 veterans take their own lives every day, according to the U.S. Department of Veterans Affairs. Because Sudarshan Kriya Yoga has already been shown to increase optimism in college students, and reduce stress and anxiety in people suffering from depression, it may be an effective way to decrease suffering and, quite possibly, the incidence of suicide among veterans.

The study, published in the Journal of Traumatic Stress, was funded by a grant from the Disabled Veterans of America Charitable Service Trust and individual donors.

Back Wednesday. Jeanne


See also jeannehambeton77.wordpress.com for even more Medical Troubles.

THOUSANDS OF PATIENTS WRONGLY REMOVED FROM PRACTICE LISTS AS PART OF MANAGERS’ £85M SAVINGS DRIVE

From the FMS Global News Desk of Jeanne Hambleton   PULSE TODAY    September 2 2014 | By Jaimie Kaffash

Exclusive: Thousands of patients have been forced to re-register with their GP due to an NHS cost-cutting programme that targets the very elderly and children for removal from practice lists.

A Pulse investigation reveals that nearly 12,000 patients have been forced to re-register with their GP since April 2013 after being removed from their GP’s list by managers.

GPs say the programme to check their practice lists is putting vulnerable patients at risk, with some missing vital check-ups as a result of not being on a GP list.

The figures – obtained under the Freedom of Information Act – are the first to show the results of the programme begun by NHS England last May to validate practice lists to reduce the number of so-called ‘ghost patients’ registered by GPs and save the NHS £85m.

They reveal that the re-registration rate for this programme is running at around 14% of the patients removed, double the rate for previous programmes run by PCTs.

The programme specifically targets patients aged 100 years or over, children and others who do not attend their vaccination appointments and people living in homes with ‘apparent multiple occupancy’.

NHS England released guidance in May 2013 that said area teams were expected to engage in ‘regular proactive list management with general practices’ to ensure that practices are being paid for genuine patients and that those with 5% more registered patients than head of population ‘will be benchmarked against their achievement in reducing lists’.

Results of the Pulse investigation include:

•  Some 20 of the 25 area teams have begun work to validate GP lists since last year

•  The 10 teams able to provide figures have removed some 83,420 patients from lists so far

•  11,894 of these were subsequently forced to re-register with their GP as they were genuine patients, resulting in an error rate of 14%

•  Extrapolated across the country, this could mean up to 35,000 patients being removed from practice lists and forced to re-register with their GP due to the programme (the sketch after this list walks through the arithmetic)

•   The Thames Valley area team had the worst record, with re-registration rates of more than 40% for its scheme, against 32% for the Birmingham area team and 14% in Leicester and Lincolnshire.
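The extrapolation above is simple proportional scaling from the ten reporting area teams to all 25, which assumes the reporting teams are representative — an assumption the investigation itself hedges with "up to":

```python
# Scaling Pulse's figures from the 10 reporting area teams up to all 25.
# Assumes the reporting teams are representative of the rest.
removed_by_10_teams = 83_420   # patients removed so far
re_registered = 11_894         # of those, genuine patients who re-registered

error_rate = re_registered / removed_by_10_teams      # ~14%
national_estimate = re_registered * (25 / 10)         # scale to 25 teams
print(f"error rate {error_rate:.0%}; ~{national_estimate:,.0f} wrongly removed")
# -> error rate 14%; ~29,735 wrongly removed (rounded up to 35,000 in the article)
```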

GPs say that the list cleansing drive is causing distress to patients, including leading to angry scenes in waiting rooms, while others claim that patients are being removed without even being contacted.

Dr Louise Irvine, a GP in Lewisham, south east London, said that ‘loads and loads’ of her patients have been wrongly removed from her patient list. She added: ‘Patients were very distressed at being removed. Sometimes there are angry scenes at reception, which is a distressing situation for everyone.

‘We try to re-register them as quickly as possible to stop this interfering with their care or their ability to get an appointment, but it leads to a lot of time being spent by our receptionists who are already very busy.’

Dr Sanjeev Juneja, a GP in Rochester, Kent, said that a child had been targeted because the three-year-old failed to return a letter. ‘Vaccinations were missed as the child was not listed,’ he said.

Dr Tony Grewal, medical director of Londonwide LMCs, said that NHS England’s targeting of patients in multi-occupancy residences was adversely affecting vulnerable patients from overseas.

He said: ‘It is rare that a letter sent to somebody living in a multi-occupancy residency will get to the right person.

‘Very often these people do not speak English as a first language, or do not understand the importance of a letter they get from NHS England. Those seem to be the higher proportion of the removals.

‘But they are also more likely to be a vulnerable group and it has been a concern for us.’

An NHS spokesperson said: ‘NHS England takes all possible steps it can to contact patients and minimise the number who need to re-register – but there will always be some circumstances where patients do not respond and at that point we have to assume that they have moved away from that address and are therefore not in reality receiving services from that GP. Patients can always re-register with that practice or another; it is therefore incorrect to say that they have been “wrongly removed”.’

Pulse reported last year that PCTs were taking part in list cleansing exercises, with 7% of patients removed under list-cleansing schemes having to re-register with their GP in 2012/13.

 

GHOST PATIENT DRIVE COMES BACK TO HAUNT GPS

From the FMS Global News Desk of Jeanne Hambleton 2 September 2014 By Jaimie Kaffash PULSE TODAY

NHS England is going even further than PCTs in its scheme to remove patients from practice lists, causing distress and an administrative headache for patients and practices, finds Jaimie Kaffash

GP practices are at the centre of an unprecedented drive to remove so-called ‘ghost’ patients from their lists.

The rolling three-year programme is projected to save the NHS £85m, but a Pulse investigation reveals it has removed thousands of genuine patients, blocking them from accessing health services.

Of the 4.8 million patients who have been reviewed already, 4.2% have been deleted from lists, but in some areas up to 40% of these patients were genuine and have had to re-register with their practice after discovering they had been removed.

NHS England says it takes ‘all possible steps’ to contact patients before removing them from GP lists, but GPs say they are targeting vulnerable groups, causing great distress for patients and causing some to miss key appointments as a result.

In one case, Pulse has learnt that students were removed from a GP’s list without first being contacted, after area team managers simply asked the local university whether they still resided there.

The programme began last summer, with NHS England publishing a national policy to target the very elderly, children and those in multiple occupancy homes to ensure that ‘list inflation is appropriately managed’.

NHS England’s guidance says area teams are expected to engage in ‘regular proactive list management with general practices’ to ensure that practices are being paid for genuine patients and that those with 5% more registered patients than head of population ‘will be benchmarked against their achievement in reducing lists’.

Three-year plan

Area teams have been instructed to validate all GP lists continuously over a one to three-year period, and they are making remarkable progress. Data obtained by Pulse under the Freedom of Information Act shows that 20 of the 25 area teams have begun work to validate GP lists since last year. The 10 teams able to provide figures have removed some 200,000 patients from lists so far.

But 14% of patients have subsequently re-registered with their GP, an error rate double that for previous PCT schemes – a Pulse investigation last year found 7% of patients under list-cleansing schemes had to re-register with their GP in 2012/13.

The Thames Valley area team had the worst record, with re-registration rates of more than 40% for its scheme, against 32% for the Birmingham area team and 14% in Leicester and Lincolnshire. Extrapolated across the whole of England, this would mean around 35,000 patients eventually being wrongly removed from GP lists under this programme.

For GP practices in affected areas this is a huge problem. These reviews are leading to severe distress when patients find out they have been removed from their practice’s list – and GPs inevitably take the blame.

Dr Sally Whale, a GP in Ipswich, says: ‘We probably have several every month who find they have been removed from our list without their knowledge.

‘They are usually confused and angry, as being told you are not registered when you need an appointment causes worry,’ she says.

Dr Louise Irvine, a GP in Lewisham, south east London, says that ‘loads and loads’ of patients have been wrongly removed from her patient list. She adds: ‘Patients were very distressed at being removed.

‘We try to re-register them as quickly as possible to stop this interfering with their care or their ability to get an appointment, but it leads to a lot of time being spent by our receptionists who are already very busy, trying to explain what happened and getting them to fill in the registration forms again.’

What GPs are saying

Sometimes there are angry scenes at reception, which is distressing for everyone.

The distress to the patient is the most significant part of this intervention.

The majority of patients removed recently seem to be women who have not had a smear test in the last five years.

Patients are often abusive when told they need to register first then get an appointment.

Angry patients

This tension can spill over into the practice itself, she adds: ‘We tend to get the blame. People do not realise that it is the health authority, not the practice that is doing this. Sometimes there are angry scenes at reception, which is a distressing situation for everyone.’

NHS England has told area teams to choose particular ‘cohorts’ to focus attention on. They include patients over the age of 100, addresses ‘with apparent multiple occupancy’ and patients who have missed an NHS appointment – for instance, for childhood or flu immunisations, or cytology.

But this approach is leading to some patients missing vital appointments and compromising care, say GPs.

Dr Tony Grewal, medical director of Londonwide LMCs – an area where 116,479 patients have been removed in a massive list-cleansing drive by the area team since last September – explains: ‘It is rare that a letter sent to somebody living in a multi-occupancy residency will get to the right person.

‘Very often these people don’t speak English as a first language, or don’t understand the importance of a letter they get from NHS England. Those seem to be the higher proportion of the removals. But they are also more likely to be a vulnerable group and it has been a concern for us.’

Dr Sanjeev Juneja, a GP in Rochester, Kent, said a three-year-old child on his list had been targeted by his area team because he failed to return a letter. ‘Vaccinations were missed as the child was not listed,’ he says.

Another GP, who wishes to remain anonymous, says that he found out a single mum with several children had been removed from his list after they were flagged as living in a house with ‘too many other people’.

He adds: ‘Unfortunately all of them had been under child protection at some point or other. My concern is that they could have been in a situation where the children could have been harmed, with no GP to pick that up.’

Burden of proof

As well as the effects on patient care, the programme is causing practices piles of administration to prove that patients are genuine.

NHS England sends two letters to suspected ‘ghost patients’ and if there is no response then a FP69 flag is put on the patient’s record in the GP’s computer system. A database of these flags is then sent to practices, which are given six months to check whether they are valid or not before the patient is removed from their list for good.
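As described, the FP69 process is a simple timed workflow: two unanswered letters raise the flag, and the practice then has six months to confirm the address before the patient is removed. A toy model of that lifecycle is below; the class, field names and dates are invented for illustration.

```python
# Toy model of the FP69 flag lifecycle described above; details invented.
from datetime import date, timedelta

class PatientRecord:
    def __init__(self, name: str):
        self.name = name
        self.fp69_deadline = None   # set when the flag is applied
        self.removed = False

    def apply_fp69(self, flagged_on: date) -> None:
        """Two letters went unanswered: start the six-month clock."""
        self.fp69_deadline = flagged_on + timedelta(days=182)

    def review(self, today: date, address_confirmed: bool) -> None:
        """Practice check: confirm the address or let the deadline lapse."""
        if self.fp69_deadline is None:
            return
        if address_confirmed:
            self.fp69_deadline = None   # flag lifted, patient retained
        elif today > self.fp69_deadline:
            self.removed = True         # removed from the list for good

record = PatientRecord("student living off campus")
record.apply_fp69(date(2014, 1, 6))
record.review(date(2014, 8, 1), address_confirmed=False)
print(record.removed)  # True: six months passed with no confirmation
```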

But for many practices, this is a huge amount of additional work. One practice manager in a university city, who did not wish to be named, says it is a particularly big issue for him, as students are a big focus of the list validation exercises.

He says the local area team wrote to the university to find out whether students were still living there. It used the say-so of the university offices to determine whether the students should be on the patient list or not.

He adds: ‘The last database I was sent had 1,700 names on it. They have not written to each of these people, they have acted on the hearsay of an administrator. They give us six months to tell them otherwise. We cannot afford to lose 1,700 patients.’

Because most of the students live off the university campus, the practice manager retained ‘around 80% to 90% of the patients on the database’ – but only after a huge administrative exercise. ‘It is a completely crude way of doing this,’ he adds.

Dr Jacqueline Marshall, a GP in Uxbridge, Middlesex, took drastic action to prevent her patients suffering. She says: ‘I’ve had to spend weekends in the practice obtaining evidence to prevent patients being removed erroneously. The list included the most vulnerable, for example older people, people with dementia, and mental health patients who did not reply to letters.’

It is not the first time there have been list cleansing drives. Pulse reported last year that PCTs were undertaking last-minute efforts to review their practice lists before handing over to NHS England, after the DH instigated a list-cleansing campaign in late 2011 to remove up to 2.5 million ‘ghost patients’ from GP lists across England.

But now, with NHS England facing extreme budget constraints, managers are facing even more pressure to trim their primary care budgets, and this scheme is an attractive way of reducing GP spend as it means they can cut £73 in annual global sum payments for each patient removed.

Savings

The first paragraph of NHS England’s policy states that ‘if a patient list is overstated, the contractor will receive more funding than it would ordinarily be entitled to and this presents a significant financial burden on NHS resources.’

It quotes 2010 figures from the Office for National Statistics (ONS), which estimated that the number of people on practice lists exceeds the national population by ‘approximately 2.8m people, which is equivalent to 5.2% inflation’. As a result, reducing the average percentage to 3% ‘would realise indicative savings of around £85m’.

And this money will be recouped directly from practice income. For a practice that has 200 patients taken off the list, this equates to a £15,000 reduction in income per year. Even wrongly removed patients that are later put back on patient lists could still have an impact on practice income as payments are based on the list size as recorded on a particular date.

NHS England says it is doing all it can to prevent distress to patients. An NHS spokesperson says: ‘NHS England takes all possible steps to contact patients and minimise the number who need to re-register – but there will always be some circumstances where patients do not respond and at that point we have to assume they have moved away and are therefore not in reality receiving services from that GP.’

Managers also took issue with the assumption that all patients who were removed and had to re-register were ‘wrongly removed’. The spokesperson added: ‘Patients can always re-register with that practice or another. It is therefore incorrect to say that they have been “wrongly removed”.’

But Dr Robert Morley, chair of the GPC’s contract and regulations subcommittee, says Pulse’s figures show the policy is ‘lunacy’ and that managers should be concentrating on other things, rather than chasing phantom cost-savings from general practice.

He says: ‘There will inevitably be swings and roundabouts, balancing “ghost” patients with unregistered patients. [But] to claim general practice is unfairly overfunded by the existence of ghosts is a complete fallacy.’

NHS England’s plans to validate GP lists

NHS England has issued a framework to local area teams of operating principles for list-cleansing drives. It says:

  • Area teams must work through practice registers as a continuous rolling programme of checks over a one- to three-year period.
  • This could include phased targeting of specific patient cohorts, including those who are part of screening programmes, patients aged 100 years or older, addresses with multiple occupancy, and university students.
  • Practices should screen proposed cohorts in advance of any letter-writing exercise.
  • When routine letters sent by practices are returned by Royal Mail, practices are given six months to provide confirmation of the patient’s address.
  • NHS England says ‘where confirmation… cannot be provided the patient is removed from the practice’.

Source: NHS England: Tackling list inflation for primary medical services, June 2013

http://www.england.nhs.uk/wp-content/uploads/2013/10/tack-infla.pdf

 

‘WE’VE HAD PATIENTS HERE FOR YEARS WHO HAVE SUDDENLY BEEN REMOVED’

From the FMS Global News Desk of Jeanne Hambleton 2 September 2014         PULSE TODAY

Dr Coral Jones on the damaging effects of list-cleansing

We are conscientious about cleaning up our lists. And yet several patients have been removed who have been here a long time. We do not know how it happened. They do not know how it happened. We have to go through a lot of work to get them back on the list again.

Patients are really upset. They come in and we say: ‘But you have moved.’ And they say: ‘We have not, what is this all about?’.

In theory, patients might miss out on care they need. We have a high list turnover so we only get alerted to the problem when someone comes in wanting an appointment.

If people do not come in, we would not know they are not registered. They might miss out on a screening invitation – anything could happen.

It has not got any better under NHS England. In terms of all the problems with payments to practices, this is just another thing on top. It is more work for us to do to maintain our list.

It does seem to affect patients randomly, but it is the patients who have been here for longer who are affected. People get upset, quite rightly – they think we have taken them off. We have had people who have been here for years and have suddenly been removed. It is probably easy for NHS England to target somewhere like this, where there is a higher list turnover.

In Hackney, the list turnover is 25% to 30%. This is part of the reason, along with deprivation and the need for translation services, that we need extra payments in east London: it is a huge amount of extra work, chasing records round other practices, which then have to be summarised and put on the system.

Dr Coral Jones is a GP in Tower Hamlets, east London

 

IN FULL: NHS ENGLAND’S GUIDE TO LOCAL AREA TEAMS ON LIST CLEANSING

From FMS Global News Desk of Jeanne Hambleton 2 September 2014               PULSE TODAY

NHS England released its guidance on ‘Tackling list inflation’ to its area teams in May 2013, in which it said £85m worth of savings could be made from cutting practice list sizes. To read the guidance, log on to the website below:

http://www.england.nhs.uk/wp-content/uploads/2013/10/tack-infla.pdf

To send your comments to PULSE TODAY, visit https://www.facebook.com/PulseToday.co.uk or https://twitter.com/pulsetoday.

For more on medical troubles for GPs and flu vaccine allocation, log on to jeannehambleton77.wordpress.com

See you tomorrow with hopefully some other news.  Jeanne

HUBBLE FINDS COMPANION STAR HIDDEN FOR 21 YEARS IN A SUPERNOVA’S GLARE

From FMS Global News Desk of Jeanne Hambleton Released: 9-Sep-2014  Source: Space Telescope Science Institute (STScI) Citations The Astrophysical Journal, July-2014

Newswise — Astronomers using NASA’s Hubble Space Telescope have discovered a companion star to a rare type of supernova. This observation confirms the theory that the explosion originated in a double-star system where one star fueled the mass-loss from the aging primary star.

This detection is the first time astronomers have been able to put constraints on the properties of the companion star in an unusual class of supernova called Type IIb. They were able to estimate the surviving star’s luminosity and mass, which provide insight into the conditions that preceded the explosion.

“A binary system is likely required to lose the majority of the primary star’s hydrogen envelope prior to the explosion. The problem is that, to date, direct observations of the predicted binary companion star have been difficult to obtain since it is so faint relative to the supernova itself,” said lead researcher Ori Fox of the University of California (UC) at Berkeley.

Astronomers estimate that a supernova goes off once every second somewhere in the universe. Yet they don’t fully understand how stars explode. Finding a “smoking gun” companion star provides important new clues to the variety of supernovae in the universe. “This is like a crime scene, and we finally identified the robber,” quipped team member Alex Filippenko, professor of astronomy at UC Berkeley. “The companion star stole a bunch of hydrogen before the primary star exploded.”

The explosion happened in the galaxy M81, which is about 11 million light-years away from Earth in the direction of the constellation Ursa Major (the Great Bear). Light from the supernova was first detected in 1993, and the object was designated SN 1993J. It was the nearest known example of this type of supernova, called a Type IIb, due to the specific characteristics of the explosion. For the past two decades astronomers have been searching for the suspected companion, thought to be lost in the glare of the residual glow from the explosion.

Observations made in 2004 at the W.M. Keck Observatory on Mauna Kea, Hawaii, showed circumstantial evidence for spectral absorption features that would come from a suspected companion. But the field of view is so crowded that astronomers could not be certain if the spectral absorption lines were from a companion object or from other stars along the line of sight to SN 1993J. “Until now, nobody was ever able to directly detect the glow of the star, called continuum emission,” Fox said.

The companion star is so hot that the so-called continuum glow is largely in ultraviolet (UV) light, which can only be detected above Earth’s absorbing atmosphere. “We were able to get that UV spectrum with Hubble. This conclusively shows that we have an excess of continuum emission in the UV, even after the light from other stars has been subtracted,” said team member Azalee Bostroem of the Space Telescope Science Institute (STScI), in Baltimore, Maryland.

When a massive star reaches the end of its lifetime, it burns through all of its material and its iron core collapses. The rebounding outer material is seen as a supernova. But there are many different types of supernovae in the universe. Some supernovae are thought to have exploded from a single-star system. Other supernovae are thought to arise in a binary system consisting of a normal star with a white dwarf companion, or even two white dwarfs. The peculiar class of supernova called Type IIb combines the features of a supernova explosion in a binary system with what is seen when single massive stars explode.

SN 1993J, and all Type IIb supernovae, are unusual because they do not have a large amount of hydrogen present in the explosion. The key question has been: how did SN 1993J lose its hydrogen? In the model for a Type IIb supernova, the primary star loses most of its outer hydrogen envelope to the companion star prior to exploding, and the companion continues to burn as a super-hot helium star.

“When I first identified SN 1993J as a Type IIb supernova, I hoped that we would someday be able to detect its suspected companion star,” said Filippenko. “The new Hubble data suggest that we have finally done so, confirming the leading model for Type IIb supernovae.”

The team combined ground-based data for the optical light and images from two Hubble instruments to collect ultraviolet light. They then constructed a multi-wavelength spectrum that matched what was predicted for the glow of a companion star.

Fox, Filippenko, and Bostroem say that further research will include refining the constraints on this star and definitively showing that the star is present.

The results were published in the July 20 issue of The Astrophysical Journal.

The Hubble Space Telescope is a project of international cooperation between NASA and the European Space Agency. NASA’s Goddard Space Flight Center in Greenbelt, Md., manages the telescope. STScI conducts Hubble science operations. STScI is operated for NASA by the Association of Universities for Research in Astronomy, Inc., in Washington, D.C.


HIDDEN STAR. PIC.2.p1438cw

This illustration shows the key steps in the evolution of a Type IIb supernova. Panel 1: Two very hot stars orbit about each other in a binary system. Panel 2: The slightly more massive member of the pair evolves into a bloated red giant and spills the hydrogen in its outer envelope onto the companion star. Panel 3: The more massive star explodes as a supernova. Panel 4: The companion star survives the explosion. Because it has locked up most of the hydrogen in the system, it is a larger and hotter star than when it was born. The fireball of the supernova fades. ARTIST’S ILLUSTRATION SCENARIO FOR TYPE IIB SN 1993J.

 


 

MYSTERIES OF SPACE DUST REVEALED

From the FMS Global News Desk of Jeanne Hambleton Released: 8-Sep-2014         Source: Argonne National Laboratory Citations Meteoritics & Planetary Science

 

Newswise — The first analysis of space dust collected by a special collector onboard NASA’s Stardust mission and sent back to Earth for study in 2006 suggests the tiny specks open a door to studying the origins of the solar system and possibly the origin of life itself.

This is the first time synchrotron light sources have been used to look at microscopic particles caught in the path of a comet. The Advanced Photon Source, the Advanced Light Source, and the National Synchrotron Light Source at the U.S. Department of Energy’s Argonne, Lawrence Berkeley and Brookhaven National Laboratories, respectively, enabled analysis that showed that the dust, which likely originated from beyond our solar system, is more complex in composition and structure than previously imagined.

“Fundamentally, the solar system and everything in it was ultimately derived from a cloud of interstellar gas and dust,” says Andrew Westphal, physicist at the University of California, Berkeley’s Space Sciences Laboratory and lead author on the paper published this month in Science titled “Evidence for interstellar origin of seven dust particles collected by the Stardust spacecraft”. “We’re looking at material that’s very similar to what made our solar system.”

The analysis tapped a variety of microscopy techniques including those that rely on synchrotron radiation. “Synchrotrons are extremely bright light sources that enable light to be focused down to the small size of these particles while providing unprecedented chemical identification,” said Hans Bechtel, principal scientific engineering associate at Berkeley Lab.

The APS helped the researchers create a map of the locations and abundances of the different elements in each tiny particle, said Argonne physicist Barry Lai, who was involved with the analysis at the APS.

“The Advanced Photon Source was unique in the capability to perform elemental imaging and analysis on such small particles — just 500 nanometers or less across,” Lai said. (That is so small that about 1,000 of them could fit in the period at the end of a sentence.) “This provided an important screening tool for differentiating the origin of each particle.”
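As a rough check of that comparison (assuming a printed full stop is about half a millimetre across, a figure not stated in the source):

period_diameter_m = 0.5e-3                   # assumed width of a printed full stop
particle_size_m = 500e-9                     # 500 nanometers, as quoted above
print(period_diameter_m / particle_size_m)   # ~1,000 particles side by side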

Researchers used the scanning transmission x-ray and Fourier transform infrared microscopes at the ALS. The X-ray microscope ruled out tens of interstellar dust candidates because they contained aluminum, which is not found in space, or other substances possibly knocked off the spacecraft and embedded in the aerogel. The infrared spectroscopy helped to identify sample contamination that could ultimately be subtracted later.

“Almost everything we’ve known about interstellar dust has previously come from astronomical observations — either ground-based or space-based telescopes,” says Westphal. But telescopes don’t tell you about the diversity or complexity of interstellar dust, he says. “The analysis of these particles captured by Stardust is our first glimpse into the complexity of interstellar dust, and the surprise is that each of the particles are quite different from each other.”

Westphal, who is also affiliated with Berkeley Lab’s Advanced Light Source, and his 61 co-authors, including researchers from the University of Chicago and the Chicago Field Museum of Natural History, found and analyzed a total of seven grains of possible interstellar dust and presented preliminary findings. All analysis was non-destructive, meaning that it preserved the structural and chemical properties of the particles. While the samples are suspected to be from beyond the solar system, he says, potential confirmation of their origin must come from subsequent tests that will ultimately destroy some of the particles.

“Despite all the work we’ve done, we have limited the analyses on purpose,” Westphal explains. “These particles are so precious. We have to think very carefully about what we do with each particle.”

Between 2000 and 2002, the Stardust spacecraft, on its way to meet a comet named Wild 2, exposed the special collector to the stream of dust coming from outside our solar system. The mission objectives were to catch particles from both the comet coma as well as from the interstellar dust stream. When both collections were complete, Stardust launched its sample capsule back to Earth, where it landed in northwestern Utah. The analyses of Stardust’s cometary sample have been widely published in recent years, and the comet portion of the mission has been considered a success.

This new analysis is the first time researchers have looked at the microscopic particles collected en route to the comet. Both types of dust were captured by the spacecraft’s sample-collection trays, made of an airy material called aerogel separated by aluminum foil. Three of the space-dust particles (a tenth the size of comet dust) either lodged or vaporized within the aerogel while four others produced pits in the aluminum foil leaving a rim residue that fit the profile of interstellar dust.

Much of the new study relied on novel methods and techniques developed specifically for handling and analyzing the fine grains of dust, which are more than a thousand times smaller than a grain of sand. These methods are described in twelve other papers available now and next week in the journal Meteoritics & Planetary Science.

One of the first research objectives was to simply find the particles within the aerogel. The aerogel panels were essentially photographed in tiny slices by changing the focus of the camera to different depths, which resulted in millions of images eventually stitched together into video. With the help of a distributed science project called Stardust@home, volunteer space enthusiasts from around the world combed through video, flagging tracks they believed were created by interstellar dust. More than 100 tracks have been found so far, but not all of these have been analyzed. Additionally, only 77 of the 132 aerogel panels have been scanned. Still, Westphal doesn’t expect more than a dozen particles of interstellar dust will be seen.
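The article does not describe the software itself, but the idea of turning focus slices into a reviewable “movie” can be sketched in a few lines of Python with OpenCV; the folder name and file pattern below are hypothetical.

# Illustrative sketch (not the Stardust@home pipeline): stitch a stack of
# focus slices of one aerogel region into a video, so a volunteer can
# "focus through" the gel looking for particle tracks. Assumes a folder of
# images named slice_000.png, slice_001.png, ... ordered by focal depth.

import glob
import cv2  # OpenCV

slices = sorted(glob.glob("aerogel_region_42/slice_*.png"))  # hypothetical path
first = cv2.imread(slices[0])
height, width = first.shape[:2]

writer = cv2.VideoWriter("focus_movie.avi",
                         cv2.VideoWriter_fourcc(*"MJPG"),
                         10,                 # frames per second
                         (width, height))
for path in slices:
    writer.write(cv2.imread(path))          # one focal depth per video frame
writer.release()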

The researchers found that the two larger dust particles from the aerogel have a fluffy composition, similar to that of a snowflake, says Westphal. Models of interstellar dust particles had suggested a single, dense particle, so the lighter structure was unexpected. They also contain crystalline material called olivine, a mineral made of magnesium, iron, and silicon, which suggest the particles came from disks or outflows from other stars and were modified in the interstellar medium.

Three of the particles found in the aluminum foil were also complex, and contain sulfur compounds, which some astronomers believe should not occur in interstellar dust particles. Study of further foil-embedded particles could help explain the discrepancy.

Westphal says that team will continue to look for more tracks as well as take the next steps in dust analysis. “The highest priority is to measure relative abundance of three stable isotopes of oxygen,” he says. The isotope analysis could help confirm that the dust originated outside the solar system, but it’s a process that would destroy the precious samples. In the meantime, Westphal says, the team is honing their isotope analysis technique on artificial dust particles called analogs. “We have to be super careful,” he says. “We’re doing a lot of work on analogs to practice, practice, practice.”

The Advanced Photon Source is currently in the process of designing a proposed upgrade that would increase its ability to do such analyses, Lai said.

“With the APS upgrade, we would be able to increase the spatial resolution and to image faster — effectively scanning a larger area of the aerogel in a shorter time,” he said.

Since just over half of the aerogels have been checked for particles, there are plenty more waiting to be analyzed.

This research was supported by NASA, the Klaus Tschira Foundation, the Tawani Foundation, the German Science Foundation, and the Funds for Scientific Research, Flanders, Belgium. In addition to ALS, the research made use of the National Synchrotron Light Source at Brookhaven National Laboratory and the Advanced Photon Source at Argonne. All three x-ray light sources are DOE Office of Science User Facilities.

Lawrence Berkeley National Laboratory addresses the world’s most urgent scientific challenges by advancing sustainable energy, protecting human health, creating new materials, and revealing the origin and fate of the universe. Founded in 1931, Berkeley Lab’s scientific expertise has been recognized with 13 Nobel prizes. Argonne National Laboratory seeks solutions to pressing national problems in science and technology. The nation’s first national laboratory, Argonne conducts leading-edge basic and applied scientific research in virtually every scientific discipline. Argonne researchers work closely with researchers from hundreds of companies, universities, and federal, state and municipal agencies to help them solve their specific problems, advance America’s scientific leadership and prepare the nation for a better future. With employees from more than 60 nations, Argonne is managed by UChicago Argonne, LLC for the U.S. Department of Energy’s Office of Science.

The DOE Office of Science is the single largest supporter of basic research in the physical sciences in the United States and is working to address some of the most pressing challenges of our time.

 

 

BACTERIA HARBOR SECRET WEAPONS AGAINST ANTIBIOTICS

Hidden genetic complexity helps microbes evolve antibiotic resistance in diverse and unexpected ways

From FMS Global News Desk of Jeanne Hambleton Embargoed: 9-Sep-2014
Source: American Institute of Physics (AIP) Citations Biomicrofluidics

 

Newswise — WASHINGTON, D.C., September 9, 2014 – The ability of pathogenic bacteria to evolve resistance to antibiotic drugs poses a growing threat to human health worldwide. And scientists have now discovered that some of our microscopic enemies may be even craftier than we suspected, using hidden genetic changes to promote rapid evolution under stress and developing antibiotic resistance in more ways than previously thought. The results appear in a new paper in the journal Biomicrofluidics, from AIP Publishing.

In the paper, researchers from Princeton University in New Jersey describe how they observed two similar strains of E. coli bacteria quickly developing similar levels of antibiotic resistance using surprisingly different genetic mutations. Developing different solutions to the same problem shows the bacteria have a diverse arsenal of genetic “weapons” they can develop to fight antibiotics, potentially making them more versatile and difficult to defeat.

“Bacteria are clever – they have hidden ways to respond to stress that involve re-sculpting their genomes,” said Robert Austin, a biophysicist at Princeton who led the research team.

Realizing how effectively bacteria can survive drugs is a sobering thought, Austin said. “It teaches us that antibiotics have to be used much more carefully than they have been up to this point,” he said.

Accelerating Evolution

Austin and his colleagues specialize in developing unique, fluid-filled microstructures to test theories of bacterial evolution. Instead of using test tubes or Petri dishes – uniform environments that, Austin notes, exist only in the “ivied halls of academia” – the researchers build devices that they believe better mimic natural ecological niches.

The team uses a custom-made microfluidic device that contains approximately 1,000 connected microhabitats in which populations of bacteria grow. The device generates complex gradients of food and antibiotic drugs similar to what might be found in natural bacterial habitats like the gut or other compartments inside a human body.

“In complex environments the emergence of resistance can be far more rapid and profound than would be expected from test tube experiments,” Austin said.
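To make that claim concrete, here is a toy Python model, emphatically not the Princeton device or its data, of a row of connected microhabitats with a drug gradient: mutants with slightly higher resistance keep a foothold one habitat further along, so resistance ratchets up far faster than it would in a single well-mixed tube.

# Toy model (illustrative only) of resistance spreading along a drug gradient:
# cells survive where their resistance covers the local dose, occasionally
# mutate to slightly higher resistance, and migrate to neighbouring habitats.

import random

N_HABITATS = 20
dose = [i / N_HABITATS for i in range(N_HABITATS)]   # gradient: 0 .. ~1
pop = [{0.0} if d == 0 else set() for d in dose]     # sensitive cells at drug-free end

random.seed(1)
for generation in range(200):
    new = [set(cell) for cell in pop]
    for i, cell in enumerate(pop):
        for r in cell:
            if random.random() < 0.1:                # rare mutation event
                new[i].add(min(1.0, r + 0.1))        # small resistance gain
            for j in (i - 1, i + 1):                 # migration to neighbours
                if 0 <= j < N_HABITATS:
                    new[j].add(r)
    # selection: only lineages whose resistance covers the local dose survive
    pop = [set(r for r in cell if r >= dose[i]) for i, cell in enumerate(new)]

occupied = max(i for i, cell in enumerate(pop) if cell)
print(f"After 200 generations, resistant lineages reach habitat {occupied}")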

From previous experiments with the complex microfabricated devices, the researchers knew that some ordinary, “wild-type” strains of E. coli bacteria quickly evolved antibiotic resistance. They wondered if a mutant strain called GASP, which reproduces more quickly with limited nutrients than the wild type, would develop the same type of antibiotic resistance when exposed to the same drug.

Secret Weapons Revealed

By sequencing the genomes of wild type and GASP bacterial colonies that had been exposed to the antibiotic ciprofloxacin (Cipro), the researchers found different genetic mutations could lead to similar levels of antibiotic resistance. For example, two different mutant strains emerged: one of the antibiotic-resistant GASP strains evolved in such a way that it no longer needed to make biofilms in order to survive stress. It did so by “borrowing” a piece of leftover DNA from a virus that infects bacteria. The other strain did not carry out this excision, indicating that, in evolution, strains can hedge their bets.

Viruses routinely inject their own DNA into bacteria and sometimes DNA sequences remain that no longer seem to have any function in terms of viral replication. Under normal circumstances the leftover DNA may neither help nor hinder the bacteria, but in times of stress the bacteria can use the new DNA to rapidly evolve antibiotic resistant mutations.

The results demonstrate the subtlety and diversity of the tools that bacteria have to fight stress, said Austin. He wonders whether our remaining effective methods for killing bacteria, such as using ethanol to disinfect surfaces, are also vulnerable, and his team plans to test whether bacteria in their devices can evolve ethanol resistance.

The article, “You cannot tell a book by looking at the cover: cryptic complexity in bacterial evolution,” is authored by Qiucen Zhang, Julia Bos, Grigory Tarnopolskiy, James C. Sturm, Hyunsung Kim, Nader Pourmand, and Robert H. Austin. It will be published in the journal Biomicrofluidics on September 9, 2014.

The authors of this paper are affiliated with Princeton University, the University of Illinois, Urbana-Champaign, and the University of California, Santa Cruz.

ABOUT THE JOURNAL
Biomicrofluidics is an online-only journal from AIP Publishing designed to rapidly disseminate research that elucidates fundamental physicochemical mechanisms associated with microfluidic, nanofluidic, and molecular/cellular biophysical phenomena in addition to novel microfluidic and nanofluidic techniques for diagnostic, medical, biological, pharmaceutical, environmental, and chemical applications.

E. coli PIC. BMF-Austin-Photo-GelCompetition

This image shows two strains of E. coli bacteria (wild-type and GASP) competing with each other as they grow out on a flat surface. The wild-type bacteria appear green on the surface while the GASP bacteria appear red. When researchers added the bacteria to more complex microfluidic devices they observed the rapid evolution of different mutations for antibiotic resistance.

Back tomorrow, Jeanne

NASA’S ORION SPACECRAFT NEARS COMPLETION, READY FOR FUELING

From the FMS Global News Desk of Jeanne Hambleton September 11, 2014 NASA.GOV. NEWS

NASA is making steady progress on its Orion spacecraft, completing several milestones this week at NASA’s Kennedy Space Center in Florida in preparation for the capsule’s first trip to space in December.

Engineers finished building the Orion crew module, attached it and the already-completed service module to the adapter that will join Orion to its rocket and transported the spacecraft to a new facility for fueling.

“Nothing about building the first of a brand new space transportation system is easy,” said Mark Geyer, Orion Program manager. “But the crew module is undoubtedly the most complex component that will fly in December. The pressure vessel, the heat shield, parachute system, avionics — piecing all of that together into a working spacecraft is an accomplishment. Seeing it fly in three months is going to be amazing.”

Finishing the Orion crew module marks the completion of all major components of the spacecraft. The other two major elements — the inert service module and the launch abort system — were completed in January and December, respectively. The crew module was attached to the service module in June to allow for testing before the finishing touches were put on the crew module.

The adapter that will connect Orion to the United Launch Alliance (ULA) Delta IV Heavy rocket was built by NASA’s Marshall Space Flight Center in Huntsville, Alabama. It is being tested for use on the agency’s Space Launch System rocket for future deep space missions.

NASA, Orion’s prime contractor Lockheed Martin, and ULA managers oversaw the move of the spacecraft Thursday from the Neil Armstrong Operations and Checkout Building to the Payload Hazardous Servicing Facility at Kennedy, where it will be fueled with ammonia and hypergolic propellants for its flight test. Once fueling is complete, the launch abort system will be attached. At that point, the spacecraft will be complete and ready to stack on the Delta IV Heavy.

Orion is being built to send humans farther than ever before, including to an asteroid and Mars. Although the spacecraft will be uncrewed during its December flight test, the crew module will be used to transport astronauts safely to and from space on future missions. Orion will provide living quarters for up to 21 days, while longer missions will incorporate an additional habitat to provide extra space. Many of Orion’s critical safety systems will be evaluated during December’s mission, designated Exploration Flight Test-1, when the spacecraft travels about 3,600 miles into space.

Engineers and Technicians Installed Protective Shell on NASA’s Orion Spacecraft

The heat shield on NASA’s Orion spacecraft gets all the glory when it comes to protecting the spacecraft from the intense temperature of reentry. Although the blunt, ablative shield will see the highest temperatures – up to 4,000 degrees Fahrenheit on its first flight this December – the rest of the spacecraft is hardly left in the cold.

Engineers and technicians at NASA’s Kennedy Space Center have finished installing the cone-shaped back shell of Orion’s crew module – the protective cover on the sides that make up Orion’s upside down cone shape. It’s made up of 970 black tiles that should look very familiar – the same tiles protected the belly of the space shuttles as they returned from space.

But the space shuttles traveled at 17,000 miles per hour, while Orion will be coming in at 20,000 miles per hour on this first flight test. The faster a spacecraft travels through Earth’s atmosphere, the more heat it generates. So even though the hottest the space shuttle tiles got was about 2,300 degrees Fahrenheit, the Orion back shell could get up to 3,150 degrees, despite being in a cooler area of the vehicle.
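The steepness of that relationship is worth spelling out. A common back-of-envelope rule (a Sutton-Graves-type scaling, not NASA’s actual thermal model) puts stagnation-point convective heating roughly proportional to the cube of entry speed:

# q ~ sqrt(rho / r_n) * v**3  (Sutton-Graves-type scaling; illustrative only)
shuttle_mph = 17_000
orion_mph = 20_000
print((orion_mph / shuttle_mph) ** 3)   # ~1.63 -- about 63% more heating
# Actual tile temperatures also depend on location on the vehicle, geometry
# and re-radiation, which is why the quoted figures (2,300 F vs 3,150 F)
# do not scale in exactly this ratio.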

And heat isn’t the only concern. While in space, Orion will be vulnerable to the regular onslaught of micrometeoroid orbital debris. Although micrometeoroid orbital debris is too tiny to track, and therefore avoid, it can do immense damage to a spacecraft – for instance, it could punch through a back shell tile. Below the tiles, the vehicle’s structure doesn’t often get hotter than about 300 degrees Fahrenheit, but if debris breached the tile, the heat surrounding the vehicle during reentry could creep into the hole it created, possibly damaging the vehicle.

Debris damage can be repaired in space with techniques pioneered after the space shuttle Columbia accident. A good deal of information was gathered then on what amount of damage warranted a repair. But the heating environment Orion will experience is different than the shuttle’s was, and the old models don’t apply.

Engineers will begin verifying new models when Orion returns from its first flight test this December. Before installing the back shell, engineers purposely drilled long, skinny holes into two tiles to mimic damage from a micrometeoroid hit. Each hole is 1 inch wide; one is 1.4 inches deep and the other is 1 inch deep. The two tiles with these mock micrometeoroid hits are 1.47 inches thick and are located on the opposite side of the back shell from Orion’s windows and reaction control system jets.

“We want to know how much of the hot gas gets into the bottom of those cavities,” said Joseph Olejniczak, manager of Orion aerosciences. “We have models that estimate how hot it will get to make sure it’s safe to fly, but with the data we’ll gather from these tiles actually coming back through Earth’s atmosphere, we’ll make new models with higher accuracy.”

A better understanding of the heating environment for damage on Orion’s heat shield will inform future decisions about what kind of damage may require a repair in space.

Orion protective shield PIC.1.2014-3489-m

 

 Inside the Operations and Checkout Building high bay at NASA’s Kennedy Space Center in Florida, technicians dressed in clean-room suits install a back shell tile panel onto the Orion crew module. NASA’s Orion crew capsule is the first spacecraft in history capable of taking humans to multiple destinations within deep space. Orion’s versatile design will allow it to safely carry crew, provide emergency abort capability, sustain the crew during long-duration missions and provide safe reentry from multiple destinations in the solar system. Orion’s first flight test, Exploration Flight Test-1 (EFT-1), is scheduled to launch from Cape Canaveral Air Force Station in Florida in fall 2014. The next mission, Exploration Mission-1, will have an uncrewed Orion atop the SLS and will be the first fully integrated mission of the deep space program. Image Credit: NASA/Dimitri Gerondidakis.

 

NASA’s Exploration Systems Development is building the agency’s crew vehicle, next generation rocket, and ground systems and operations to enable human exploration throughout deep space — a capability the world has not had for more than 40 years.

The Orion spacecraft, Space Launch System (SLS) and a modernized Kennedy spaceport will support missions to multiple deep space destinations extending beyond our Moon, to Mars and across our solar system. This innovative approach aligns with NASA’s bold new mission to design and build the capability to extend human existence to deep space.

The Rocket – Space Launch System

NASA’s Space Launch System (SLS) is the first rocket and launch system capable of powering humans, habitats and support systems to deep space — providing new opportunities for human and scientific exploration far beyond low-Earth orbit.

SLS will carry the Orion spacecraft, as well as cargo, equipment and scientific payloads, into deep space. It will evolve from an initial 70-metric-ton lift capability to an enhanced 130-metric-ton capability, giving it a greater payload capacity than any launch vehicle previously manufactured in the United States. SLS has already produced flight hardware in support of the 2014 Exploration Flight Test-1 (EFT-1) mission, and the rocket will go on to launch Exploration Mission-1 (EM-1).

ORION PIC 2gsdo_sept2014_orionusssalvor_0

 At Naval Base San Diego in California, a crane is used Sept. 11 to transfer the Orion boilerplate test vehicle into the USS Salvor, a Safeguard-class rescue and salvage ship that will be used for Underway Recovery Test 4A. The ship will head out to sea for four days to test crew module crane recovery operations. NASA, Lockheed Martin and the U.S. Navy are conducting the test to prepare for recovery of the Orion crew module on its return from a deep space mission. Image Credit: NASA/Kim Shiflett

 

NASA, Navy Prepare for Orion Spacecraft to Make a Splash

NASA ORION SEA RESCUE PIC.3.2014-3320-m_0

U.S. Navy personnel use a rigid hull inflatable boat to approach the Orion boilerplate test article during an evolution of the Underway Recovery Test 2 in the Pacific Ocean off the coast of San Diego, California on Aug. 2, 2014. Image Credit: NASA/Kim Shiflett

A team of technicians, engineers, sailors and divers just wrapped up a successful week of testing and preparing for various scenarios that could play out when NASA’s new Orion spacecraft splashes into the Pacific Ocean following its first space flight test in December.

NASA and Orion prime contractor Lockheed Martin teamed up with the U.S. Navy and the Defense Department’s Human Space Flight Support Detachment 3 to try different techniques for recovering the 20,500-pound spacecraft safely during this second “underway recovery test.”

To address some of the lessons learned from the first recovery test in February, the team brought new hardware to test and tested a secondary recovery method that employs an onboard crane to recover Orion, as an alternative to using the well deck recovery method, which involves the spacecraft being winched into a flooded portion of the naval vessel.

“Anchorage provided a unique, validated capability to support NASA’s request for operational support without adversely impacting the Navy’s primary warfighting mission,” said Cmdr. Joel Stewart, commanding officer of the Navy vessel.

“This unique mission gave Anchorage sailors an opportunity to hone their skills for the routine missions of recovering vehicles in the well deck and operating rigid-hulled inflatable boats in the open water while supporting NASA. The testing with NASA was a success and Anchorage sailors continue to raise the bar, completing missions above and beyond any expectations.”

Screen Shot 2014-09-13 at 19.32.12

 The Orion boilerplate test vehicle is slightly lifted by crane from the water to test the proof of concept basket lift method during an evolution of the Underway Recovery Test 2 near the USS Anchorage in the Pacific Ocean off the coast of San Diego, California on Aug. 3, 2014. Image Credit: NASA/Kim Shiflett

Homeward Bound

After enduring the extreme environment of space, Orion will blaze back through Earth’s atmosphere at speeds near 20,000 mph and temperatures approaching 4,000 degrees Fahrenheit. Its inaugural journey will end in the Pacific, off the Southern California coast, where a U.S. Navy ship will be waiting to retrieve it and return it to shore.

“We learned a lot about our hardware, gathered good data, and the test objectives were achieved,” said Mike Generale, NASA recovery operations manager in the Ground Systems Development and Operations Program.

“We were able to put Orion out to sea and safely bring it back multiple times. We are ready to move on to the next step of our testing with a full dress rehearsal landing simulation on the next test.”

Back tomorrow all being well. Jeanne

NASA’S MARS CURIOSITY ROVER ARRIVES AT MARTIAN MOUNTAIN

From the FMS Global News Desk of Jeanne Hambleton NASA.GOV. NEWS            Released September 11 2014 by Dwayne Brown

Mars & Martian Mission PIC. 14-245_1

 This image shows the old and new routes of NASA’s Mars Curiosity rover and is composed of color strips taken by the High Resolution Imaging Science Experiment, or HiRISE, on NASA’s Mars Reconnaissance Orbiter. This new route provides excellent access to many features in the Murray Formation. And it will eventually pass by the Murray Formation’s namesake, Murray Buttes, previously considered to be the entry point to Mt. Sharp. Image Credit: NASA/JPL-Caltech/Univ. of Arizona

 

NASA’s Mars Curiosity rover has reached the Red Planet’s Mount Sharp, a Mount-Rainier-size mountain at the center of the vast Gale Crater and the rover mission’s long-term prime destination.

“Curiosity now will begin a new chapter from an already outstanding introduction to the world,” said Jim Green, director of NASA’s Planetary Science Division at NASA Headquarters in Washington. “After a historic and innovative landing along with its successful science discoveries, the scientific sequel is upon us.”

Curiosity’s trek up the mountain will begin with an examination of the mountain’s lower slopes. The rover is starting this process at an entry point near an outcrop called Pahrump Hills, rather than continuing on to the previously-planned, further entry point known as Murray Buttes. Both entry points lie along a boundary where the southern base layer of the mountain meets crater-floor deposits washed down from the crater’s northern rim.

“It has been a long but historic journey to this Martian mountain,” said Curiosity Project Scientist John Grotzinger of the California Institute of Technology in Pasadena. “The nature of the terrain at Pahrump Hills and just beyond it is a better place than Murray Buttes to learn about the significance of this contact. The exposures at the contact are better due to greater topographic relief.”

After 2 years and nearly 9 kilometers of driving, NASA’s Mars Curiosity has arrived at the base of Mount Sharp.

The decision to head uphill sooner, instead of continuing to Murray Buttes, also draws from improved understanding of the region’s geography provided by the rover’s examinations of several outcrops during the past year. Curiosity currently is positioned at the base of the mountain along a pale, distinctive geological feature called the Murray Formation. Compared to neighboring crater-floor terrain, the rock of the Murray Formation is softer and does not preserve impact scars as well. As viewed from orbit, it is not as well-layered as other units at the base of Mount Sharp.

Curiosity made its first close-up study last month of two Murray Formation outcrops, both revealing notable differences from the terrain explored by Curiosity during the past year. The first outcrop, called Bonanza King, proved too unstable for drilling, but was examined by the rover’s instruments and determined to have high silicon content. A second outcrop, examined with the rover’s telephoto Mast Camera, revealed a fine-grained, platy surface laced with sulfate-filled veins.

While some of these terrain differences are not apparent in observations made by NASA’s Mars orbiters, the rover team still relies heavily on images taken by the agency’s Mars Reconnaissance Orbiter (MRO) to plan Curiosity’s travel routes and locations for study.

For example, MRO images helped the rover team locate mesas that are over 60 feet (18 meters) tall in an area of terrain shortly beyond Pahrump Hills, which reveal an exposure of the Murray Formation uphill and toward the south. The team plans to use Curiosity’s drill to acquire a sample from this site for analysis by instruments inside the rover. The site lies at the southern end of a valley Curiosity will enter this week from the north.

Though this valley has a sandy floor the length of two football fields, the team expects it will be an easier trek than the sandy-floored Hidden Valley, where last month Curiosity’s wheels slipped too much for safe crossing.


Curiosity reached its current location after its route was modified earlier this year in response to excessive wheel wear. In late 2013, the team realized a region of Martian terrain littered with sharp, embedded rocks was poking holes in four of the rover’s six wheels.

This damage accelerated the rate of wear and tear beyond that for which the rover team had planned. In response, the team altered the rover’s route to a milder terrain, bringing the rover farther south, toward the base of Mount Sharp.

“The wheels issue contributed to taking the rover farther south sooner than planned, but it is not a factor in the science-driven decision to start ascending here rather than continuing to Murray Buttes first,” said Jennifer Trosper, Curiosity Deputy Project Manager at NASA’s Jet Propulsion Laboratory (JPL) in Pasadena, California.

“We have been driving hard for many months to reach the entry point to Mount Sharp,” Trosper said. “Now that we have made it, we will be adjusting the operations style from a priority on driving to a priority on conducting the investigations needed at each layer of the mountain.”

After landing inside Gale Crater in August 2012, Curiosity fulfilled in its first year of operations its major science goal of determining whether Mars ever offered environmental conditions favorable for microbial life. Clay-bearing sedimentary rocks on the crater floor, in an area called Yellowknife Bay, yielded evidence of a lake bed environment billions of years ago that offered fresh water, all of the key elemental ingredients for life, and a chemical source of energy for microbes.

NASA’s Mars Science Laboratory Project continues to use Curiosity to assess ancient habitable environments and major changes in Martian environmental conditions. The destinations on Mount Sharp offer a series of geological layers that recorded different chapters in the environmental evolution of Mars.

The Mars Science Laboratory Project is one element of NASA’s ongoing preparation for a human mission to the Red Planet in the 2030s. JPL built Curiosity and manages the project and MRO for NASA’s Science Mission Directorate in Washington.

 

SPACE STATION EXPEDITION 40 CREW RETURNS TO EARTH, LANDS SAFELY IN KAZAKHSTAN

CREW Returns to Earth PIC Space Station. trio_1_0

 A trio of International Space Station crew members returned to Earth and landed in Kazakhstan at 10:23 p.m. EDT on Sept. 10, 2014 (8:23 a.m., Sept. 11, in local time) after spending 169 days aboard the orbital laboratory. Seen left to right, Oleg Artemyev and Alexander Skvortsov of the Russian Federal Space Agency (Roscosmos) and NASA’s Steve Swanson were examined by medical personnel after being removed from their Russian Soyuz spacecraft. Image Credit: NASA Television

 

Three crew members from the International Space Station (ISS) returned to Earth Wednesday after 169 days of science and technology research in space, including a record 82 hours of research in a single week, which happened in July.

Expedition 40 Commander Steve Swanson of NASA and Flight Engineers Alexander Skvortsov and Oleg Artemyev of the Russian Federal Space Agency (Roscosmos) touched down southeast of the remote town of Dzhezkazgan in Kazakhstan at 10:23 p.m. EDT Wednesday, Sept. 10 (8:23 a.m., Sept. 11, in Dzhezkazgan).

During their time aboard the space station, the crew members participated in a variety of research focusing on Earth remote sensing, human behavior and performance and studies of bone and muscle physiology.

One of several key research focus areas during Expedition 40 was human health management for long duration space travel as NASA and Roscosmos prepare for two crew members to spend one year aboard the orbiting laboratory in 2015.

During their time on the station, the crew members orbited Earth more than 2,700 times, traveled more than 71.7 million miles and welcomed five cargo spacecraft. Two Russian ISS Progress cargo spacecraft docked to the station bringing tons of supplies in April and July. The fifth and final European Space Agency (ESA) Automated Transfer Vehicle also launched to the station in July with the spacecraft bearing the name of Belgian physicist Georges Lemaitre, who is considered the father of the big-bang theory.
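Those mission statistics are easy to sanity-check with typical ISS orbit values (the ~92.9-minute period and ~26,600-mile orbit circumference below are standard approximations, not figures from the article):

mission_days = 169
orbit_period_min = 92.9                  # assumed typical ISS orbital period
orbit_circumference_miles = 26_600       # assumed orbit length at ~250 mi altitude

orbits = mission_days * 24 * 60 / orbit_period_min
print(round(orbits))                                      # ~2,620 orbits
print(round(orbits * orbit_circumference_miles / 1e6, 1)) # ~69.7 million miles
# Same ballpark as the quoted "more than 2,700 times" and "71.7 million miles".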

SpaceX launched a Dragon cargo spacecraft to the station in April, the company’s third of at least 12 planned commercial resupply missions. In July, Orbital Sciences’ Cygnus spacecraft completed its third of at least eight resupply missions scheduled through 2016 under NASA’s Commercial Resupply Services contract.

During his time on the complex, Swanson ventured outside the confines of the space station for a spacewalk to replace a backup computer relay box that unexpectedly failed. Skvortsov and Artemyev conducted two spacewalks during Expedition 40, totaling 12 hours and 34 minutes.

The space station is more than a scientific research platform. It also serves as a test bed to demonstrate new technology. Even routine tasks, such as monitoring and operating the carbon dioxide removal system, provide valuable data for next-generation life support systems. Carbon dioxide removal from the pressurized compartments of the station proved to work differently in space than predicted by ground tests.

The crew also saw the arrival of the Haptics-1 experiment, part of an effort to develop technology that would allow an astronaut in orbit to control a robot as it explores its target, such as an asteroid or Mars, during future human exploration missions.

Having completed his third space station mission, Swanson now has spent a total of 196 days in space. Skvortsov has accumulated 345 days in space on two flights, and Artemyev accrued 169 days in space on his first mission.

Expedition 41 now is operating aboard the station with Max Suraev of Roscosmos in command. Suraev and his crewmates, Flight Engineers Reid Wiseman of NASA and Alexander Gerst of ESA, will tend to the station as a three-person crew until the arrival in two weeks of three new crew members: Barry Wilmore of NASA and Alexander Samokutyaev and Elena Serova of Roscosmos. Wilmore, Samokutyaev and Serova are scheduled to launch from Kazakhstan Thursday, Sept. 25.

 

HOW EVOLUTIONARY PRINCIPLES COULD HELP SAVE OUR WORLD

Battling modern threats to food, land and health with applied evolutionary biology

From FMS Global News Desk of Jeanne Hambleton September 11 2014                    National Science Foundation  (NSF)

 

The age of the Anthropocene–the scientific name given to our current geologic age–is dominated by human impacts on our environment. A warming climate. Increased resistance of pathogens and pests. A swelling population. Coping with these modern global challenges requires application of what one might call a more-ancient principle: evolution.

That is the recommendation of a diverse group of researchers, in a paper published today in the online version of the journal Science. A majority of the nine authors on the paper have received funding from the National Science Foundation (NSF).

“Evolution is not just about the past anymore, it is about the present and the future,” said Scott Carroll, an evolutionary ecologist at University of California-Davis and one of the paper’s authors. Addressing societal challenges–food security, emerging diseases, biodiversity loss–in a sustainable way is “going to require evolutionary thinking.”

The paper reviews current uses of evolutionary biology and recommends specific ways the field can contribute to the international sustainable development goals (SDGs), now in development by the United Nations.

Evolutionary biology has “tremendous potential” to solve many of the issues highlighted in the SDGs, said Peter Søgaard Jørgensen, another Science author from the University of Copenhagen’s Center for Macroecology, Evolution and Climate. The field accounts for how pests may adapt rapidly to our interventions and how vulnerable species struggle to adapt to global change. The authors even chose this release date to coincide with the upcoming meeting of the UN General Assembly, which starts September 24.

Their recommendations include gene therapies to treat disease, choosing drought-and-flood-resistant crop varieties and altering conservation strategies to protect land with high levels of genetic diversity.

“Many human-engineered solutions to societal problems have turned out to have a relatively short useful life because evolution finds ways around them,” said George Gilchrist, program officer in NSF’s Division of Environmental Biology, which funded many of the Science authors.

“Carroll and colleagues propose turning the tables and using evolutionary processes to develop more robust and dynamic solutions.”

Applied evolutionary biology just recently made the leap from an academic discipline to a more-practical one, spurred by an effort within the community to better synthesize and share research insights. And–above all–increasing environmental pressures.

“The fact that we are changing the world means that evolutionary processes are going to be affected,” said Thomas Smith, of the Department of Ecology and Evolutionary Biology at the University of California, Los Angeles (UCLA) and another Science author.

The question is, according to Smith: Do we want to be engaged in this change, or not?

The paper also serves as a platform for establishing a cross-disciplinary field of applied evolutionary biology, Carroll said, and a way to promote the field as a path to sustainable development solutions.

“Evolutionary biology touches on many elements of the life sciences, from medicine to conservation biology to agriculture,” said Smith.

“And unfortunately, there has not been an effort to unify across these fields.”

This disconnect exists despite the use of evolutionary tactics in many disciplines: treating HIV with a cocktail of drugs, for example, to slow pathogen resistance. And the effects of evolution already swirl in the public consciousness–and spark debate.

Think of the arguments for and against genetically modified crops, or warnings about the increasing price of combating drug resistance (which costs more than $20 billion in the U.S. each year, according to the nonprofit Alliance for Prudent Use of Antibiotics).

Seldom are these issues described in an evolutionary context, said Smith. “We are missing an opportunity to educate the public about the importance of evolutionary principles in our daily lives.”

In conservation, evolutionary approaches are often disregarded because of the belief that evolution is beyond our ability to manage and too slow to be useful, according to a paper Smith co-authored in the journal Annual Review of Ecology, Evolution and Systematics (AREES).

That article, recently published online, also tackles applied evolution. It was co-authored by Carroll, University of Maine Biologist Michael Kinnison, Sharon Strauss–of the Department of Evolution and Ecology at University of California-Davis–and Trevon Fuller of UCLA’s Tropical Research Institute. All are NSF-funded. Kinnison and Strauss are also co-authors on the Science paper.

Yet contemporary evolution–what scientists are observing now–happens on timescales of months to a few hundred years, and can influence conservation management outcomes, according to the AREES paper.

Considering the evolutionary potential and constraints of species is also essential to combat “evolutionary mismatch.” This means the environment a species exists in, and the one it has evolved to exist in, no longer match.

Such disharmony can be “dire and costly,” the authors write in Science, citing the increasingly sedentary lifestyles and processed-food diets of modern humans. These lifestyles are linked with increasing rates of obesity, diabetes and cardiovascular disorders. Restoring our health requires more physical activity and fewer refined carbohydrates: “Diets and activity levels closer to those of the past, to which we are better adapted,” the Science paper said.

Implementing applied evolutionary principles often requires very careful thinking about social incentives, said Jørgensen. Public vaccination programs, for example, and pest control in crops often create tension between individual and public good.

Applied evolution therefore requires input from biologists, doctors and agriculturalists: “We are making a call for policy makers, decision-makers at all levels” to be involved, Jørgensen said.

Evolutionary biologists do not have all the answers, said Smith. And using applied evolution is not without risk. But we have reached a point “where we need to take risks in many cases,” he said. “We cannot just sit back and be overly conservative, or we are going to lose the game.”

The National Science Foundation (NSF) is an independent federal agency that supports fundamental research and education across all fields of science and engineering. In fiscal year (FY) 2014, its budget is $7.2 billion. NSF funds reach all 50 states through grants to nearly 2,000 colleges, universities and other institutions. Each year, NSF receives about 50,000 competitive requests for funding, and makes about 11,500 new funding awards. NSF also awards about $593 million in professional and service contracts yearly.

Back tomorrow with luck. Jeanne


USING GENOMICS TO FOLLOW THE PATH OF EBOLA

From the FMS Global News Desk of Jeanne Hambleton
NIH Posted on September 2, 2014 by Dr. Francis Collins
National Institutes of Health


Caption: Colorized scanning electron micrograph of filamentous Ebola virus particles (blue) budding from a chronically infected VERO E6 cell (yellow-green). Credit: National Institute of Allergy and Infectious Diseases, NIH

Long before the current outbreak of Ebola Virus Disease (EVD) began in West Africa, NIH-funded scientists had begun collaborating with labs in Sierra Leone and Nigeria to analyze the genome of, and develop diagnostic tests for, the virus that causes Lassa fever, a deadly hemorrhagic disease related to EVD. But when the outbreak struck in February 2014, an international team led by NIH Director’s New Innovator Awardee Pardis Sabeti quickly switched gears to focus on Ebola.

In a study just out in the journal Science [1], this fast-acting team reported that it has sequenced the complete genetic blueprints, or genomes, of 99 Ebola virus samples obtained from 78 patients in Sierra Leone. This new genomic data has revealed clues about the origin and evolution of the Ebola virus, and provided insights that may aid the development of better diagnostics and inform efforts to devise effective therapies and vaccines.

To help advance such research, Sabeti’s team deposited its Ebola genome sequences, even prior to publication, in a database run by NIH’s National Center for Biotechnology Information (NCBI), which means the data is immediately and freely available to researchers around the world. Access to this genomic data should accelerate international efforts to figure out ways of detecting, treating, and, ultimately, preventing infection by this deadly virus.
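For researchers who want to work with those sequences directly, NCBI’s public nucleotide database can be queried programmatically. Below is a minimal sketch using Biopython’s Entrez interface; the accession shown is one of the widely cited 2014 Sierra Leone genomes but should be treated as illustrative, and the email address is a placeholder (NCBI asks for a real contact address).

```python
# Minimal sketch: fetch one of the publicly deposited 2014 EBOV genomes
# from NCBI's nucleotide database via Biopython's Entrez interface.
# The accession (KM034562) is illustrative; substitute any accession of interest.
from Bio import Entrez, SeqIO

Entrez.email = "you@example.org"  # placeholder; NCBI requires a contact address

handle = Entrez.efetch(db="nucleotide", id="KM034562",
                       rettype="fasta", retmode="text")
record = SeqIO.read(handle, "fasta")
handle.close()

print(record.id, len(record.seq), "bases")  # EBOV genomes are roughly 19 kb
```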

Sophisticated genomic analyses by Sabeti and her colleagues show that the current Ebola Virus Disease outbreak most likely originated less than a year ago with a single person, starting at the funeral of a traditional healer in Guinea and eventually spreading to Sierra Leone and other nations.

In contrast, previous EVD outbreaks appear to have been fueled primarily by humans being directly exposed to infected fruit bats or other animals harboring the virus. These findings underscore the need to take proper precautions, as outlined by the Centers for Disease Control and Prevention, to prevent the spread of the virus from human to human.

As for possible implications of this work for diagnosis and treatment, Sabeti’s team found that the Ebola virus strain (EBOV) responsible for the 2014 outbreak in West Africa appears to have evolved from a strain that caused an outbreak in Central Africa in 2004, with changes occurring in nearly 400 regions of the genome.

These findings are important, because some of the tests currently used to diagnose EBOV might fail to work in the presence of these genetic changes—meaning they could give false negative test results in some people who are actually infected with the virus.

Now, thanks to Sabeti’s genomic profiling of EBOV, it should be possible to enhance diagnostic tests to pick up nearly all forms of the virus. Continued genomic sequencing will be critical to keep the diagnostics up-to-date, because the Ebola virus will continue to evolve over the course of the outbreak.
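The diagnostic concern is mechanical: sequence-based tests depend on probes or primers matching the viral genome, and mutations under a primer site can weaken or abolish the signal. The toy sketch below, using made-up sequences, illustrates the kind of mismatch scan involved; it is an illustration of the idea, not the assay used in the field.

```python
# Illustrative sketch (toy sequences): find the best alignment of a
# diagnostic primer in each genome and count mismatches there. Mutations
# under a primer site are one way an infected sample can test negative.
def best_primer_site(genome: str, primer: str) -> tuple[int, int]:
    """Return (position, mismatches) of the primer's best match in the genome."""
    best_pos, best_mm = 0, len(primer)
    for i in range(len(genome) - len(primer) + 1):
        mm = sum(a != b for a, b in zip(primer, genome[i:i + len(primer)]))
        if mm < best_mm:
            best_pos, best_mm = i, mm
    return best_pos, best_mm

primer = "ATGGACAGGC"                    # hypothetical primer sequence
genomes = {
    "strain_2004": "TTATGGACAGGCAA",     # perfect match under the primer
    "strain_2014": "TTATGGATAGGCAA",     # one mutation under the primer
}
for name, seq in genomes.items():
    pos, mm = best_primer_site(seq, primer)
    print(f"{name}: best site at {pos}, {mm} mismatch(es)")
```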

Sabeti, who is a computational geneticist at the Broad Institute of Harvard and MIT in Cambridge, MA, says among the urgent questions still to be answered is whether these genetic changes might influence the speed at which the virus spreads or the severity of the disease it causes.


In Memory of Sheik Humarr Khan, who was part of the research team that sequenced the Ebola Virus genome. Dr. Khan died from Ebola Virus Disease while overseeing patient care at Kenema Government Hospital in Sierra Leone. Credit: Pardis C. Sabeti

As of August 28, the Ebola virus outbreak in West Africa has infected at least 3,069 people and killed 1,552 [2], making it the largest outbreak on record since the disease was first identified in 1976. Sadly, among its victims were five members of Sabeti’s team who died before their paper was published, including Dr. Sheik Humarr Khan, a leading virologist in Sierra Leone.

So, let me close by paying tribute to these brave researchers—and all of the other dedicated scientists and healthcare workers on the front lines of the Ebola Virus Disease epidemic and other public health emergencies around the globe. You bring both comfort and hope to those who need it the most. From all of us here at NIH, let me convey our gratitude for your dedication.

References:
[1] Gire SK, Sabeti PC, et al. Genomic surveillance elucidates Ebola virus origin and transmission during the 2014 outbreak. Science (published online August 28, 2014).
[2] Ebola virus disease update – West Africa (WHO).

Links: Sabeti Lab; Emerging Disease or Emerging Diagnosis? (NIH Common Fund Video Competition); Understanding Ebola and Marburg hemorrhagic fevers (NIAID); Ebola Hemorrhagic Fever, Prevention (CDC); CDC: Stopping the Ebola Outbreak.

NIH support: Common Fund, National Institute of Allergy and Infectious Diseases.


Those brave researchers who have paid the ultimate price should be recognised by all of us for their selfless dedication. May they rest in peace. The huge loss of life from this awful disease is shocking; how has this happened, when we know so much and have so much in this age of technology? J.

x  x  x

PROFESSORS PROVIDE MOST UPDATED INFORMATION ON ASPIRIN IN THE PREVENTION OF A FIRST HEART ATTACK

From the FMS Global News Desk of Jeanne Hambleton Released: 2-Sep-2014
Source: Florida Atlantic University Citations Trends in Cardiovascular Medicine

Newswise — The first researcher in the world to discover that aspirin prevents a first heart attack, Charles H. Hennekens, M.D., Dr.P.H., the first Sir Richard Doll professor and senior academic advisor to the dean in the Charles E. Schmidt College of Medicine at Florida Atlantic University, has published a comprehensive review in the current issue of the journal Trends in Cardiovascular Medicine.

Hennekens and his co-author James E. Dalen, M.D., M.P.H., executive director of the Weil Foundation and dean emeritus, University of Arizona College of Medicine, provide the most updated information on aspirin in the prevention of a first heart attack.

Hennekens also presented these findings from the article titled “Aspirin in the Primary Prevention of Cardiovascular Disease: Current Knowledge and Future Research Needs,” on Saturday, Aug. 30 at a “Meet the Experts” lecture at the European Society of Cardiology meetings in Barcelona, Spain.

Serving as chair of a symposium on Sunday, Aug. 31, he also delivered a lecture on “Evolving Concepts in Cardiovascular Prevention: Aspirin Then and Now.”

In the article, Hennekens and Dalen emphasize that the evidence in treatment indicates that all patients having a heart attack, or who have survived a prior event, should be given aspirin. In healthy individuals, however, they state that any decision to prescribe aspirin should be an individual clinical judgment by the healthcare provider that weighs the absolute benefit in reducing the risk of a first heart attack against the absolute risk of major bleeding.
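One common way to make that benefit-versus-bleeding judgment concrete is to compare the number needed to treat (NNT, the reciprocal of the absolute risk reduction) with the number needed to harm (NNH, the reciprocal of the absolute risk increase). The sketch below runs that arithmetic on hypothetical rates; the numbers are placeholders, not figures from the review.

```python
# Hedged sketch of the benefit/harm arithmetic clinicians weigh when
# considering aspirin for primary prevention. All rates are hypothetical
# placeholders, not figures from the Hennekens and Dalen review.
def number_needed(risk_a: float, risk_b: float) -> float:
    """1 / |absolute risk difference|: NNT for a benefit, NNH for a harm."""
    return 1.0 / abs(risk_a - risk_b)

# Hypothetical 10-year risk of a first heart attack: 5.0% untreated vs 4.0% on aspirin
nnt_mi = number_needed(0.050, 0.040)     # ~100 treated to prevent one event
# Hypothetical 10-year risk of major bleeding: 1.0% untreated vs 1.3% on aspirin
nnh_bleed = number_needed(0.010, 0.013)  # ~333 treated per extra major bleed

print(f"NNT to prevent one first heart attack: {nnt_mi:.0f}")
print(f"NNH to cause one major bleed:          {nnh_bleed:.0f}")
```

A favorable profile roughly means the NNT is well below the NNH, though the severity of the events being prevented and caused matters as much as the counts.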

“The crucial role of therapeutic lifestyle changes and other drugs of life saving benefit such as statins should be considered with aspirin as an adjunct, not alternative,” said Hennekens.

“The benefits of statins and aspirin are, at the very least, additive. The more widespread and appropriate use of aspirin in primary prevention is particularly attractive, especially in developing countries where cardiovascular disease is emerging as the leading cause of death.”

Hennekens also notes that aspirin is widely available over the counter and extremely inexpensive. He cautions, however, that more evidence is needed in intermediate-risk subjects before general guidelines can be made.

The numerous honors Hennekens has received include the 2013 Fries Prize for Improving Health, for his seminal contributions to the treatment and prevention of cardiovascular disease; the 2013 Presidential Award from his alma mater, Queens College, for his distinguished contributions to society; recognition in 2013 by the American Heart Association, as part of FAU’s Charles E. Schmidt College of Medicine, for reducing deaths from heart attacks and strokes; and recognition in 2014 by the Ochsner Foundation for his seminal research on smoking and disease.

From 1995 to 2005, Science Watch ranked Hennekens as the third most widely cited medical researcher in the world and five of the top 20 were his former trainees and/or fellows. In 2012, Science Heroes ranked Hennekens No. 81 in the history of the world for having saved more than 1.1 million lives.

About Florida Atlantic University:
Florida Atlantic University, established in 1961, officially opened its doors in 1964 as the fifth public university in Florida. Today, the University, with an annual economic impact of $6.3 billion, serves more than 30,000 undergraduate and graduate students at sites throughout its six-county service region in southeast Florida.

FAU’s world-class teaching and research faculty serves students through 10 colleges: the Dorothy F. Schmidt College of Arts and Letters, the College of Business, the College for Design and Social Inquiry, the College of Education, the College of Engineering and Computer Science, the Graduate College, the Harriet L. Wilkes Honors College, the Charles E. Schmidt College of Medicine, the Christine E. Lynn College of Nursing and the Charles E. Schmidt College of Science.

FAU is ranked as a High Research Activity institution by the Carnegie Foundation for the Advancement of Teaching. The University is placing special focus on the rapid development of three signature themes – marine and coastal issues, biotechnology and contemporary societal challenges – which provide opportunities for faculty and students to build upon FAU’s existing strengths in research and scholarship.

DRINKING TOO MUCH WATER CAN BE FATAL TO ATHLETES

From the FMS Global News Desk of Jeanne Hambleton Released: 2-Sep-2014
Source: Loyola University Health System Citations British Journal of Sports Medicine

Newswise — MAYWOOD, Ill. (Sept. 2, 2014) – The recent deaths of two high school football players illustrate the dangers of drinking too much water and sports drinks, according to Loyola University Medical Center sports medicine physician Dr. James Winger.

Over-hydration by athletes is called exercise-associated hyponatremia. It occurs when athletes drink even when they are not thirsty. Drinking too much during exercise can overwhelm the body’s ability to remove water. The sodium content of blood is diluted to abnormally low levels. Cells absorb excess water, which can cause swelling — most dangerously in the brain.
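The dilution itself is straightforward arithmetic: serum sodium falls roughly in proportion to the free water the body retains relative to total body water. The back-of-the-envelope sketch below assumes a total body water of 42 liters and a normal serum sodium of 140 mmol/L, and ignores ongoing sodium and fluid losses, so it illustrates the proportionality rather than any clinical calculation.

```python
# Back-of-the-envelope illustration of dilutional hyponatremia.
# Assumes ~42 L total body water and a normal serum sodium of 140 mmol/L;
# ignores sweat losses and renal excretion, so this is a rough sketch only.
TOTAL_BODY_WATER_L = 42.0
NORMAL_SODIUM_MMOL_L = 140.0

def diluted_sodium(excess_water_l: float) -> float:
    """Approximate serum sodium after retaining excess_water_l liters of free water."""
    return (NORMAL_SODIUM_MMOL_L * TOTAL_BODY_WATER_L
            / (TOTAL_BODY_WATER_L + excess_water_l))

for liters in (1, 3, 6):
    print(f"{liters} L retained -> ~{diluted_sodium(liters):.0f} mmol/L")
# Retaining 6 L drops sodium to ~122 mmol/L, well below the ~135 mmol/L
# threshold usually used to define hyponatremia.
```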

Hyponatremia can cause muscle cramps, nausea, vomiting, seizures, unconsciousness, and, in rare cases, death.

Georgia football player Zyrees Oliver reportedly drank 2 gallons of water and 2 gallons of a sports drink. He collapsed at home after football practice, and died later at a hospital. In Mississippi, Walker Wilbank was taken to the hospital during the second half of a game after vomiting and complaining of a leg cramp. He had a seizure in the emergency room and later died. A doctor confirmed he had exercise-associated hyponatremia.

And in recent years, there have been more than a dozen documented and suspected runners’ deaths from hyponatremia.

Winger said it is common for coaches to encourage athletes to drink liberally, even before they get thirsty. But he noted that expert guidelines recommend athletes drink only when thirsty. Winger said athletes should not drink a predetermined amount, or try to get ahead of their thirst.

Drinking only when thirsty can cause mild dehydration. “However, the risks associated with dehydration are small,” Winger said. “No one has died on sports fields from dehydration, and the adverse effects of mild dehydration are questionable. But athletes, on rare occasions, have died from over-hydration.”

Winger is co-author of a 2011 study that found that nearly half of Chicago-area recreational runners surveyed may be drinking too much fluid during races. Winger and colleagues found that, contrary to expert guidelines, 36.5 percent of runners drink according to a preset schedule or to maintain a certain body weight, and 8.9 percent drink as much as possible.

“Many athletes hold unscientific views regarding the benefits of different hydration practices,” Winger and colleagues concluded. Their study was published in the British Journal of Sports Medicine.

Winger is an associate professor in the Department of Family Medicine of Loyola University Chicago Stritch School of Medicine.


HOW THE BRAIN FINDS WHAT IT IS LOOKING FOR

Study reveals how the brain processes color and motion, provides new understanding of attention

From FMS Global News Desk of Jeanne Hambleton Embargoed: 4-Sep-2014
Source: University of Chicago Medical Center


Newswise — Despite the barrage of visual information the brain receives, it retains a remarkable ability to focus on important and relevant items. This fall, for example, NFL quarterbacks will be rewarded handsomely for how well they can focus their attention on color and motion – being able to quickly judge the jersey colors of teammates and opponents and where they’re headed is a valuable skill. How the brain accomplishes this feat, however, has been poorly understood.

Now, University of Chicago scientists have identified a brain region that appears central to perceiving the combination of color and motion. They discovered a unique population of neurons that shift in sensitivity toward different colors and directions depending on what is being attended – the red jersey of a receiver headed toward an end zone, for example. The study, published Sept. 4 in the journal Neuron, sheds light on a fundamental neurological process that is a key step in the biology of attention.

“Most of the objects in any given visual scene are not that important, so how does the brain select or attend to important ones?” said study senior author David Freedman, PhD, associate professor of neurobiology at the University of Chicago.

“We have zeroed in on an area of the brain that appears central to this process. It does this in a very flexible way, changing moment by moment depending on what is being looked for.”

The visual cortex of the brain possesses multiple, interconnected regions that are responsible for processing different aspects of the raw visual signal gathered by the eyes. Basic information on motion and color is known to route through two such regions, but how the brain combines these streams into something usable for decision-making or other higher-order processes has remained unclear.

To investigate this process, Freedman and postdoctoral fellow Guilhem Ibos, PhD, studied the response of individual neurons during a simple task. Monkeys were shown a rapid series of visual images. An initial image showed either a group of red dots moving upwards or yellow dots moving downwards, which served as an instruction for which specific colors and directions were relevant during that trial.

The subjects were rewarded for releasing a lever when this image later reappeared. Subsequent images were composed of different colors of dots moving in different directions, among which was the initial image.

Dynamic neurons
Freedman and Ibos looked at neurons in the lateral intraparietal area (LIP), a region highly interconnected with brain areas involved in vision, motor control and cognitive functions. As subjects performed the task and looked for a specific combination of color and motion, LIP neurons became highly active. They did not respond, however, when the subjects passively viewed the same images without an accompanying task.

When the team further investigated the responses of LIP neurons, they discovered that the neurons possessed a unique characteristic. Individual neurons shifted their sensitivity to color and direction toward the relevant color and motion features for that trial. When the subject looked for red dots moving upwards, for example, a neuron would respond strongly to directions close to upward motion and to colors close to red. If the task was switched to another color and direction seconds later, that same neuron would be more responsive to the new combination.
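One way to picture this shift is as a tuning curve, a bell-shaped response profile over motion direction, whose peak is pulled partway toward the attended direction. The toy model below is a schematic of that idea only; it is not the authors’ model or data, and every parameter in it is invented for illustration.

```python
# Toy model of attention shifting a neuron's direction tuning: a schematic
# of the effect described in the study, not the authors' analysis.
# The response is a Gaussian bump over motion direction whose peak is
# pulled a fraction of the way toward the attended direction.
import numpy as np

def tuning_curve(directions_deg, preferred_deg, attended_deg,
                 shift_frac=0.3, width_deg=40.0, peak_rate=50.0):
    """Firing rate vs. direction, with the peak shifted toward attention."""
    shifted_peak = preferred_deg + shift_frac * (attended_deg - preferred_deg)
    return peak_rate * np.exp(-((directions_deg - shifted_peak) ** 2)
                              / (2 * width_deg ** 2))

dirs = np.arange(0, 181, 45.0)
baseline = tuning_curve(dirs, preferred_deg=60, attended_deg=60)   # no shift
attend_90 = tuning_curve(dirs, preferred_deg=60, attended_deg=90)  # peak moves
print(np.round(baseline, 1))
print(np.round(attend_90, 1))  # responses near 90 degrees grow stronger
```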

“Shifts in feature tuning had been postulated a long time ago by theoretical studies,” Ibos said.

“This is the first time that neurons in the brain have been shown to shift their selectivity depending on which features are relevant to solve a task.”

Freedman and Ibos developed a model for how the LIP brings together both basic color and motion information. Attention likely affects that process through signals from higher-order areas of the brain that alter LIP neuron selectivity. The team believes that this region plays an important role in making sense of basic sensory information, and they are trying to better understand the brain-wide neuronal circuitry involved in this process.

“Our study suggests that this area of the brain brings together information from multiple areas throughout the brain,” Freedman said.

“It integrates inputs – visual, motor, cognitive inputs related to memory and decision making – and represents them in a way that helps solve the task at hand.”

The study, “Dynamic Integration of Task-Relevant Visual Features in Posterior Parietal Cortex,” was supported by the National Institutes of Health and National Science Foundation, with additional support from a McKnight Scholar award, the Alfred P. Sloan Foundation, The Brain Research Foundation and the Fyssen Foundation.


COCAINE REWIRES THE BRAIN: NEW STUDY TO UNLOCK KEYS THAT COULD DISRUPT ADDICTION

From the FMS Global News Desk of Jeanne Hambleton Released: 4-Sep-2014
Source: University at Buffalo


Newswise — BUFFALO, N.Y. – Why do cocaine addicts relapse after months or years of abstinence? The National Institute on Drug Abuse has awarded a University at Buffalo scientist a $2 million grant to conduct research that will provide some answers.

The UB research has the potential to identify novel therapies for treating addiction to cocaine and other psychostimulants, for which no effective drug therapy exists.

“Why is it that after staying clean for a month or a year, an addict will, seemingly without reason, start using drugs again?” asks David Dietz, PhD, principal investigator and assistant professor in the Department of Pharmacology and Toxicology in the UB School of Medicine and Biomedical Sciences.

“It is because addiction has rewired the brain.”

The five-year grant focuses on the short- and long-term neurobiological changes in the brain that are induced by addiction.

Dietz explains that an addict’s brain undergoes these dramatic and profound changes, known as neuroplasticity, while being exposed to cocaine.

This plasticity, he says, includes cellular changes that, in turn, control changes in the shape of neurons and the number of connections they have with other neurons, ultimately causing changes in the addict’s behavior.

“These changes persist and become permanent,” Dietz continues. “The addict’s brain is forever rewired.

“The question is, how can we interfere with those changes?” he asks.

“How can we either prevent the rewiring in the addicted state or somehow reverse it?”

A key component of the grant is the ability to understand how the brain changes at different time-points following abstinence from drugs.

“You may need to treat a person who has been in withdrawal for one day very differently from someone who has been in withdrawal for one month or even longer,” explains Dietz.

The UB research, which will be conducted in vivo, is the first to focus on the transforming growth factor-beta (TGF-beta) signaling pathway, which Dietz says may be a master regulator of pathways previously discovered to be important in addiction.

According to Dietz, TGF-beta is able to control changes both by directly regulating mechanisms that alter the structural reorganization of these neurons, and by controlling long-term, transcriptional effects of genes that maintain these adaptations. This long-term effect sustains the rewiring in the brain and makes it permanent, he says.


MAJOR IVORY POACHING ARREST IN MOZAMBIQUE

From FMS Global News Desk of Jeanne Hambleton Released: 8-Sep-2014
Source: Wildlife Conservation Society

Six Elephant Poachers Arrested in Niassa National Reserve

Group Suspected of Killing 39 Elephants This Year Alone

Early Morning Raid Results in Capture of Poaching Ring and Confiscation of Ivory and Guns

Wildlife Conservation Society Praises Joint Operation


PHOTO: Confiscated guns, ammunition and elephant tusks from the Sept. 7 raid in Niassa National Reserve in Mozambique, which resulted in the arrest of six suspects responsible for killing 39 elephants in 2014 alone.


Newswise — Marrupa, Mozambique, Sept. 8, 2014 – A significant arrest of six suspected poachers took place here on Sept. 7 in a joint operation conducted by the Mecula District police, Luwire scouts and Niassa National Reserve WCS scouts. The arrests followed a 10-month investigation informed by vital on-the-ground intelligence.

During the early morning raid, 12 tusks and two rifles were confiscated. Two of the tusks, weighing 23 kilograms (about 51 pounds) each, came from an elephant about 40 years old. The worth of the tusks was estimated at well over U.S. $150,000. The suspects have been charged with crimes including cooperating with poachers, illegal possession of firearms, participating in poaching and organized crime. If convicted, all suspects face fines and jail time.

Based on interviews with the suspects, officials estimate that this group of poachers has killed 39 elephants this year alone. The arrest also represents a major crackdown on one of five well-organized groups suspected of poaching elephants in Niassa.

“This is an important raid that has shut down a group of poachers responsible for killing many of Niassa’s elephants,” said Alastair Nelson, Director of the WCS Mozambique Program. “In this raid, we have arrested professional poachers; recovered weapons, ivory and ammunition; and gained additional information to crack down on poachers. This is the clear result of an important partnership between the Mozambique government, Luwire, Niassa National Reserve, and WCS. It is partnerships like this that will help us advance important efforts to protect Niassa’s elephants, promote security and governance, and secure national assets for the people of Mozambique.”

WCS President and CEO Cristian Samper, who is currently in Niassa, said: “With this arrest we have charged a shooter, porters and poacher informers who are driving the elephant crisis in Niassa Reserve.

“During a fly-over across a portion of the reserve, I personally witnessed an elephant that had been killed by poachers. The elephant was brought down with an AK-47. We need to combine our strategies and firepower to take on these brutal criminals. WCS extends its appreciation and congratulations to the Mozambique government, especially our partner, the National Administration of Conservation Areas and National Niassa Reserve Warden for their commitment to combat this crisis.

“This work on the ground is part of a three-part strategy to stop the killing of elephants and stop the trafficking and demand for ivory. To solve this crisis, we need to focus efforts in Africa and on the other end of the supply chain in places such as China and the U.S.”

It is estimated that there are 13,000 elephants remaining in Niassa National Reserve, which is located in northern Mozambique. The reserve holds Mozambique’s largest remaining population of elephants. WCS has been co-managing the reserve with the Mozambique government since 2012.

Back soon Jeanne
