NEW ANALYSIS LINKS TREE HEIGHT TO CLIMATE

From the FMS Global News Desk of Jeanne Hambleton. Released: 14-Aug-2014. Citation: Ecology. Source Newsroom: University of Wisconsin-Madison

Newswise — MADISON, Wis. — What limits the height of trees? Is it the fraction of their photosynthetic energy they devote to productive new leaves? Or is it their ability to hoist water hundreds of feet into the air, supplying the green, solar-powered sugar factories in those leaves?

Both factors — resource allocation and hydraulic limitation — might play a role, and a scientific debate has arisen as to which factor (or what combination) actually sets maximum tree height, and how their relative importance varies in different parts of the world.

In research to be published in the journal Ecology — and currently posted online as a preprint — Thomas Givnish, a professor of botany at the University of Wisconsin-Madison, attempts to resolve this debate by studying how tree height, resource allocation and physiology vary with climate in Victoria state, located in southeastern Australia. There, Eucalyptus species exhibit almost the entire global range in height among flowering trees, from 4 feet to more than 300 feet.

“Since Galileo’s time,” Givnish says, “people have wondered what determines maximum tree height: ‘Where are the tallest trees, and why are they so tall?’ Our study talks about the kind of constraints that could limit maximum tree height, and how those constraints and maximum height vary with climate.”

One of the species under study, Eucalyptus regnans — called mountain ash in Australia, but distinct from the smaller and unrelated mountain ash found in the U.S. — is the tallest flowering tree in the world. In Tasmania, an especially rainy part of southern Australia, the tallest living E. regnans is 330 feet tall. (The tallest tree in the world is a coastal redwood in northern California that soars 380 feet above the ground.)

Southern Victoria, Tasmania and northern California all share high rainfall, high humidity and low evaporation rates, underlining the importance of moisture supply to ultra-tall trees. But the new study by Givnish, Graham Farquhar of the Australian National University and others shows that rainfall alone cannot explain maximum tree height.

A second factor, evaporative demand, helps determine how far a given amount of rainfall will go toward meeting a tree’s demands. Warm, dry and sunny conditions cause faster evaporation from leaves, and Givnish and his colleagues found a tight relationship between maximum tree height in old stands in Australia and the ratio of annual rainfall to evaporation. As that ratio increased, so did maximum tree height.
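The metric behind that relationship is just the ratio of moisture supply to evaporative demand. A minimal sketch of the calculation, using hypothetical site values rather than the study's data:

```python
def moisture_index(annual_rainfall_mm, annual_evaporation_mm):
    """Ratio of moisture supply (rainfall) to evaporative demand."""
    return annual_rainfall_mm / annual_evaporation_mm

# Hypothetical sites: wetter sites score higher, and in the study
# taller maximum tree heights tracked higher ratios.
sites = {"dry woodland": (450, 900), "wet sclerophyll forest": (1800, 750)}
for name, (rain, evap) in sites.items():
    print(name, round(moisture_index(rain, evap), 2))
```

The site names and numbers above are illustrative only; the study's finding is the monotonic trend, not any particular values.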

Other factors — like soil fertility, the frequency of wildfires and length of the growing season — also affect tree height. Tall, fast-growing trees access more sunlight and can capture more energy through photosynthesis. They are more obvious to pollinators, and have potential to outcompete other species.

“Infrastructure” — things like wood and roots that are essential to growth but do not contribute to the production of energy through photosynthesis — affect resource allocation, and can explain the importance of the ratio of moisture supply to evaporative demand.

“In moist areas, trees can allocate less to building roots,” Givnish says. “Other things being equal, having lower overhead should allow them to achieve greater height.

“And plants in moist areas can achieve higher rates of photosynthesis, because they can open the stomata on their leaves that exchange gases with the atmosphere. When these trees intake more carbon dioxide, they can achieve greater height before their overhead exceeds their photosynthetic income.”

The constraints on tree height imposed by resource allocation and hydraulics should both increase in drier areas. But Givnish and his team wanted to know the importance of each constraint.

The scientists examined the issue by measuring the isotopic composition of carbon in wood sampled along the steep rainfall gradient in their study zone. If hydraulic limitation alone set maximum tree height, the carbon isotope composition should not vary, because all trees should grow up to the point at which hydraulics retards photosynthesis. The isotopic composition should also remain stable if resource allocation alone set maximum height, because resource allocation does not directly affect the stomata.

But if both factors limit tree height, the heavier carbon isotopes should accumulate in moister areas where faster photosynthesis (enhanced by wide-open stomata) can balance the costs of building more wood in taller trees. Givnish, Farquhar and their colleagues found exactly that, implying that hydraulic limitation more strongly constrains maximum tree height under drier conditions, while resource allocation more strongly constrains height under moist conditions.

Most studies of tree height have focused on finding the tallest trees and explaining why they live where they do, Givnish says.

“This study was the first to ask, ‘How does the maximum tree height vary with the environment, and why?’”

WIRELESS SENSORS AND FLYING ROBOTS: A WAY TO MONITOR DETERIORATING BRIDGES

From the FMS Global News Desk of Jeanne Hambleton. Released: 15-Aug-2014. Source Newsroom: Tufts University

Newswise — MEDFORD/SOMERVILLE, Mass. – As a recent report from the Obama administration warns that one in four bridges in the United States needs significant repair or cannot handle automobile traffic, Tufts University engineers are employing wireless sensors and flying robots that could help authorities monitor the condition of bridges in real time.

Today, bridges are inspected visually by teams of engineers who dangle beneath the bridge on cables or look up at the bridge from an elevated work platform. It is a slow, dangerous, expensive process and even the most experienced engineers can overlook cracks in the structure or other critical deficiencies.

A New Monitoring System for Bridges

In the detection system being developed by Babak Moaveni, an assistant professor of civil and environmental engineering at Tufts School of Engineering, smart sensors are attached permanently to bridge beams and joints. Each sensor can continuously record vibrations and process the recorded signal. Changes in the vibration response can signify damage, he says.

Moaveni, who received a grant from the National Science Foundation (NSF) for his research, is collaborating with Tufts Assistant Professor of Electrical and Computer Engineering Usman Khan to develop a wireless system that would use autonomous flying robots (quad-copters) to hover near the sensors and collect data while taking visual images of bridge conditions. The drone-like robots would transmit data to a central collection point for analysis. Khan received a $400,000 Early Career Award from the NSF earlier this year to explore this technology, which requires addressing significant navigational and communications challenges before it could be a reliable inspection tool.

The recent Obama administration report analyzing the condition of the transportation infrastructure points out that 25 percent of the approximately 600,000 bridges across the country are in such a poor state that they are incapable of handling daily automobile traffic. In Massachusetts, more than 50 percent of the 5,136 bridges in use are deficient, the report says.

Moaveni and Khan’s work could help monitor bridges and identify those that are at risk more accurately than current methods. Once installed, the sensors would provide information about the condition of bridges that cannot be obtained by visual inspection alone and would allow authorities to identify and focus on bridges that need immediate attention.

Moaveni installed a network of 10 wired sensors in 2009 on a 145-foot long footbridge on Tufts’ Medford/Somerville campus. In 2011, Moaveni added nearly 5,000 pounds of concrete weights on the bridge deck to simulate the effects of damage on the bridge—a load well within the bridge’s limits. Connected by cables, the sensors recorded readings on vibration levels as pedestrians walked across the span before and after installation of the concrete blocks. From the changes in vibration measurements, Moaveni and his research team could successfully identify the simulated damage on the bridge, validating his vibration-based monitoring framework.
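The underlying idea, that damage or added mass shifts a structure's natural frequencies, can be sketched in a few lines. The following toy example (not Moaveni's actual algorithm) estimates the dominant frequency of a synthetic vibration record before and after a simulated change:

```python
import numpy as np

def dominant_frequency(accel, fs):
    """Return the peak frequency (Hz) of a vibration record via the FFT."""
    spectrum = np.abs(np.fft.rfft(accel - np.mean(accel)))
    freqs = np.fft.rfftfreq(len(accel), d=1.0 / fs)
    return freqs[np.argmax(spectrum)]

fs = 200.0                      # sampling rate in Hz
t = np.arange(0, 10, 1 / fs)    # 10 seconds of data
baseline = np.sin(2 * np.pi * 4.0 * t)  # healthy span: ~4.0 Hz mode
loaded = np.sin(2 * np.pi * 3.7 * t)    # added mass lowers the mode

shift = dominant_frequency(baseline, fs) - dominant_frequency(loaded, fs)
print(f"frequency shift: {shift:.2f} Hz")  # a persistent shift flags a change
```

Real monitoring must separate genuine damage from benign effects such as temperature and traffic loading, which is part of what makes the automated-detection algorithms an open research problem.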

A major goal of his research, Moaveni says, is to develop computer algorithms that can automatically detect damage in a bridge from the changes in its vibration measurements. His work is ongoing.

“Right now, if a bridge has severe damage, we are pretty confident we can detect that accurately. The challenge is building the system so it picks up small, less obvious anomalies.”

Tufts University School of Engineering

Located on Tufts’ Medford/Somerville campus, the School of Engineering offers a rigorous engineering education in a unique environment that blends the intellectual and technological resources of a world-class research university with the strengths of a top-ranked liberal arts college.

Close partnerships with Tufts’ excellent undergraduate, graduate and professional schools, coupled with a long tradition of collaboration, provide a strong platform for interdisciplinary education and scholarship.

The School of Engineering’s mission is to educate engineers committed to the innovative and ethical application of science and technology in addressing the most pressing societal needs, to develop and nurture twenty-first century leadership qualities in its students, faculty, and alumni, and to create and disseminate transformational new knowledge and technologies that further the well-being and sustainability of society in such cross-cutting areas as human health, environmental sustainability, alternative energy, and the human-technology interface.

SALT CONTRIBUTES TO 1,650,000 DEATHS GLOBALLY EACH YEAR

From the FMS Global News Desk of Jeanne Hambleton. Posted on August 13, 2014. By Stone Hearth News, EurekAlert

BOSTON — More than 1.6 million cardiovascular-related deaths per year can be attributed to sodium consumption above the World Health Organization’s recommendation of 2.0g (2,000mg) per day, researchers have found in a new analysis evaluating populations across 187 countries. The findings were published in the August 14 issue of The New England Journal of Medicine.

“High sodium intake is known to increase blood pressure, a major risk factor for cardiovascular diseases including heart disease and stroke,” said first and corresponding author Dariush Mozaffarian, M.D., Dr.P.H., dean of the Friedman School of Nutrition Science and Policy at Tufts University, who led the research while at the Harvard School of Public Health. “However, the effects of excess sodium intake on cardiovascular diseases globally by age, sex, and nation had not been well established.”

The researchers collected and analyzed existing data from 205 surveys of sodium intake in countries representing nearly three-quarters of the world’s adult population, in combination with other global nutrition data, to calculate sodium intakes worldwide by country, age, and sex. Effects of sodium on blood pressure and of blood pressure on cardiovascular diseases were determined separately in new pooled meta-analyses, including differences by age and race. These findings were combined with current rates of cardiovascular diseases around the world to estimate the numbers of cardiovascular deaths attributable to sodium consumption above 2.0g per day.

The researchers found the average level of global sodium consumption in 2010 to be 3.95g per day, nearly double the 2.0g recommended by the World Health Organization. All regions of the world were above recommended levels, with regional averages ranging from 2.18g per day in sub-Saharan Africa to 5.51g per day in Central Asia. In their meta-analysis of controlled intervention studies, the researchers found that reduced sodium intake lowered blood pressure in all adults, with the largest effects identified among older individuals, blacks, and those with pre-existing high blood pressure.

“These 1.65 million deaths represent nearly one in 10 of all deaths from cardiovascular causes worldwide. No world region and few countries were spared,” added Mozaffarian, who chairs the Global Burden of Diseases, Nutrition, and Chronic Disease Expert Group, an international team of more than 100 scientists studying the effects of nutrition on health and who contributed to this effort.

“These new findings inform the need for strong policies to reduce dietary sodium in the United States and across the world.”

In the United States, average daily sodium intake was 3.6g, 80 percent higher than the amount recommended by the World Health Organization. [The federal government's Dietary Guidelines for Americans recommend limiting intake of sodium to no more than 2,300mg (2.3g) per day.] The researchers found that nearly 58,000 cardiovascular deaths each year in the United States could be attributed to daily sodium consumption greater than 2.0g. Sodium intake and corresponding health burdens were even higher in many developing countries.
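The percentages above follow from simple arithmetic against the WHO limit; a quick check using the figures quoted in the article:

```python
WHO_LIMIT_G = 2.0  # WHO recommended maximum sodium intake, grams/day

def percent_above(intake_g, limit_g=WHO_LIMIT_G):
    """Percentage by which a daily intake exceeds the recommended limit."""
    return (intake_g - limit_g) / limit_g * 100

print(round(percent_above(3.95), 1))  # global average: ~97.5%, "nearly double"
print(round(percent_above(3.6), 1))   # US average: 80% above the limit
```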

“We found that four out of five global deaths attributable to higher than recommended sodium intakes occurred in middle- and low-income countries,” added John Powles, M.B., B.S., last author and honorary senior visiting fellow in the department of public health and primary care at the University of Cambridge.

“Programs to reduce sodium intake could provide a practical and cost effective means for reducing premature deaths in adults around the world.”

The authors acknowledge that their results utilize estimates based on urine samples, which may underestimate true sodium intakes. Additionally, some countries lacked data on sodium consumption, which was estimated based on other nutritional information; and, because the study focuses on cardiovascular deaths, the findings may not reflect the full health impact of sodium intake, which is also linked to higher risk of nonfatal cardiovascular diseases, kidney disease and stomach cancer, the second most-deadly cancer worldwide.

This research was supported by a grant from the Bill and Melinda Gates Foundation.

Mozaffarian, D; Fahimi, S; Singh, G; Micha, R; Khatibzadeh, S; Engell, R; Lim, S; Danaei, G; Ezzati, M; and Powles, J. “Global sodium consumption and death from cardiovascular causes.” N Engl J Med 2014. 371:7, 624-634. DOI: 10.1056/NEJMoa1304127

About the Friedman School of Nutrition Science and Policy

The Gerald J. and Dorothy R. Friedman School of Nutrition Science and Policy at Tufts University is the only independent school of nutrition in the United States. The school’s eight degree programs – which focus on questions relating to nutrition and chronic diseases, molecular nutrition, agriculture and sustainability, food security, humanitarian assistance, public health nutrition, and food policy and economics – are renowned for the application of scientific research to national and international policy.

Back tomorrow – Jeanne


PATIENTS SELF-PRESCRIBE ‘CANCER-PREVENTING’ ASPIRIN AS PHARMACY SALES SOAR


From the FMS Global News Desk of Jeanne Hambleton      3 August  2014  By Caroline Price Pulse Today

BREAKING NEWS


Exclusive: Pharmacies have reported big hikes in aspirin sales in the past week after UK academics called for people in late-middle age to start taking daily doses of the drug to prevent stomach and bowel cancer, Pulse has learnt.

Health retailer Superdrug reported that sales of low-dose aspirin more than tripled last week in its stores, recording a 229% increase on the preceding week.

The figures came as UK experts claimed the benefits of taking a low dose of aspirin daily to prevent stomach and bowel cancer outweigh any risks for most people aged 50-65.

The researchers had warned there were still some doubts regarding the evidence – in particular over what dose should be taken and for how long – and advised people to consult their GP before choosing to self-prescribe aspirin.

But one Superdrug store in Bolton last week reported a massive 500% increase in sales after the announcement, a finding reflected by a big jump in sales of 75mg aspirin across Superdrug stores nationally compared with the previous week – and a 400% increase in the London region.

A spokesperson for Superdrug told Pulse: ‘Aspirin sales were up 229% nationally week on week, on aspirin 75mg last week in comparison to the week before. In London sales were up 400% week-on-week.’

Elsewhere independent chain LloydsPharmacy told Pulse they had noticed a smaller but still marked increase in sales nationally, with a 27% increase in the volume of sales compared with the same week last year, and a 36% increase in volume compared with the preceding week.
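These week-on-week figures rest on the standard percentage-change formula; a small sketch with made-up weekly unit counts (not the retailers' actual data):

```python
def pct_increase(previous, current):
    """Week-on-week percentage increase in unit sales."""
    return (current - previous) / previous * 100

# Hypothetical weekly unit counts for illustration.
print(pct_increase(100, 329))  # 229.0 -> "up 229%", i.e. more than tripled
print(pct_increase(100, 500))  # 400.0 -> "up 400%"
```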

Boots declined to share information on its aspirin sales while Day Lewis, Morrisons and Whitworth said they had not seen a big change in the overall pattern of sales.

GP leaders stressed there is still not enough evidence to recommend anyone takes aspirin routinely for cancer prevention – but said it was more appropriate for the public to consult their local pharmacist about the pros and cons, rather than visiting over-stretched GPs.

Dr Andrew Green, chair of the GPC clinical and prescribing subcommittee, said: ‘I would be encouraging people to have a chat with their pharmacist about it rather than their GP. Whether someone should be taking aspirin or not is well within the pharmacists’ competence.’

He added: ‘The advice from a GP, I would suggest, at the moment is that we don’t have enough evidence to recommend it for everybody. If a patient wants to disregard that and take it then they should still get some advice – but the pharmacist can advise them if there is anything in their past medical history or their current prescriptions that means they shouldn’t take aspirin.’

Dr Richard West agreed that while people should get advice before deciding to take aspirin, consulting a GP may not be necessary.

Dr West said: ‘It’s a difficult balance – there are undoubtedly some risks from taking it and therefore it is worth discussing it with an appropriate healthcare professional beforehand.

‘However, as we know general practice is under a lot of pressure at the moment and therefore if a pharmacist felt capable of giving that advice then I think that would be perfectly appropriate.’

A spokesperson for the Royal Pharmaceutical Society said: ‘Pharmacists are well practised in dealing with requests for treatments following a big media story. The links between cancer prevention and aspirin are not new but as yet haven’t led to a change in the indication or licence of aspirin.

‘Although aspirin is often portrayed as a wonder drug, it can cause serious harms, especially in people with pre-existing conditions such as stomach ulcers.’


ASPIRIN FOR PRIMARY PREVENTION IN DIABETES ‘SHOULD BE RESTRICTED’

From the FMS Global News Desk of Jeanne Hambleton 9 May 2013             By Caroline Price Pulse Today


Daily low-dose aspirin treatment does not prevent cardiovascular events or death in people with type 2 diabetes and no previous cardiovascular disease (CVD), and may even increase the risk of coronary heart disease (CHD) in female patients, shows a large cohort study.

The study

Researchers analysed the outcomes of 18,646 men and women with type 2 diabetes and no CVD history, aged between 30 and 80 years, over an average of four years beginning in 2006, using data from the Swedish National Diabetes Registry. In all, 4,608 patients received low-dose (75 mg/day) aspirin treatment while 14,038 patients received no aspirin treatment, giving 69,743 aspirin person-years and 102,754 non-aspirin person-years of follow-up.

The findings

Aspirin treatment was not associated with any benefit in terms of cardiovascular outcomes or mortality, after propensity score and multivariable adjustment. Aspirin-treated and non-aspirin-treated groups had equivalent risks of the outcomes: non-fatal or fatal CVD, fatal CVD, fatal CHD, non-fatal or fatal stroke, fatal stroke and total mortality.

Patients who received aspirin had a significant 19% increased risk of non-fatal or fatal CHD; further analysis stratifying the group by gender showed this was driven by a significant 41% increased risk in women, while there was no increased risk in men. Women also had a 28% increased risk of fatal or non-fatal CVD.

There was also a borderline significant 41% increase in risk of non-fatal or fatal total haemorrhage with aspirin, but this association became weaker when broken down by gender.

Risks of cerebral or ventricular bleeding did not differ between groups, but aspirin use was associated with a significant 64% increased risk of ventricular (stomach) ulcer, driven by a 2.3-fold increase in women, while no increased risk was found in men.

Furthermore, the effects of aspirin on these endpoints were similar in patients with high estimated CV risk (five-year risk 15% or higher) and those with low estimated CV risk (five-year risk below 15%).

What this means for GPs

The results support current guidance from the European Society of Cardiology and the European Association for the Study of Diabetes, which does not recommend primary prevention with aspirin in patients with diabetes. They conflict, however, with the NICE type 2 diabetes guidelines, which recommend primary prevention with 75 mg/day aspirin in patients aged 50 years or older if their blood pressure is below 145/90 mmHg, and in patients younger than 50 who have another significant cardiovascular risk factor.

The authors conclude: ‘The present study shows no association between aspirin use and beneficial effects on risks of CVD or mortality in patients with diabetes and no previous CVD and supports the trend towards a more restrictive use of aspirin in these patients, also underlined by the increased risk of ventricular ulcer associated with aspirin.’


GPS TOLD TO REVIEW ASPIRIN USE IN PATIENTS WITH ATRIAL FIBRILLATION

From the FMS Global News Desk of Jeanne Hambleton 18 June 2014        By Caroline Price Pulse Today


GPs are to be tasked with reviewing all their patients with atrial fibrillation who are taking aspirin, under final NICE guidance published today that recommends anticoagulant therapy as the only option for stroke prevention in these patients.

The new guidance means GPs will need to start advising patients with atrial fibrillation who are on aspirin to stop taking it, and encourage them to take warfarin or one of the newer oral anticoagulants.

NICE said just over a fifth of the UK population with atrial fibrillation – around 200,000 patients – are currently on aspirin, many of whom should be able to be switched onto anticoagulation therapy of some sort.

GP leaders have warned that practices do not have the capacity to proactively call in patients, and suggested that changing management of this number of patients could only be achieved through incentive schemes such as enhanced services or the QOF.

But NICE advisors and CCG cardiology leads have claimed that GPs can do the reviews opportunistically over the coming year.

The final publication comes after it emerged the GPC had raised serious concerns over the complexity of the draft guidance – and warned CCGs would need to consider developing enhanced services to support GPs in delivering it.

Dr Andrew Green, chair of the GPC’s clinical and prescribing subcommittee, told Pulse GPs should feel they can refer patients on if they are not able to deal with all the changes as part of annual reviews.

Dr Green said: ‘I would expect GPs as part of their normal work to consider whether [atrial fibrillation] patients not on anticoagulation should be, in the light of the new guidance. If they should be, then the choice is between anticoagulation with warfarin or one of the newer agents, and if GPs do not feel they have the expertise or resources to do this properly, they have a duty to refer to someone who can.’

He added: ‘Commissioners need to predict this activity and may want to commission a service specifically for this which is more cost-effective than a traditional out-patient referral.’

Local GP leaders told Pulse practices would not take a systematic approach to reviewing and updating patients’ medications unless the work was specifically funded.

Dr Peter Scott, a GP in Solihull and chair of the GPC in West Midlands, said: ‘It’s not going to happen unless it’s resourced and incentivised as part of a DES or LES, or through the QOF – until then I don’t think a systematic approach to this will happen.’

But Dr Matthew Fay, a GP in Shipley, Yorkshire, and member of the NICE guidelines development group, acknowledged the workload concerns and said GPs should be advised to review patients opportunistically.

Dr Fay said: ‘I think it’s perfectly acceptable [to review patients opportunistically]. A lot of these patients who are at risk in this situation we will be reviewing because of their hypertension and other comorbidities, and those patients on aspirin should have that discussed at the next presentation.’

He added: ‘I think anticoagulation is an intimidating topic for clinicians – both in primary and secondary care. I would suggest that in each practice one clinician is involved with the management of the anticoagulated patients – whether that’s keeping a check on them during the warfarin clinic or being the person who initiates the novel oral anticoagulants.

‘If GPs feel uncomfortable with [managing anticoagulation] then they should be approaching the CCG executive to say, “we need a service to provide expert support for this”. The CCG may choose to come up with an enhanced service – but then whoever is providing the service needs to make sure they are well versed in use of the latest anticoagulants.’

The new guidance says GPs must use the CHA2DS2-VASc score to assess patients’ stroke risk and advise any patients with a score of at least one (men) or two (women) to go onto anticoagulation therapy with warfarin, or another vitamin K antagonist, or with one of the novel oral anticoagulants (NOACs) dabigatran, apixaban or rivaroxaban.
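The CHA2DS2-VASc score itself is a simple additive checklist over a patient's risk factors. A sketch of the published components, applying the threshold described above (illustrative only, not a clinical tool):

```python
def cha2ds2_vasc(age, female, chf, hypertension, diabetes,
                 stroke_tia, vascular_disease):
    """CHA2DS2-VASc stroke-risk score (0-9) for atrial fibrillation."""
    score = 0
    score += 2 if age >= 75 else (1 if age >= 65 else 0)  # age bands
    score += 1 if female else 0            # sex category
    score += 1 if chf else 0               # congestive heart failure
    score += 1 if hypertension else 0
    score += 1 if diabetes else 0
    score += 2 if stroke_tia else 0        # prior stroke/TIA/thromboembolism
    score += 1 if vascular_disease else 0
    return score

def anticoagulation_indicated(score, female):
    """Threshold described in the guidance: at least 1 (men) or 2 (women)."""
    return score >= (2 if female else 1)

# Hypothetical patient: 72-year-old woman with hypertension.
s = cha2ds2_vasc(age=72, female=True, chf=False, hypertension=True,
                 diabetes=False, stroke_tia=False, vascular_disease=False)
print(s, anticoagulation_indicated(s, female=True))
```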

It adds that aspirin should no longer be prescribed solely for stroke prevention to patients with atrial fibrillation.

The HAS-BLED score should be used to assess patients’ risk of bleeding as part of the decision over which anticoagulant to choose.

In the only major revision to the draft guidance, aspirin is no longer to be considered even as part of dual antiplatelet therapy for patients at particularly high bleeding risk, as this combination has now also been ruled out.


BENEFITS OF ASPIRIN A DAY FOR CANCER PREVENTION IN MIDDLE-AGED PEOPLE ‘OUTWEIGH HARMS’

This story was published a few days ago



NASA’S SPITZER TELESCOPE WITNESSES ASTEROID SMASHUP

From the FMS Global News Desk of Jeanne Hambleton. NASA. Released 28 August 2014

NASA’s Spitzer Space Telescope has spotted an eruption of dust around a young star, possibly the result of a smashup between large asteroids. This type of collision can eventually lead to the formation of planets.

Scientists had been regularly tracking the star, called NGC 2547-ID8, when it surged with a huge amount of fresh dust between August 2012 and January 2013.

“We think two big asteroids crashed into each other, creating a huge cloud of grains the size of very fine sand, which are now smashing themselves into smithereens and slowly leaking away from the star,” said lead author and graduate student Huan Meng of the University of Arizona, Tucson.

While dusty aftermaths of suspected asteroid collisions have been observed by Spitzer before, this is the first time scientists have collected data before and after a planetary system smashup. The viewing offers a glimpse into the violent process of making rocky planets like ours.

Rocky planets begin life as dusty material circling around young stars. The material clumps together to form asteroids that ram into each other. Although the asteroids often are destroyed, some grow over time and transform into proto-planets.

After about 100 million years, the objects mature into full-grown, terrestrial planets. Our moon is thought to have formed from a giant impact between proto-Earth and a Mars-size object.

In the new study, Spitzer set its heat-seeking infrared eyes on the dusty star NGC 2547-ID8, which is about 35 million years old and lies 1,200 light-years away in the Vela constellation. Previous observations had already recorded variations in the amount of dust around the star, hinting at possible ongoing asteroid collisions.

In hope of witnessing an even larger impact, which is a key step in the birth of a terrestrial planet, the astronomers turned to Spitzer to observe the star regularly. Beginning in May 2012, the telescope began watching the star, sometimes daily.

A dramatic change in the star came during a time when Spitzer had to point away from NGC 2547-ID8 because our sun was in the way. When Spitzer started observing the star again five months later, the team was shocked by the data they received.

“We not only witnessed what appears to be the wreckage of a huge smashup, but have been able to track how it is changing — the signal is fading as the cloud destroys itself by grinding its grains down so they escape from the star,” said Kate Su of the University of Arizona and co-author on the study.

“Spitzer is the best telescope for monitoring stars regularly and precisely for small changes in infrared light over months and even years.”

A very thick cloud of dusty debris now orbits the star in the zone where rocky planets form. As the scientists observe the star system, the infrared signal from this cloud varies based on what is visible from Earth.

For example, when the elongated cloud is facing us, more of its surface area is exposed and the signal is greater. When the head or the tail of the cloud is in view, less infrared light is observed. By studying the infrared oscillations, the team is gathering first-of-its-kind data on the detailed process and outcome of collisions that create rocky planets like Earth.
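The oscillation can be illustrated with toy geometry: model the cloud as an elongated shape and compute its apparent extent as the angle between its long axis and the line of sight changes during the orbit. This is an illustrative sketch with arbitrary numbers, not the team's model:

```python
import math

def projected_extent(a, b, theta):
    """Apparent extent of an elongated cloud with semi-axes a > b when
    its long axis makes angle theta with the line of sight."""
    return math.sqrt((a * math.sin(theta)) ** 2 + (b * math.cos(theta)) ** 2)

a, b = 3.0, 1.0  # arbitrary units: a strongly elongated cloud
end_on = projected_extent(a, b, 0.0)             # head or tail toward us
broadside = projected_extent(a, b, math.pi / 2)  # full side exposed
print(end_on, broadside)  # minimum vs maximum apparent extent
```

With emitting area roughly tracking apparent extent, the infrared signal would peak broadside and dip when the head or tail faces Earth, matching the qualitative behavior described above.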

“We are watching rocky planet formation happen right in front of us,” said George Rieke, a University of Arizona co-author of the new study. “This is a unique chance to study this process in near real-time.”

The team is continuing to keep an eye on the star with Spitzer. They will see how long the elevated dust levels persist, which will help them calculate how often such events happen around this and other stars, and they might see another smashup while Spitzer looks on.

The results of this study are posted online Thursday in the journal Science.

NASA’s Jet Propulsion Laboratory (JPL) in Pasadena, California, manages the Spitzer Space Telescope mission for NASA’s Science Mission Directorate in Washington. Science operations are conducted at the Spitzer Science Center at the California Institute of Technology in Pasadena. Spacecraft operations are based at Lockheed Martin Space Systems Company in Littleton, Colorado. Data are archived at the Infrared Science Archive housed at the Infrared Processing and Analysis Center at Caltech. Caltech manages JPL for NASA.

This artist’s concept shows the immediate aftermath of a large asteroid impact around NGC 2547-ID8, a 35-million-year-old sun-like star. NASA’s Spitzer Space Telescope witnessed a giant surge in dust around the star, likely the result of two asteroids colliding.   Image Credit: NASA/JPL-Caltech


 

Building Planets Through Collisions

Planets, including those like our own Earth, form from epic collisions between asteroids and even bigger bodies, called proto-planets. Sometimes the colliding bodies are ground to dust, and sometimes they stick together to ultimately form larger, mature planets.

This artist’s conception shows one such smash-up, the evidence for which was collected by NASA’s Spitzer Space Telescope. Spitzer’s infrared vision detected a huge eruption around the star NGC 2547-ID8 between August 2012 and 2013. Scientists think the dust was kicked up by a massive collision between two large asteroids. They say the smashup took place in the star’s “terrestrial zone,” the region around stars where rocky planets like Earth take shape.

NGC 2547-ID8 is a sun-like star located about 1,200 light-years from Earth in the constellation Vela. It is about 35 million years old, the same age our young sun was when its rocky planets were finally assembled via massive collisions — including the giant impact on proto-Earth that led to the formation of the moon. The recent impact witnessed by Spitzer may be a sign of similar terrestrial planet building. Near-real-time studies like these help astronomers understand how the chaotic process works.

 

SPARKS FLY AS NASA PUSHES THE LIMITS OF 3-D PRINTING TECHNOLOGY

From FMS Global News Desk of Jeanne Hambleton  NASA  August 28, 2014

NASA has successfully tested the most complex rocket engine parts ever designed by the agency and printed with additive manufacturing, or 3-D printing, on a test stand at NASA’s Marshall Space Flight Center in Huntsville, Alabama.

NASA engineers pushed the limits of technology by designing a rocket engine injector, a highly complex part that sends propellant into the engine, with design features that took advantage of 3-D printing. To make the parts, the design was entered into the 3-D printer’s computer. The printer then built each part by layering metal powder and fusing it together with a laser, a process known as selective laser melting.

The additive manufacturing process allowed rocket designers to create an injector with 40 individual spray elements, all printed as a single component rather than manufactured individually. The part was similar in size to injectors that power small rocket engines and similar in design to injectors for large engines, such as the RS-25 engine that will power NASA’s Space Launch System (SLS) rocket, the heavy-lift, exploration class rocket under development to take humans beyond Earth orbit and to Mars.

“We wanted to go a step beyond just testing an injector and demonstrate how 3-D printing could revolutionize rocket designs for increased system performance,” said Chris Singer, director of Marshall’s Engineering Directorate. “The parts performed exceptionally well during the tests.”

Using traditional manufacturing methods, 163 individual parts would be made and then assembled. But with 3-D printing technology, only two parts were required, saving time and money and allowing engineers to build parts that enhance rocket engine performance and are less prone to failure.

Two rocket injectors were tested for five seconds each, producing 20,000 pounds of thrust. Designers created complex geometric flow patterns that allowed oxygen and hydrogen to swirl together before combusting at 1,400 pounds per square inch and temperatures up to 6,000 degrees Fahrenheit. NASA engineers used this opportunity to work with two separate companies — Solid Concepts in Valencia, California, and Directed Manufacturing in Austin, Texas. Each company printed one injector.
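For readers who work in SI units, the test figures quoted above are easy to convert. A small sketch, using the thrust, pressure, and temperature values from this article and standard conversion factors:

```python
# Convert the reported hot-fire test figures to SI units.
# Values (20,000 lbf thrust, 1,400 psi, 6,000 °F) are from the article;
# the conversion factors are the standard exact definitions.

LBF_TO_N = 4.4482216152605   # pounds-force to newtons
PSI_TO_PA = 6894.757293168   # pounds per square inch to pascals

def f_to_c(f):
    """Convert degrees Fahrenheit to degrees Celsius."""
    return (f - 32.0) * 5.0 / 9.0

thrust_n = 20_000 * LBF_TO_N      # thrust in newtons
pressure_pa = 1_400 * PSI_TO_PA   # combustion pressure in pascals
temp_c = f_to_c(6_000)            # combustion temperature in Celsius

print(f"Thrust:   {thrust_n / 1000:.1f} kN")     # ~89.0 kN
print(f"Pressure: {pressure_pa / 1e6:.2f} MPa")  # ~9.65 MPa
print(f"Temp:     {temp_c:.0f} °C")              # ~3316 °C
```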

“One of our goals is to collaborate with a variety of companies and establish standards for this new manufacturing process,” explained Marshall propulsion engineer Jason Turpin. “We are working with industry to learn how to take advantage of additive manufacturing in every stage of space hardware construction from design to operations in space. We are applying everything we learn about making rocket engine components to the Space Launch System and other space hardware.”

Additive manufacturing not only helped engineers build and test a rocket injector with a unique design, but it also enabled them to test faster and smarter. Using Marshall’s in-house capability to design and produce small 3-D printed parts quickly, the propulsion and materials laboratories can work together to apply quick modifications to the test stand or the rocket component.

“Having an in-house additive manufacturing capability allows us to look at test data, modify parts or the test stand based on the data, implement changes quickly and get back to testing,” said Nicholas Case, a propulsion engineer leading the testing. “This speeds up the whole design, development and testing process and allows us to try innovative designs with less risk and cost to projects.”

Marshall engineers have tested increasingly complex injectors, rocket nozzles and other components with the goal of reducing the manufacturing complexity and the time and cost of building and assembling future engines. Additive manufacturing is a key technology for enhancing rocket designs and enabling missions into deep space.

3-D Printed Rocket Injector Roars to Life: The most complex 3-D printed rocket injector ever built by NASA roars to life on the test stand at NASA’s Marshall Space Flight Center in Huntsville, Alabama.

Engineers just completed hot-fire testing with two 3-D printed rocket injectors. Certain features of the rocket components were designed to increase rocket engine performance. The injector mixed liquid oxygen and gaseous hydrogen together, which combusted at temperatures over 6,000 degrees Fahrenheit, producing more than 20,000 pounds of thrust.

Image Credit: NASA photo/David Olive


 

WALKING FISH REVEAL HOW OUR ANCESTORS EVOLVED ONTO LAND

From the FMS Global News Desk of Jeanne Hambleton. Embargoed: 27-Aug-2014. Citation: Nature. Source Newsroom: McGill University

About 400 million years ago a group of fish began exploring land and evolved into tetrapods – today’s amphibians, reptiles, birds, and mammals.

But just how these ancient fish used their fishy bodies and fins in a terrestrial environment and what evolutionary processes were at play remain scientific mysteries.

Researchers at McGill University, publishing in the journal Nature, turned to a living fish called Polypterus to help show what might have happened when fish first attempted to walk out of the water.

Polypterus is an African fish that can breathe air, ‘walk’ on land, and looks much like those ancient fishes that evolved into tetrapods. The team of researchers raised juvenile Polypterus on land for nearly a year, with the aim of revealing how these ‘terrestrialized’ fish looked and moved differently.

“Stressful environmental conditions can often reveal otherwise cryptic anatomical and behavioural variation, a form of developmental plasticity”, says Emily Standen, a former McGill post-doctoral student who led the project, now at the University of Ottawa.

“We wanted to use this mechanism to see what new anatomies and behaviours we could trigger in these fish and see if they match what we know of the fossil record.”

Remarkable anatomical changes
The fish showed significant anatomical and behavioural changes. The terrestrialized fish walked more effectively, placing their fins closer to their bodies, lifting their heads higher, and slipping less than fish raised in water.

“Anatomically, their pectoral skeleton changed to become more elongate with stronger attachments across their chest, possibly to increase support during walking, and a reduced contact with the skull to potentially allow greater head/neck motion,” says Trina Du, a McGill Ph.D. student and study collaborator.

“Because many of the anatomical changes mirror the fossil record, we can hypothesize that the behavioural changes we see also reflect what may have occurred when fossil fish first walked with their fins on land”, says Hans Larsson, Canada Research Chair in Macroevolution at McGill and an Associate Professor at the Redpath Museum.

Unique experiment
The terrestrialized Polypterus experiment is unique and provides new ideas for how fossil fishes may have used their fins in a terrestrial environment and what evolutionary processes were at play.

Larsson adds, “This is the first example we know of that demonstrates developmental plasticity may have facilitated a large-scale evolutionary transition, by first accessing new anatomies and behaviours that could later be genetically fixed by natural selection”.

The study was conducted by Emily Standen of the University of Ottawa and by Hans Larsson and Trina Du of McGill University.

This study was supported by the Canada Research Chairs Program, the Natural Sciences and Engineering Research Council of Canada (NSERC) and a Tomlinson Post-doctoral Fellowship.


Image: Polypterus senegalus swimming, by Antoine Morin

 See you soon Jeanne

 

 


THE SCIENCE OF BEER AND COFFEE ACCORDING TO A UAB CHEMIST

From the FMS Global News Desk of Jeanne Hambleton Released: 29-Aug-2014
Source Newsroom: University of Alabama at Birmingham

 

Newswise — BIRMINGHAM, Ala. – University of Alabama at Birmingham professor Tracy Hamilton, Ph.D., is applying his chemistry expertise to two popular beverages: beer and coffee.

An associate professor in the UAB Department of Chemistry, Hamilton lectures for the American Chemical Society around the country about how the chemical makeup of these drinks impacts the characteristics of the products in final form.

“It is a really popular topic,” Hamilton said. The scientists “love talking about beer and, of course, drinking it.”

The theoretical chemist, who typically researches quantum mechanics, discovered a passion for zymurgy, the science of fermentation. His interest in flavorful drinks expanded to coffee when a member of his brew club began roasting beans. As with beer, Hamilton studied the chemical makeup of the beverage and began giving lectures on how coffee is cultivated, roasted and brewed.

Even career chemists learn something in these lectures, Hamilton says — such as the fact that the flavor and aromatic compounds in beer and coffee are also present in other foods. Damascenone, for instance, which offers baked apple notes, is marketed as a flavoring agent; it is a product of the Maillard reaction, the same chemical reaction that browns steaks and bread crusts as they heat.

Flavors in beer come from a surprising number of sources. One is the variety of sugar-type compounds in the beverage, which give brews their sweetness. Other than the sugars, much of a particular brew’s complexity of tastes comes from the hops used in its production.

The essential oils in hops can “contribute a lot of flavors like citrus, grapefruit and orange,” Hamilton said. “They’re what you smell.”

Compounds such as geraniol and citral are extremely common in beer, giving it a geranium-like or citrusy smell, respectively.

Some flavor and aroma compounds can be less savory. If a beer does not ferment long enough or correctly, it may taste like Granny Smith apples, thanks to acetaldehyde. This compound, produced as an intermediate step in fermentation, is not pleasant, Hamilton says. Another undesirable compound is trans-2-nonenal, which tastes like damp paper.

However, some styles of beer do not have much hop flavor at all and derive a lot of their flavors from the brewing yeast.

“In ales, there are a lot of ester compounds that come across as pretty fruity,” Hamilton said. “It is all about the balance. You do not want it to be overwhelming.”

In contrast, much of the flavor in coffee comes from pyrazines. These small aromatic compounds form much of the initial flavor of freshly brewed coffee.

“That is one reason coffee is so much better when it is fresh,” Hamilton said.

Other aspects of coffee’s flavor come from the sugars that are broken down by roasting.

At the end of the day, the most desirable trait he looks for in a beer is that “every sip is the same,” he said.

About UAB
Known for its innovative and interdisciplinary approach to education at both the graduate and undergraduate levels, the University of Alabama at Birmingham is an internationally renowned research university and academic medical center and the state of Alabama’s largest employer, with some 23,000 employees and an economic impact exceeding $5 billion annually on the state. The five pillars of UAB’s mission include education, research, patient care, community service and economic development. UAB: Knowledge that will change your world

 

SPEAKING TWO LANGUAGES BENEFITS THE AGING BRAIN

From the FMS Global News Desk of Jeanne Hambleton. Source Newsroom: Wiley. Citation: “Does Bilingualism Influence Cognitive Aging?” Thomas H. Bak, Jack J. Nissan, Michael M. Allerhand and Ian J. Deary, Annals of Neurology

 

New research reveals that bilingualism has a positive effect on cognition later in life. Findings published in Annals of Neurology, a journal of the American Neurological Association and Child Neurology Society, show that individuals who speak two or more languages, even those who acquired the second language in adulthood, may slow down cognitive decline from aging.

Bilingualism is thought to improve cognition and delay dementia in older adults. While prior research has investigated the impact of learning more than one language, ruling out “reverse causality” has proven difficult. The crucial question is whether people improve their cognitive functions through learning new languages or whether those with better baseline cognitive functions are more likely to become bilingual.

“Our study is the first to examine whether learning a second language impacts cognitive performance later in life while controlling for childhood intelligence,” says lead author Dr. Thomas Bak from the Centre for Cognitive Aging and Cognitive Epidemiology at the University of Edinburgh.

For the current study, researchers relied on data from the Lothian Birth Cohort 1936, comprising 835 native speakers of English who were born and living in the area of Edinburgh, Scotland. The participants were given an intelligence test in 1947 at age 11 and retested in their early 70s, between 2008 and 2010. Two hundred sixty-two participants reported being able to communicate in at least one language other than English. Of those, 195 learned the second language before age 18 and 65 thereafter.

Findings indicate that those who spoke two or more languages had significantly better cognitive abilities compared to what would be expected from their baseline. The strongest effects were seen in general intelligence and reading. The effects were present in those who acquired their second language early as well as late.
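The logic of "better than expected from baseline" can be made concrete with a small simulation. This is not the study's actual analysis or data (the numbers below are made up, including the assumed bilingual "boost"); it only illustrates the idea of regressing the late-life score on the childhood score and then comparing the two groups' residuals:

```python
import random
import statistics

# Simulated illustration ONLY -- not the Lothian Birth Cohort data.
# Idea: predict the late-life score from the age-11 score, then ask
# whether bilinguals sit above that prediction (positive residuals).
random.seed(42)
n = 835
iq_age11 = [random.gauss(100, 15) for _ in range(n)]
bilingual = [random.random() < 262 / 835 for _ in range(n)]  # ~262 of 835
# Assumed data-generating process: late-life score tracks childhood IQ,
# plus a small bilingual advantage added purely for illustration.
score_70s = [0.7 * iq + random.gauss(0, 8) + (3.0 if b else 0.0)
             for iq, b in zip(iq_age11, bilingual)]

# Ordinary least-squares fit of score_70s on iq_age11:
mx = statistics.fmean(iq_age11)
my = statistics.fmean(score_70s)
slope = (sum((x - mx) * (y - my) for x, y in zip(iq_age11, score_70s))
         / sum((x - mx) ** 2 for x in iq_age11))
intercept = my - slope * mx
residuals = [y - (slope * x + intercept)
             for x, y in zip(iq_age11, score_70s)]

# Do bilinguals outperform their predicted (baseline-adjusted) level?
gap = (statistics.fmean([r for r, b in zip(residuals, bilingual) if b])
       - statistics.fmean([r for r, b in zip(residuals, bilingual) if not b]))
print(f"Bilinguals score about {gap:.1f} points above baseline expectation")
```

Because the residuals already account for childhood intelligence, a positive gap cannot be explained by smarter children being more likely to learn languages, which is precisely the reverse-causality concern the study addresses.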

The Lothian Birth Cohort 1936 forms part of the Disconnected Mind project at the University of Edinburgh, funded by Age UK. The work was undertaken by the University of Edinburgh Centre for Cognitive Ageing and Cognitive Epidemiology, part of the cross-council Lifelong Health and Wellbeing Initiative (MR/K026992/1), and was made possible by funding from the Biotechnology and Biological Sciences Research Council (BBSRC) and the Medical Research Council (MRC).

“The Lothian Birth Cohort offers a unique opportunity to study the interaction between bilingualism and cognitive aging, taking into account the cognitive abilities predating the acquisition of a second language” concludes Dr. Bak.

“These findings are of considerable practical relevance. Millions of people around the world acquire their second language later in life. Our study shows that bilingualism, even when acquired in adulthood, may benefit the aging brain.”

After reviewing the study, Dr. Alvaro Pascual-Leone, an Associate Editor for Annals of Neurology and Professor of Medicine at Harvard Medical School in Boston, Mass. said, “The epidemiological study by Dr. Bak and colleagues provides an important first step in understanding the impact of learning a second language and the aging brain. This research paves the way for future causal studies of bilingualism and cognitive decline prevention.”

About the Journal

Annals of Neurology, the official journal of the American Neurological Association and the Child Neurology Society, publishes articles of broad interest with potential for high impact in understanding the mechanisms and treatment of diseases of the human nervous system. All areas of clinical and basic neuroscience, including new technologies, cellular and molecular neurobiology, population sciences, and studies of behavior, addiction, and psychiatric diseases are of interest to the journal. The journal is published by Wiley on behalf of the American Neurological Association and Child Neurology Society.

About Wiley

Wiley is a global provider of content-enabled solutions that improve outcomes in research, education, and professional practice. Our core businesses produce scientific, technical, medical, and scholarly journals, reference works, books, database services, and advertising; professional books, subscription products, certification and training services and online applications; and education content and services including integrated online teaching and learning resources for undergraduate and graduate students and lifelong learners.

Founded in 1807, John Wiley & Sons, Inc., has been a valued source of information and understanding for more than 200 years, helping people around the world meet their needs and fulfill their aspirations. Wiley and its acquired companies have published the works of more than 450 Nobel laureates in all categories: Literature, Economics, Physiology or Medicine, Physics, Chemistry, and Peace. Wiley’s global headquarters are located in Hoboken, New Jersey, with operations in the U.S., Europe, Asia, Canada, and Australia.

 

THE CHEMISTRY BEHIND BBQ

From the FMS Global News Desk of Jeanne Hambleton 25-Aug-2014. Source Newsroom: Institute of Food Technologists (IFT)

 

Newswise — It is that time of the year again when people are starting to fire up the grill for tailgating season! IFT spokesperson Guy Crosby, PhD, CFS, provides insight into the food science behind BBQ. Crosby addresses how a marinade works to keep your meat tender, how smoking infuses new flavors into meat, searing and more.

How does using a marinade make meat more tender?

There are some misconceptions around this topic; typically, only salt or salty ingredients such as soy sauce make a real difference. It depends on the type of meat and the muscle structure. The protein that forms when the salt breaks the muscle down helps to retain moisture and makes the tissue a little looser.

Acid-based marinades such as lime, lemon juice or vinegar do not have a huge effect. They will help break down some connective tissue and flavor the meat, but it is really only on the surface.

Does searing a meat before cooking help keep the juices inside?

Searing does not trap or keep moisture inside a piece of meat; it is an old kitchen myth.

Why does a piece of meat need to rest before cutting it?

When you cook meat the muscle fibers and the proteins begin to shrink and squeeze out moisture. If you immediately slice a piece of meat, the moisture that has been squeezed out of the muscle fibers will run out. But if you let it sit for 15 to 20 minutes depending on the size and thickness of the meat, the fibers start to soak back up some of that moisture.

What is the Maillard Reaction?

In 1912, a French scientist discovered that certain proteins and amino acids react with certain kinds of sugars and cause browning. When meat is browned, it forms hundreds of very potent flavor molecules that affect its aroma and taste.

 Why cook low and slow?

The lower the cooking temperature, the less the fibers shrink and the less tough the meat becomes, because it does not lose as much moisture. Typically, tough cuts of meat are cooked this way to keep the meat moist. Cooking the meat slowly breaks down tough connective tissue to form gelatin, which binds moisture. The amount of fat also helps because it breaks up the protein, lubricates the meat and makes it more tender.

When smoking a piece of meat, how does the wood flavor get infused into it?

The oxygen breaks down the lignin in wood and releases a smoky aroma that sticks to the moist surface of the meat, flavoring it.

What is an easy thickening agent to use at home to thicken a BBQ sauce?

The most common one would be cornstarch. The best way is to add cornstarch to room temperature water first, mix well, and then add the combination to the sauce and heat. Flour is another option.

This fact sheet also offers food safety tips for tailgating and other outdoor dining experiences!

Source: Guy Crosby, PhD, CFS, IFT Spokesperson,
Additional Sources: Rosemary Extract May Prevent Formation of Carcinogens on Beef, Journal of Food Science
IFT Food Facts: Outdoor Cooking Food Safety

 

 


SINGLE ANIMAL TO HUMAN TRANSMISSION EVENT RESPONSIBLE FOR 2014 EBOLA OUTBREAK

NIH-funded scientist uses latest genomic technology to make discovery

From the FMS Global News Desk of Jeanne Hambleton   Immediate Release: Friday, August 29, 2014

Scientists used advanced genomic sequencing technology to identify a single point of infection from an animal reservoir to a human in the current Ebola outbreak in West Africa. This research has also revealed the dynamics of how the Ebola virus has been transmitted from human to human, and traces how the genetic code of the virus is changing over time to adapt to human hosts. Pardis Sabeti, M.D., Ph.D., a 2009 National Institutes of Health Director’s New Innovator awardee, and her team carried out the research.

“Dr. Sabeti’s research shows the power of using genomic analysis to track emerging viral outbreaks,” said NIH Director Francis S. Collins, M.D., Ph.D.

“This ability produces valuable information that can help inform public health decisions and actions.”

The 2014 Ebola outbreak is now the largest outbreak in history, with current estimates of 2,473 infections and 1,350 deaths since it began in late December 2013, according to the World Health Organization. This outbreak is also the first in West Africa and the first to affect urban areas. There are no approved drugs for Ebola virus disease, though prompt diagnosis and aggressive supportive care can improve survival. The disease is characterized by high fever, headache, body aches, intense weakness, stomach pain, and lack of appetite. This is followed by vomiting, diarrhea, rash, impaired kidney and liver function and, in some cases, internal and external bleeding.

To better understand why this outbreak is larger than previous outbreaks, Dr. Sabeti, senior associate member of the Broad Institute, Cambridge, Massachusetts, led an extensive analysis of the genetic makeup of Ebola samples from patients living in affected regions. Joined by an international team of scientists, Dr. Sabeti used advanced technology to analyze the genetics of the Ebola samples extremely rapidly and with high levels of accuracy. Using this technology, the researchers pinpointed a single late 2013 introduction from an unspecified animal reservoir into humans.

Their study showed that the strain responsible for the West African outbreak separated from a closely related strain found in Central Africa as early as 2004, indicating movement from Central to West Africa over the span of a decade. Studying RNA changes occurring over the span of the outbreak suggests that the first human infection of the outbreak was followed by exclusive human to human transmissions.

While analyzing the genetic makeup of the Ebola samples, Dr. Sabeti and colleagues discovered a number of mutations that arose as the outbreak spread. Some of these mutations, termed nonsynonymous mutations, alter the biological state of the virus and may allow it to continually and rapidly adapt to human immune defenses as the outbreak continues. This feature points to the need for improved methods that will allow for close monitoring of changes in the viral genome and the impact on vaccine targets. Such monitoring, called genomic surveillance, can provide important insights into the biology of how the Ebola virus spreads and evolves. It may also allow scientists to develop improved methods to detect infection, and point the way to new and improved drug and vaccines.
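Whether a mutation is nonsynonymous comes down to whether the changed codon still encodes the same amino acid. A minimal sketch of that classification; the codon table below is a small excerpt of the standard genetic code, and the example codons are arbitrary illustrations, not actual positions in the Ebola genome:

```python
# Excerpt of the standard RNA codon table (codon -> amino acid).
CODON_TABLE = {
    "GCU": "Ala", "GCC": "Ala", "GAU": "Asp", "GAC": "Asp",
    "GAA": "Glu", "GAG": "Glu", "AAA": "Lys", "AAG": "Lys",
}

def classify(ref_codon, alt_codon):
    """Label a codon change by whether the encoded amino acid changes."""
    ref_aa = CODON_TABLE[ref_codon]
    alt_aa = CODON_TABLE[alt_codon]
    return "synonymous" if ref_aa == alt_aa else "nonsynonymous"

# GCU -> GCC still encodes Ala: silent at the protein level.
print(classify("GCU", "GCC"))
# GAA -> AAA swaps Glu for Lys: alters the protein, so it can change
# how the virus interacts with immune defenses or vaccine targets.
print(classify("GAA", "AAA"))
```

Genomic surveillance at scale applies this same comparison across every sequenced sample, flagging the nonsynonymous changes for follow-up.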

Dr. Sabeti’s New Innovator Award is designed to support exceptionally creative new investigators conducting innovative and high-impact research, as part of the NIH Common Fund’s High-Risk, High-Reward program. The original focus of her research was on Lassa fever, a related but distinct hemorrhagic disease. When the Ebola outbreak began, she shifted her research focus to address this pressing challenge.

“Dr. Sabeti’s New Innovator Award provided flexibility to quickly adjust her research when the 2014 Ebola outbreak began,” said James M. Anderson, M.D., Ph.D., director of the Division of Program Coordination, Planning and Strategic Initiatives at NIH.

“This exemplifies how the High-Risk, High- Reward program allows researchers to tackle the most challenging and urgent scientific questions.”

The NIH Common Fund supports a series of exceptionally high impact research programs that are broadly relevant to health and disease. Common Fund programs are designed to overcome major research barriers and pursue emerging opportunities for the benefit of the biomedical research community at large. The research products of the Common Fund programs are expected to catalyze disease-specific research supported by the NIH Institutes and Centers. 

About the National Institutes of Health (NIH): NIH, the nation’s medical research agency, includes 27 Institutes and Centers and is a component of the U.S. Department of Health and Human Services. NIH is the primary federal agency conducting and supporting basic, clinical, and translational medical research, and is investigating the causes, treatments, and cures for both common and rare diseases.

NIH…Turning Discovery Into Health®

MEN VIEWED MORE FAVORABLY THAN WOMEN WHEN SEEKING WORK-LIFE BALANCE

From FMS Global News Desk of Jeanne Hambleton Embargoed: 18-Aug-2014
Source Newsroom: American Sociological Association (ASA). Citation: American Sociological Association Annual Meeting, Aug-2014. By Sydney McKinley, ASA Public Information Office.

 

Newswise — SAN FRANCISCO — While some suggest that flexible work arrangements have the potential to reduce workplace inequality, a new study finds these arrangements may exacerbate discrimination based on parental status and gender.

Study author Christin Munsch, an assistant professor of sociology at Furman University, analyzed the reactions both men and women received when making flexible work requests — meaning that they either asked to work from home or to work non-traditional hours.

Among those who made flexible work requests, men who asked to work from home two days a week in order to care for a child were significantly advantaged compared to women who made the same request. Munsch, who will present her research at the 109th Annual Meeting of the American Sociological Association, also found that both men and women who made flexible work requests for childcare related reasons were advantaged compared to those who made the same requests for other reasons.

For her study, Munsch used a sample of 646 people who ranged in age from 18 to 65 and resided in the United States. Participants were shown a transcript and told it was an actual conversation between a human resources representative and an employee. The employee either requested a flexible work arrangement or did not. Among those who requested a flexible work arrangement, the employee either asked to come in early and leave early three days a week, or asked to work from home two days a week. Munsch also varied the gender of the employee and the reason for the request (involving childcare or not). After reading their transcript, participants were asked how likely they would be to grant the request and also to evaluate the employee on several measures, including how likeable, committed, dependable, and dedicated they found him or her.

Among those who read the scenario in which a man requested to work from home for childcare related reasons, 69.7 percent said they would be “likely” or “very likely” to approve the request, compared to 56.7 percent of those who read the scenario in which a woman made the request. Almost a quarter — 24.3 percent — found the man to be “extremely likeable,” compared to only 3 percent who found the woman to be “extremely likeable.” And, only 2.7 percent found the man “not at all” or “not very” committed, yet 15.5 percent found the woman “not at all” or “not very” committed.
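A rough sense of how such percentage gaps are assessed statistically can be had from a two-proportion z-test. The approval rates below are the 69.7 and 56.7 percent figures quoted above, but the per-condition sample sizes are not reported in this summary, so the n = 100 per vignette is a purely hypothetical assumption; the published analysis will have used the actual cell counts.

```python
from math import sqrt, erf

def two_proportion_z(p1, n1, p2, n2):
    """Two-sided z-test for a difference between two proportions."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)        # pooled proportion
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # Two-sided p-value from the standard normal CDF (via erf).
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Approval rates from the article; cell sizes of 100 are HYPOTHETICAL.
z, p = two_proportion_z(0.697, 100, 0.567, 100)
print(f"z = {z:.2f}, p = {p:.3f}")
```

Even with these invented cell sizes, a 13-point gap sits near the conventional significance threshold, which is why the reported differences in likeability and perceived commitment are the more striking numbers.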

“These results demonstrate how cultural notions of parenting influence perceptions of people who request flexible work,” Munsch said.

“Today, we think of women’s responsibilities as including paid labor and domestic obligations, but we still regard breadwinning as men’s primary responsibility and we feel grateful if men contribute in the realm of childcare or to other household tasks.”

Munsch fears that this will be an issue as marriages become more egalitarian.

“For example, in an arrangement where both partners contribute equally at home and in terms of paid labor — men, but not women, would reap workplace advantages,” she said. “In this situation, a move towards gender equality at home would perpetuate gender inequality in the workplace.”

Regarding the findings on those who made flexible work requests for childcare versus non-childcare related reasons, Munsch said that “both men and women who requested to work from home or to work atypical hours to take care of a child were viewed as more respectable, likable, committed, and worthy of a promotion, and their requests were more supported than those who requested flexible work for reasons unrelated to childcare.”

For example, among those who read a scenario in which an employee asked to work from home two days a week for childcare related reasons, 63.5 percent of the respondents said they would be “likely” or “very likely” to grant the request. However, only 40.7 percent of those who read a scenario in which an employee asked to work from home two days a week to reduce his or her commute time and carbon footprint said they would be “likely” or “very likely” to grant the request.

According to Munsch, these findings surprised her.

“I was surprised because so much of the research talks about how parents — and mothers in particular — are discriminated against compared to their childless counterparts,” she said. “When it comes to flexible work, it seems that engaging in childcare is seen as a more legitimate reason than other, non-childcare related reasons, like training for an endurance event or wanting to reduce your carbon footprint.”

While feminists and work-family scholars have championed flexible work options as a way to promote gender equality and as a remedy for work-family conflict, Munsch said that her research “shows that we should be hesitant in assuming this is effective.”

Still, Munsch does not believe employers should eliminate flexible work arrangements, but rather they should be cognizant of their biases and the ways in which they “differentially assess people who use these policies, so as not to perpetuate inequality.”

About the American Sociological Association
The American Sociological Association (www.asanet.org), founded in 1905, is a non-profit membership association dedicated to serving sociologists in their work, advancing sociology as a science and profession, and promoting the contributions to and use of sociology by society.

The paper, “Flexible Work, Flexible Penalties: The Effect of Gender, Childcare, and Type of Request on the Flexibility Bias,” was presented on Monday, Aug. 18 in San Francisco at the American Sociological Association’s 109th Annual Meeting.

 

ROBOT FOLDS ITSELF UP AND WALKS AWAY

Demonstrates the potential for sophisticated machines that build themselves

From The FMS Global News Desk of Jeanne Hambleton   Wyss Institute

August 7, 2014 (BOSTON) — A team of engineers used little more than paper and Shrinky Dinks™ — the classic children’s toy that shrinks when heated — to build a robot that assembles itself into a complex shape in four minutes flat, and crawls away without any human intervention. The advance, described in Science, demonstrates the potential to quickly and cheaply build sophisticated machines that interact with the environment, and to automate much of the design and assembly process. The method draws inspiration from self-assembly in nature, such as the way linear sequences of amino acids fold into complex proteins with sophisticated functions.

“Getting a robot to assemble itself autonomously and actually perform a function has been a milestone we have been chasing for many years,” said senior author Rob Wood, Ph.D., a Core Faculty member at the Wyss Institute for Biologically Inspired Engineering at Harvard University and the Charles River Professor of Engineering and Applied Sciences at Harvard’s School of Engineering and Applied Sciences (SEAS). The team included engineers and computer scientists from the Wyss Institute, SEAS, and the Massachusetts Institute of Technology (MIT).

In addition to expanding the scope of ways one can manufacture robots in general, the advance harbors potential for rather exotic applications as well.

“Imagine a ream of dozens of robotic satellites sandwiched together so that they could be sent up to space and then assemble themselves remotely once they get there—they could take images, collect data, and more,” said lead author Sam Felton, who is pursuing his Ph.D. at SEAS.

The robots are the culmination of a series of advances made by the team over the last few years, including development of a printed robotic inchworm — which still required human involvement while folding itself — and a self-folding lamp that had to be turned on by a person after it self-assembled.

The new robot is the first that builds itself and performs a function without human intervention.

“Here we created a full electromechanical system that was embedded into one flat sheet,” Felton said. The team used computer design tools to inform the optimal design and fold pattern — and after about 40 prototypes, Felton homed in on the one that could fold itself up and walk away. He fabricated the sheet using a solid ink printer, a laser machine, and his hands.

In this video Wyss Institute Core Faculty member Rob Wood, who is also the Charles River Professor of Engineering and Applied Sciences at Harvard’s School of Engineering and Applied Sciences (SEAS), and SEAS Ph.D. student Sam Felton discuss their landmark achievement in robotics — getting a robot to assemble itself and walk away autonomously — as well as their vision for the future of robots that can be manufactured easily and inexpensively.

The refined design only took about two hours to assemble using a method that relies upon the power of origami, the ancient Japanese art whereby a single sheet of paper can be folded into complex structures. The origami-inspired approach enabled the team to avoid the traditional “nuts and bolts” approach to assembling complex machines.

They started with a flat sheet, to which they added two motors, two batteries, and a microcontroller — which acts like the robot’s “brain,” Felton said.

The sheet was a composite of paper and Shrinky Dinks™ — sheets of polystyrene — with a single flexible circuit board in the middle. It also included hinges that were programmed to fold at specific angles. Each hinge contained embedded circuits that produce heat on command from the microcontroller. The heat triggers the composite to self-fold in a series of steps.

When the hinges cool after about four minutes, the polystyrene hardens — making the robot stiff — and the microcontroller then signals the robot to crawl away at a speed of about one-tenth of a mile per hour. The entire event consumed about the amount of energy stored in one AA alkaline battery.

The current robot operates on a timer, waiting about ten seconds after the batteries are installed to begin folding. However, “we could easily modify this such that the folding is triggered by an environmental sensor, such as temperature or pressure,” Felton said.
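The trigger logic Felton describes — a fixed power-up delay today, an environmental sensor tomorrow — amounts to a simple decision rule. The sketch below is purely illustrative (the function names, the sensor interface, and the hinge-sequencing helper are our assumptions, not the team's actual firmware):

```python
# Illustrative sketch of the fold-trigger logic described in the article.
# Timer mode matches the published behavior (fold ~10 s after batteries go in);
# sensor mode shows how an environmental reading could fire the same sequence.
# All names here are hypothetical, not taken from the team's code.

def should_start_folding(elapsed_s, sensor_reading=None, threshold=None):
    """Return True once the self-fold sequence should begin."""
    if sensor_reading is not None and threshold is not None:
        # Environmental trigger, e.g. temperature or pressure crossing a limit.
        return sensor_reading >= threshold
    # Default: the published ten-second timer after power-up.
    return elapsed_s >= 10.0

def fold_sequence(hinges):
    """Heat the hinges in order — the article describes a stepwise fold."""
    return [f"heat hinge {h}" for h in hinges]
```

In timer mode, `should_start_folding(9.0)` is still False while `should_start_folding(10.0)` is True; passing a reading and threshold switches the robot to the sensor-driven variant Felton mentions.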

One of the primary challenges in the process, Felton said, was the propensity for the robots to burn up before they folded properly; each one runs on about ten times the current that typically runs through a light bulb.

“There is a great deal that we can improve based on this foundational step,” said Felton, who plans to experiment with different kinds of shape memory polymers — materials like the polystyrene — that are stronger and require less heat to activate, for example.

The method is complementary to 3D printing, which also holds great promise for quickly and inexpensively manufacturing robotic components but struggles to integrate electrical components; in this specific case, it would have taken much longer to produce a functional prototype.

The long-term dream of this work, Wood said, is to have a facility that everyone could access around the clock in their communities when they might have a need for robotic assistance, from everyday house and porch sweeping to detecting gas leaks in the neighborhood. “You would be able to come in, describe what you need in fairly basic terms, and come back an hour later to get your robotic helper,” Wood said. All told, each robot cost about $100, but only $20 for the body without the motors, batteries, and microcontroller.

“This achievement by Rob and his team changes the way we think about manufacturing in that the machine fabricates itself,” said Wyss Institute Founding Director Don Ingber, M.D., Ph.D. “The days of big, rigid, robots that sit in place and carry out the same repetitive task day in and day out are fading fast.”

This work was funded by the National Science Foundation, the Wyss Institute for Biologically Inspired Engineering at Harvard University, and the Department of Defense, Air Force Office of Scientific Research, National Defense Science and Engineering Graduate (NDSEG) Fellowship.

About the Wyss Institute for Biologically Inspired Engineering at Harvard University
The Wyss Institute for Biologically Inspired Engineering at Harvard University (http://wyss.harvard.edu) uses Nature’s design principles to develop bioinspired materials and devices that will transform medicine and create a more sustainable world. Working as an alliance among all of Harvard’s Schools, and in partnership with Beth Israel Deaconess Medical Center, Brigham and Women’s Hospital, Boston Children’s Hospital, Dana Farber Cancer Institute, Massachusetts General Hospital, the University of Massachusetts Medical School, Spaulding Rehabilitation Hospital, Boston University, Tufts University, and Charité – Universitätsmedizin Berlin and the University of Zurich, the Institute crosses disciplinary and institutional barriers to engage in high-risk research that leads to transformative technological breakthroughs. By emulating Nature’s principles for self-organizing and self-regulating, Wyss researchers are developing innovative new engineering solutions for healthcare, energy, architecture, robotics, and manufacturing. These technologies are translated into commercial products and therapies through collaborations with clinical investigators, corporate alliances, and new start-ups.

About the Harvard School of Engineering and Applied Sciences
The Harvard School of Engineering and Applied Sciences (SEAS) serves as the connector and integrator of Harvard’s teaching and research efforts in engineering, applied sciences, and technology. Through collaboration with researchers from all parts of Harvard, other universities, and corporate and foundational partners, we bring discovery and innovation directly to bear on improving human life and society.

Wow – what next? Back tomorrow. Jeanne


NASA COMPLETES KEY REVIEW OF WORLD’S MOST POWERFUL ROCKET IN SUPPORT OF JOURNEY TO MARS

From the FMS Global News Desk of Jeanne Hambleton August 27, 2014 NASA Gov. Missions

 

Artist concept of NASA’s Space Launch System (SLS) 70-metric-ton configuration launching to space. SLS will be the most powerful rocket ever built for deep space missions, including to an asteroid and ultimately to Mars. Image Credit: NASA/MSFC

NASA officials Wednesday announced they have completed a rigorous review of the Space Launch System (SLS) — the heavy-lift, exploration class rocket under development to take humans beyond Earth orbit and to Mars — and approved the program’s progression from formulation to development, something no other exploration class vehicle has achieved since the agency built the space shuttle.

“We are on a journey of scientific and human exploration that leads to Mars,” said NASA Administrator Charles Bolden. “And we are firmly committed to building the launch vehicle and other supporting systems that will take us on that journey.”

For its first flight test, SLS will be configured for a 70-metric-ton (77-ton) lift capacity and carry an uncrewed Orion spacecraft beyond low-Earth orbit. In its most powerful configuration, SLS will provide an unprecedented lift capability of 130 metric tons (143 tons), which will enable missions even farther into our solar system, including such destinations as an asteroid and Mars.


This artist concept shows NASA’s Space Launch System, or SLS, rolling to a launchpad at Kennedy Space Center at night. SLS will be the most powerful rocket in history, and the flexible, evolvable design of this advanced, heavy-lift launch vehicle will meet a variety of crew and cargo mission needs.  Image Credit: NASA/MSFC

This decision comes after a thorough review known as Key Decision Point C (KDP-C), which provides a development cost baseline for the 70-metric-ton version of the SLS of $7.021 billion from February 2014 through the first launch and a launch readiness schedule based on an initial SLS flight no later than November 2018.

Conservative cost and schedule commitments outlined in the KDP-C align the SLS program with program management best practices that account for potential technical risks and budgetary uncertainty beyond the program’s control.

“Our nation is embarked on an ambitious space exploration program, and we owe it to the American taxpayers to get it right,” said Associate Administrator Robert Lightfoot, who oversaw the review process.

“After rigorous review, we are committing today to a funding level and readiness date that will keep us on track to sending humans to Mars in the 2030s – and we are going to stand behind that commitment.”

“The Space Launch System Program has done exemplary work during the past three years to get us to this point,” said William Gerstenmaier, associate administrator for the Human Explorations and Operations Mission Directorate at NASA Headquarters in Washington.

“We will keep the teams working toward a more ambitious readiness date, but will be ready no later than November 2018.”

The SLS, Orion, and Ground Systems Development and Operations programs each conduct a design review prior to each program’s respective KDP-C, and each program will establish cost and schedule commitments that account for its individual technical requirements.

“We are keeping each part of the program — the rocket, ground systems, and Orion — moving at its best possible speed toward the first integrated test launch,” said Bill Hill, director of Exploration Systems Development at NASA.

“We are on a solid path toward an integrated mission and making progress in all three programs every day.”

“Engineers have made significant technical progress on the rocket and have produced hardware for all elements of the SLS program,” said SLS program manager Todd May.

“The team members deserve an enormous amount of credit for their dedication to building this national asset.”

The program delivered in April the first piece of flight hardware for Orion’s maiden flight, Exploration Flight Test-1 targeted for December. This stage adapter is of the same design that will be used on SLS’s first flight, Exploration Mission-1.

Michoud Assembly Facility in New Orleans has all major tools installed and is producing hardware, including the first pieces of flight hardware for SLS. Sixteen RS-25 engines, enough for four flights, currently are in inventory at Stennis Space Center, in Bay St. Louis, Mississippi, where an engine is already installed and ready for testing this fall. NASA contractor ATK has conducted successful test firings of the five-segment solid rocket boosters and is preparing for the first qualification motor test.

SLS will be the world’s most capable rocket. In addition to opening new frontiers for explorers traveling aboard the Orion capsule, the SLS may also offer benefits for science missions that require its use and cannot be flown on commercial rockets.

The next phase of development for SLS is the Critical Design Review, a programmatic gate that reaffirms the agency’s confidence in the program planning and technical risk posture.

 

NASA TELESCOPES UNCOVER EARLY CONSTRUCTION OF GIANT GALAXY

From the FMS Global News Desk of Jeanne Hambleton Posted August 27, 2014.            NASA GOV.


Artist’s impression of a firestorm of star birth deep inside the core of a young, growing elliptical galaxy. Image Credit: NASA, Z. Levay, G. Bacon (STScI)

Astronomers have for the first time caught a glimpse of the earliest stages of massive galaxy construction. The building site, dubbed “Sparky,” is a dense galactic core blazing with the light of millions of newborn stars that are forming at a ferocious rate.

The discovery was made possible through combined observations from NASA’s Hubble and Spitzer space telescopes, the W.M. Keck Observatory in Mauna Kea, Hawaii, and the European Space Agency’s Herschel space observatory, in which NASA plays an important role.

A fully developed elliptical galaxy is a gas-deficient gathering of ancient stars theorized to develop from the inside out, with a compact core marking its beginnings. Because the galactic core is so far away, the light of the forming galaxy that is observable from Earth was actually created 11 billion years ago, just 3 billion years after the Big Bang.

Although only a fraction of the size of the Milky Way, the tiny powerhouse galactic core already contains about twice as many stars as our own galaxy, all crammed into a region only 6,000 light-years across. The Milky Way is about 100,000 light-years across.

“We really had not seen a formation process that could create things that are this dense,” explained Erica Nelson of Yale University in New Haven, Connecticut, lead author of the study.

“We suspect that this core-formation process is a phenomenon unique to the early universe because the early universe, as a whole, was more compact. Today, the universe is so diffuse that it cannot create such objects anymore.”

In addition to determining the galaxy’s size from the Hubble images, the team dug into archival far-infrared images from Spitzer and Herschel. This allowed them to see how fast the galaxy core is creating stars. Sparky produced roughly 300 stars per year, compared to the 10 stars per year produced by our Milky Way.

“They are very extreme environments,” Nelson said. “It is like a medieval cauldron forging stars. There is a lot of turbulence, and it is bubbling. If you were in there, the night sky would be bright with young stars, and there would be a lot of dust, gas, and remnants of exploding stars. To actually see this happening is fascinating.”

Astronomers theorize that this frenzied star birth was sparked by a torrent of gas flowing into the galaxy’s core while it formed deep inside a gravitational well of dark matter, invisible cosmic material that acts as the scaffolding of the universe for galaxy construction.

Observations indicate that the galaxy had been furiously making stars for more than a billion years. It is likely that this frenzy eventually will slow to a stop, and that over the next 10 billion years other smaller galaxies may merge with Sparky, causing it to expand and become a mammoth, sedate elliptical galaxy.

“I think our discovery settles the question of whether this mode of building galaxies actually happened or not,” said team member Pieter van Dokkum of Yale University.

“The question now is, how often did this occur? We suspect there are other galaxies like this that are even fainter in near-infrared wavelengths. We think they will be brighter at longer wavelengths, and so it will really be up to future infrared telescopes such as NASA’s James Webb Space Telescope to find more of these objects.”

The paper appears in the Aug. 27 issue of the journal Nature.

The Hubble Space Telescope is a project of international cooperation between NASA and the European Space Agency. NASA’s Goddard Space Flight Center in Greenbelt, Maryland, manages the telescope. The Space Telescope Science Institute (STScI) in Baltimore conducts Hubble science operations. STScI is operated for NASA by the Association of Universities for Research in Astronomy, Inc., in Washington.

NASA’s Jet Propulsion Laboratory, Pasadena, California, manages the Spitzer Space Telescope mission for NASA’s Science Mission Directorate in Washington. Science operations are conducted at the Spitzer Science Center at the California Institute of Technology in Pasadena. Spacecraft operations are based at Lockheed Martin Space Systems Company, Littleton, Colorado. Data are archived at the Infrared Science Archive housed at the Infrared Processing and Analysis Center at Caltech. Caltech manages JPL for NASA.

 

ETA CARINAE: OUR NEIGHBORING SUPERSTARS

From the FMS Global News Desk of Jeanne Hambleton NASA Gov. Chandra X-Ray Observatory


 

The Eta Carinae star system does not lack for superlatives. Not only does it contain one of the biggest and brightest stars in our galaxy, weighing at least 90 times the mass of the sun, it is also extremely volatile and is expected to have at least one supernova explosion in the future.

As one of the first objects observed by NASA’s Chandra X-ray Observatory after its launch some 15 years ago, this double star system continues to reveal new clues about its nature through the X-rays it generates.

Astronomers reported extremely volatile behavior from Eta Carinae in the 19th century, when it became very bright for two decades, outshining nearly every star in the entire sky. This event became known as the “Great Eruption.”

Data from modern telescopes reveal that Eta Carinae threw off about ten times the sun’s mass during that time. Surprisingly, the star survived this tumultuous expulsion of material, adding “extremely hardy” to its list of attributes.

Today, astronomers are trying to learn more about the two stars in the Eta Carinae system and how they interact with each other. The heavier of the two stars is quickly losing mass through a wind streaming away from its surface at over a million miles per hour. While not the giant purge of the Great Eruption, this star is still losing mass at a very high rate that will add up to the sun’s mass in about a millennium.

Though smaller than its partner, the companion star in Eta Carinae is also massive, weighing in at about 30 times the mass of the sun. It is losing matter at a rate that is about a hundred times lower than its partner, but still a prodigious weight loss compared to most other stars. The companion star beats the bigger star in wind speed, with its wind clocking in almost ten times faster.

When these two speedy and powerful winds collide, they form a bow shock – similar to the sonic boom from a supersonic airplane – that then heats the gas between the stars. The temperature of the gas reaches about ten million degrees, producing X-rays that Chandra detects.

The Chandra image of Eta Carinae shows low energy X-rays in red, medium energy X-rays in green, and high energy X-rays in blue. Most of the emission comes from low and high energy X-rays. The blue point source is generated by the colliding winds, and the diffuse blue emission is produced when the material that was purged during the Great Eruption reflects these X-rays. The low energy X-rays further out show where the winds from the two stars, or perhaps material from the Great Eruption, are striking surrounding material. This surrounding material might consist of gas that was ejected before the Great Eruption.

An interesting feature of the Eta Carinae system is that the two stars travel around each other along highly elliptical paths during their five-and-a-half-year long orbit. Depending on where each star is on its oval-shaped trajectory, the distance between the two stars changes by a factor of twenty. These oval-shaped trajectories give astronomers a chance to study what happens to the winds from these stars when they collide at different distances from one another.
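That factor-of-twenty swing in separation pins down how stretched the orbit is: for a Keplerian ellipse, the ratio of the widest to the closest separation is (1 + e)/(1 − e), where e is the eccentricity. As a quick back-of-envelope check (our arithmetic, not a figure from the article):

```python
# Illustrative check: what orbital eccentricity gives a 20x separation ratio?
# For a Keplerian ellipse, r_apastron / r_periastron = (1 + e) / (1 - e).
ratio = 20.0
e = (ratio - 1.0) / (ratio + 1.0)  # solve (1 + e) / (1 - e) = ratio for e
print(round(e, 3))  # ~0.905 — a highly elongated orbit
```

An eccentricity near 0.9 is consistent with the “highly elliptical paths” the article describes, and helps explain why the wind-collision physics changes so dramatically between apastron and periastron.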

Throughout most of the system’s orbit, the X-rays are stronger at the apex, the region where the winds collide head-on. However, when the two stars are at their closest during their orbit (a point that astronomers call “periastron”), the X-ray emission dips unexpectedly.

To understand the cause of this dip, astronomers observed Eta Carinae with Chandra at periastron in early 2009. The results provided the first detailed picture of X-ray emission from the colliding winds in Eta Carinae. The study suggests that part of the reason for the dip at periastron is that X-rays from the apex are blocked by the dense wind from the more massive star in Eta Carinae, or perhaps by the surface of the star itself.

Another factor responsible for the X-ray dip is that the shock wave appears to be disrupted near periastron, possibly because of faster cooling of the gas due to increased density, and/or a decrease in the strength of the companion star’s wind because of extra ultraviolet radiation from the massive star reaching it. Researchers are hoping that Chandra observations of the latest periastron in August 2014 will help them determine the true explanation.

These results were published in the April 1, 2014 issue of The Astrophysical Journal and are available online. The first author of the paper is Kenji Hamaguchi of Goddard Space Flight Center in Greenbelt, MD, and his co-authors are Michael Corcoran of Goddard Space Flight Center (GSFC); Christopher Russell of University of Delaware in Newark, DE; A. Pollock from the European Space Agency in Madrid, Spain; Theodore Gull, Mairan Teodoro, and Thomas I. Madura from GSFC; Augusto Damineli from Universidade de Sao Paulo in Sao Paulo, Brazil and Julian Pittard from the University of Leeds in the UK.

NASA’s Marshall Space Flight Center in Huntsville, Alabama, manages the Chandra program for NASA’s Science Mission Directorate in Washington, DC. The Smithsonian Astrophysical Observatory in Cambridge, Massachusetts, controls Chandra’s science and flight operations.

 

MY COMMENTS


Hope you enjoy today’s pictures now I have mastered that. I am, however, seriously considering moving all of the space information to an additional blog which I will update regularly except when I am on holiday in the sun in the not too distant future. I will advise when I go AWOL. I am receiving more information than I can publish and this is allegedly a FMS (Fibromyalgia) Global News site. Sadly I am very conscious those readers are missing out and I am quite likely to get ticked off. If there is a big enough following for space articles I will move. Please send me your comments before I make the decision as obviously it is more work. Back tomorrow Jeanne



POTENTIAL THERAPY FOR THE SUDAN STRAIN OF EBOLA COULD HELP CONTAIN SOME FUTURE OUTBREAK

From the FMS Global News Desk of Jeanne Hambleton August 27, 2014 American Chemical Society ACS Chemical Biology

Scientists report a new potential weapon in the fight against a strain of ebola that is just as deadly as the one currently devastating West Africa.

Ebola is a rare, but deadly disease that exists as five strains, none of which have approved therapies. One of the most lethal strains is the Sudan ebolavirus (SUDV). Although not the strain currently devastating West Africa, SUDV has caused widespread illness, even as recently as 2012.

In a new study appearing in the journal ACS Chemical Biology, researchers now report a possible therapy that could someday help treat patients infected with SUDV.

John Dye, Sachdev Sidhu, Jonathan Lai and colleagues explain that about 50-90 percent of ebola patients die after experiencing the typical symptoms of the disease, which include fever, muscle aches, vomiting and bleeding. Of the five known ebolaviruses, the Zaire (EBOV) and SUDV strains are the most deadly and cause the most recurring outbreaks.

Many studies have focused on EBOV, the culprit of the current epidemic, but much less attention has been placed on SUDV until now. To develop a therapy for SUDV, this research team turned to an antibody that Dye’s group previously reported.

The team’s antibody was directed against SUDV and was made in mice. But the human immune system could potentially recognize that antibody as foreign and ultimately get rid of it, preventing the antibody from treating the disease. To avoid this situation, they wanted to make a “humanized” version of the antibody.

In the newly published work, the team put the ebola-specific part of the mouse antibody onto a human antibody scaffold and made some changes to this molecule. They identified two versions that were able to fend off SUDV in laboratory tests on cells and in specially bred mice.

“These antibodies represent strong immunotherapeutic candidates for the treatment of SUDV infection,” say the researchers.

This research, however, is not expected to help with the current ebola outbreak that, as of mid-August, has killed at least 1,200 people. That is because antibodies that kill off one strain of the virus have not worked against other strains.

The U.S. Food and Drug Administration — which has not yet approved any ebola therapies — did allow two U.S. aid workers infected during the current outbreak to be treated with an experimental drug, which is a cocktail of antibodies specifically targeting EBOV.

NEW STUDY THROWS INTO QUESTION LONG-HELD BELIEF ABOUT DEPRESSION

From the FMS Global News Desk of Jeanne Hambleton 27 August 2014 ACS Chemical Neuroscience American Chemical Society

New evidence puts into doubt the long-standing belief that a deficiency in serotonin — a chemical messenger in the brain — plays a central role in depression. In the journal ACS Chemical Neuroscience, scientists report that mice lacking the ability to make serotonin in their brains (and thus, by conventional wisdom, should have been “depressed”) did not show depression-like symptoms.

Donald Kuhn and colleagues at the John D. Dingell VA Medical Center and Wayne State University School of Medicine note that depression poses a major public health problem. More than 350 million people suffer from it, according to the World Health Organization, and it is the leading cause of disability across the globe.

In the late 1980s, the now well-known antidepressant Prozac was introduced. The drug works mainly by increasing the amounts of one substance in the brain — serotonin. So scientists came to believe that boosting levels of the signaling molecule was the key to solving depression. Based on this idea, many other drugs to treat the condition entered the picture. But now researchers know that 60 to 70 percent of these patients continue to feel depressed, even while taking the drugs. Kuhn’s team set out to study what role, if any, serotonin played in the condition.

To do this, they developed “knockout” mice that lacked the ability to produce serotonin in their brains. The scientists ran a battery of behavioral tests. Interestingly, the mice were compulsive and extremely aggressive, but did not show signs of depression-like symptoms. Another surprising finding is that when put under stress, the knockout mice behaved in the same way most of the normal mice did. Also, a subset of the knockout mice responded therapeutically to antidepressant medications in a similar manner to the normal mice. These findings further suggest that serotonin is not a major player in the condition, and different factors must be involved. These results could dramatically alter how the search for new antidepressants moves forward in the future, the researchers conclude.

The authors acknowledge funding from the Department of Veterans Affairs and the Department of Psychiatry and Behavioral Neurosciences at Wayne State University.

 

WHEN WILL I DIE? HOW I DECIDED WHETHER TO TEST FOR EARLY-ONSET ALZHEIMER’S

From the FMS Global News Desk of Jeanne Hambleton Aug. 19, 2014 Stone Hearth Newsletter – Time Magazine By Matthew Thomas

 

People ask me all the time if I want to find out how and when I am going to die. But that is not exactly how they ask it. What they ask is whether I am going to get tested for the gene associated with early-onset Alzheimer’s disease. It is hard, though, to miss the subtext in the question: How morbidly curious are you? How much terror can you withstand?

I do not blame them. These friends know I am 39 and that my father started showing symptoms of Alzheimer’s in his early fifties (and possibly earlier). They know that after a handful of difficult years my father was diagnosed when I was a freshman in college and that he died less than a decade later. They wonder if I am going to take advantage of the remarkable opportunity science affords us to uncover our genetic destinies and plan accordingly.

Modern life is all about making us forget we are capable of dying. We love to feel in control of our mortality, even if we understand that that control is only an illusion. Alzheimer’s disease is the opposite of modern life. It is the ascendancy of entropy and chaos.

My father’s disease had a devastating effect on our family. It did not just take away our time with him and his with us. It also took away his time with the not yet conceived children who would populate the family in his absence. He would have been in his 70s now, surrounded by three grandchildren through my sister and two through my wife and me. It is painful to know what a resource he would have been for them and how much they have lost. He will live, faintly grasped, if at all, only in stories.

When he was still living, we tried to make the best of the situation. When my sister got married, my mother brought my father’s tux to the nursing home and had the staff dress him in it. After the ceremony, while everyone else headed to the reception, two limos carrying my immediate family took a detour to the nursing home for photos.

When I look at the framed shot of us huddled around my father in his wheelchair, I see how hard my sister is trying to keep her emotions in. She is smiling big, but tears are streaming down her face. We are all smiling hard, though there is no driving off the pain and awkwardness of the moment. Everyone’s looking at the camera except my father, who is gazing vacantly the other way, his mouth hanging open. Moments later we drove to the reception, leaving him behind, feeling terrible for doing so. I wanted him not to understand a thing that was happening in that scene, but you never knew what he knew.

For most of my youth, my father seemed to know everything. A universe of information swirled around in his brain. I could hardly put a question to him that he could not answer. The rare times he came up short, he pulled me into his study, took a book off the shelf, laid it on the desk and stood flipping through it with me. I think sometimes he pretended not to know things just so that we could look them up together.

Once, when I was about 10 and my sister about 14, we were walking with my father on the outskirts of his old neighborhood. He stopped in front of a town house and told us Winston Churchill’s mother was born there.

“The iconic English statesman of the century!” he said. “A mother from Brooklyn!” He gave us a look almost wild with the significance of what he was about to say. “The wit!” he said. “The chutzpah! That was the Brooklyn in him!”

Three decades later, I can still remember the moment, bathed in that ethereal light that we reserve for our happiest memories. Why do I remember it, though? How did such a quotidian moment burrow its way into my consciousness and survive? Was it the juxtaposition of incongruous worlds, England and Brooklyn? I don’t think so. I think it was the joy my father took in sharing his knowledge with us.

My father would have loved my twin children. They are 3 years old and full of vitality and personality. My son is unusually strong for such a skinny kid, and remarkably agile. He climbs whatever is available, with a monkey’s speed. When he sits at the piano and pounds the keys, it sounds as if he is playing a real song. My daughter is a sensitive cuddler who remembers everything.

“Daddy, is this from the hotel we stayed at?” she asked the other day, handing me a pad from a Marriott where we stayed six months ago.

Recently my daughter came into our bed in the early morning, lying between my wife and me, and started in on iguanas.

“Iguanas are baby alligators,” she said, and I chuckled at the powers of observation of a developing mind. “Can iguanas learn to open doors?” she asked, and after I offered the opinion that they could not, I pulled her close, gave her kisses and began to choke up.

Maybe when my twins are older, science will have caught up to this disease. We have the best scientific minds working on the problem of Alzheimer’s.

Much like the search for the cure for cancer, there is a massive payout at the end of the rainbow for anyone who comes up with a solution. If there is anything in the health care system to put one’s faith in, it is that the confluence of genius and capital will, in this case, produce the outcome if the outcome is producible. And I do believe it is producible. But if it is not produced in time, no amount of awareness of my fate, if it is to be my fate, is going to forestall its unfolding on me.

My wife and I have little battles over my forgetfulness. She asked me to fix the kink in the hose that runs from the humidifier in our basement to the French drain. A few days later, she gave up and fixed it herself. We had a grill delivered for our backyard, and the flame kept going out on it as soon as we lit it. I was supposed to call about it the next morning, but I had more or less forgotten that we had bought a grill in the first place when I heard my wife on the phone with the store. These are not terrifying signs in themselves — everyone is a little forgetful occasionally — but they make me pause enough to wonder if the worst is coming.

I am built like my father, I sound like him, and if I have a mutation in one of the three genes linked to early-onset familial Alzheimer’s (APP, PSEN1 or PSEN2), then I will likely develop the disease as he did. These mutations are rare, accounting for only 1% to 5% of all Alzheimer’s cases. But if I inherited one from my father, then I will probably get the disease.

My grandfather — my father’s father — died relatively young of other causes, so there’s no saying whether he would have gotten early-onset Alzheimer’s. No one else in the family had it that we know of. I have as good a chance of getting familial Alzheimer’s as I have of avoiding it. Genetic testing would settle the question for good.

But what would I gain by knowing I was getting Alzheimer’s? I would not gain another day with my family. I would not gain a leg up on planning. My wife and I have taken care of practical considerations. We have wills. My wife has a durable power of attorney that enables her to make decisions on my behalf. Every policy, every asset, is in both our names. We opened college savings accounts for the kids. I am working hard on my next book. How much more could I prepare?

After some deliberation, I have decided not to get genetic testing done. Instead, I am going to try to live every day as if I know that I am dying. The fact is, we are all dying. If I try to wring the most I can out of every moment, if I set aside time every day that my wife and I keep as inviolate as possible, if I give my wife and children quality interactions whenever we are in the same room, if I leave the smartphone on the counter and realize there is no information more important than the information I get in my interactions with my loved ones, then how different is any of that from what I would do if I knew I was getting Alzheimer’s?

Scientific studies suggest that my children are at just the age when they can begin to form lasting memories of their experiences. If I am aware that I am going to be gone someday and I consider it possible that that day will come far sooner than I would like, then I want them to grow up not only knowing their father well but also knowing that they are well loved. I want to get in better shape for them, because I would like them to see what a truly vital father looks like. And I have decided to read to them whenever they ask, if I possibly can. I do not have any memory of my father telling me, “No more books” at bedtime. I will forever picture him with an arm around me, holding a book out before me, showing me the world.

Matthew Thomas is the author of the debut novel ‘We Are Not Ourselves’, out now.

 

Back tomorrow Jeanne

BEYOND DNA: EPIGENETICS PLAYS LARGE ROLE IN BLOOD FORMATION

From FMS Global News Desk of Jeanne Hambleton Released: 11-Aug-2014
Source: Weizmann Institute of Science – Science, August 7, 2014

Newswise — Blood stem cells have the potential to turn into any type of blood cell, whether it be the oxygen-carrying red blood cells, or the immune system’s many types of white blood cells that help fight infection. How exactly is the fate of these stem cells regulated? Preliminary findings from research conducted by scientists from the Weizmann Institute of Science and the Hebrew University are starting to reshape the conventional understanding of the way blood stem cell fate decisions are controlled, thanks to a new technique for epigenetic analysis they have developed. Understanding epigenetic mechanisms of cell fate (changes in gene activity that do not alter the DNA sequence itself, often shaped by environment) could lead to the deciphering of the molecular mechanisms of many diseases, including immunological disorders, anemia, leukemia, and many more. It also lends strong support to findings that environmental factors and lifestyle play a more prominent role in shaping our destiny than previously realized.

The process of differentiation – in which a stem cell becomes a specialized mature cell – is controlled by a cascade of events in which specific genes are turned “on” and “off” in a highly regulated and accurate order. The instructions for this process are contained within the DNA itself in short regulatory sequences. These regulatory regions are normally in a “closed” state, masked by special proteins called histones to ensure against unwarranted activation. Therefore, to access and “activate” the instructions, this DNA mask needs to be “opened” by epigenetic modifications of the histones so it can be read by the necessary machinery.

In a paper published in Science, Dr. Ido Amit and David Lara-Astiaso of the Weizmann Institute’s Department of Immunology, along with Prof. Nir Friedman and Assaf Weiner of the Hebrew University of Jerusalem, charted – for the first time – histone dynamics during blood development. Thanks to the new technique for epigenetic profiling they developed, in which just a handful of cells – as few as 500 – can be sampled and analyzed accurately, they have identified the exact DNA sequences, as well as the various regulatory proteins, that are involved in regulating the process of blood stem cell fate.

Their research has also yielded unexpected results: As many as 50% of these regulatory sequences are established and opened during intermediate stages of cell development. This means that epigenetics is active at stages in which it had been thought that cell destiny was already set. “This changes our whole understanding of the process of blood stem cell fate decisions,” says Lara-Astiaso, “suggesting that the process is more dynamic and flexible than previously thought.”

Although this research was conducted on mouse blood stem cells, the scientists believe that the mechanism may hold true for other types of cells. “This research creates a lot of excitement in the field, as it sets the groundwork to study these regulatory elements in humans,” says Weiner.

Discovering the exact regulatory DNA sequence controlling stem cell fate, as well as understanding its mechanism, holds promise for the future development of diagnostic tools, personalized medicine, potential therapeutic and nutritional interventions, and perhaps even regenerative medicine, in which committed cells could be reprogrammed to their full stem cell potential.

Dr. Ido Amit’s research is supported by the M.D. Moross Institute for Cancer Research; the J&R Center for Scientific Research; the Jeanne and Joseph Nissim Foundation for Life Sciences Research; the Abramson Family Center for Young Scientists; the Wolfson Family Charitable Trust; the Abisch Frenkel Foundation for the Promotion of Life Sciences; the Leona M. and Harry B. Helmsley Charitable Trust; Sam Revusky, Canada; the Florence Blau, Morris Blau and Rose Peterson Fund; the estate of Ernst and Anni Deutsch; the estate of Irwin Mandel; and the estate of David Levinson. Dr. Amit is the incumbent of the Alan and Laraine Fischer Career Development Chair.

The Weizmann Institute of Science in Rehovot, Israel, is one of the world’s top-ranking multidisciplinary research institutions. Noted for its wide-ranging exploration of the natural and exact sciences, the Institute is home to scientists, students, technicians, and supporting staff. Institute research efforts include the search for new ways of fighting disease and hunger, examining leading questions in mathematics and computer science, probing the physics of matter and the universe, creating novel materials, and developing new strategies for protecting the environment.

 

RESEARCHERS IDENTIFY A BRAIN “SWITCHBOARD” IMPORTANT IN ATTENTION AND SLEEP

From the FMS Global News Desk of Jeanne Hambleton  Embargoed: 14-Aug-2014
Source: NYU Langone Medical Center – Citations: Cell

Newswise — New York City, August 14, 2014 – Researchers at NYU Langone Medical Center and elsewhere, using a mouse model, have recorded the activity of individual nerve cells in a small part of the brain that works as a “switchboard,” directing signals coming from the outside world or internal memories. Because human brain disorders such as schizophrenia, autism, and post-traumatic stress disorder typically show disturbances in that switchboard, the investigators say the work suggests new strategies in understanding and treating them.

In a study to be published in the journal Cell online Aug. 14, a team led by Michael Halassa, MD, PhD, assistant professor of psychiatry, neuroscience and physiology, and a member of the NYU Neuroscience Institute, showed how neurons in the thalamic reticular nucleus (TRN) — the so-called switchboard — direct sensory signals such as vision from the outside world, and internal information such as memories, to their appropriate destinations.

“We have never been able to observe as precisely how this structure worked before,” says Dr. Halassa. “This study shows us how information can be routed in the brain, giving us tremendous insight into how it might be broken in psychiatric disorders.”

For the study, researchers used a multi-electrode technique to record the activity of individual neurons in the TRN, a thin layer of nerve cells that covers the thalamus, a structure in the forebrain that relays information to the cerebral cortex, the seat of higher-level functions such as learning and language. TRN cells are known to send inhibitory signals to the thalamus, determining which information is blocked.

The activity of TRN cells, the researchers found, depended on whether the mouse was asleep or awake and alert. TRN cells that controlled sensory input were far more active during sleep, particularly during the periods of sleep when brief bursts of fast-cycling brain waves, called spindles, occur. Sleep spindles, which are associated with blocking sensory input during sleep, are known to be diminished among people with autism and schizophrenia.

Dr. Halassa says the new findings suggest that faulty TRN cells may be disrupting the appropriate filtering of information in these conditions. His group is now exploring this filtering process in animal models of schizophrenia and autism.

In experiments with alert mice, Dr. Halassa’s group found that sensory TRN cells fired very little. This suggested that while these neurons block the flow of external information during sleep, they facilitate the flow of information when an animal is awake and alert.

By contrast, TRN cells that control the flow of internal signals behaved in an opposite fashion, firing very little in sleep. This lowered level of activity, Dr. Halassa suspects, may allow memories to form, which is known to occur during sleep. The thalamus has nerve connections to the hippocampus, which plays an important role in learning and memory.

In a second part of the study, Dr. Halassa’s group employed a technique called optogenetics, which uses light to turn nerve cells on and off, to test whether altering TRN nerve cell firing affected attention behavior in the mice.

In one experiment, mice learned to associate a visual stimulus with food. Well-rested mice took just a second or two to find food when a stimulus was presented, while sleep-deprived mice took much longer. By turning on TRN cells that specifically controlled the visual part of the thalamus, as would happen normally in sleep, the rested mice behaved like they were sleep deprived. On the other hand, when the researchers turned off these TRN cells, sleep-deprived mice quickly found the food.

“With a flick of a light switch, we seemed able to alter the mental status of the mice, changing the speed at which information can travel in the brain,” says Dr. Halassa. Mapping brain circuits and disrupting their pathways will hopefully lead to new treatment targets for a range of neuropsychiatric disorders, he adds.

In addition to Halassa, other NYU Langone researchers involved in this study were Zhe Chen, PhD, and Ralf Wimmer, PhD. Additional research support was provided by Philip Brunetti and Matthew Wilson, PhD, at the Picower Institute for Learning and Memory at the Massachusetts Institute of Technology in Cambridge, Mass.; Shengli Zhao, PhD, and Fan Wang, PhD, at Duke University in Durham, NC; Basilis Zukopoulos, PhD, at Boston University; and Emery Brown, MD, PhD, at Harvard University and the Massachusetts Institute of Technology.

About NYU Langone Medical Center:
NYU Langone Medical Center, a world-class, patient-centered, integrated academic medical center, is one of the nation’s premier centers for excellence in clinical care, biomedical research, and medical education. Located in the heart of Manhattan, NYU Langone is composed of four hospitals — Tisch Hospital, its flagship acute care facility; Rusk Rehabilitation; the Hospital for Joint Diseases, the Medical Center’s dedicated inpatient orthopaedic hospital; and Hassenfeld Children’s Hospital, a comprehensive pediatric hospital supporting a full array of children’s health services across the Medical Center — plus the NYU School of Medicine, which since 1841 has trained thousands of physicians and scientists who have helped to shape the course of medical history. The Medical Center’s tri-fold mission to serve, teach, and discover is achieved 365 days a year through the seamless integration of a culture devoted to excellence in patient care, education, and research.

 

COOL TEMPERATURE ALTERS HUMAN FAT AND METABOLISM

From the FMS Global News Desk of Jeanne Hambleton July 28, 2014 – NIH Research Matters, National Institutes of Health

 

Men exposed to a cool environment overnight for a month had an increase in brown fat with corresponding changes in metabolism.

The finding hints at new ways to alter the body’s energy balance to treat conditions such as obesity and diabetes.

Humans have several types of fat. White fat stores extra energy. Too much white fat, a characteristic of obesity, increases the risk of type 2 diabetes and other diseases.

Brown fat, in contrast, burns chemical energy to create heat and help maintain body temperature. Researchers have previously shown that, in response to cold, white fat cells in both animals and humans take on characteristics of brown fat cells.

A team led by Dr. Francesco S. Celi of Virginia Commonwealth University and Dr. Paul Lee, now at the Garvan Institute of Medical Research in Australia, explored the effects of ambient temperature on brown fat and metabolism. The study was supported in part by NIH’s National Institute of Diabetes and Digestive and Kidney Diseases (NIDDK) and the NIH Clinical Center. Results appeared online on June 22, 2014, in Diabetes.

The researchers had 5 healthy men, average age 21 years, reside for 4 months in a clinical research unit in the NIH Clinical Center in Bethesda, Maryland. The men engaged in regular activities during the day and then returned to their private room each evening. The temperature of the room was set to 24 °C (75 °F) during the first month, 19 °C (66 °F) the second month, 24 °C again for the third month, and 27 °C (81 °F) the remaining month.

The participants were exposed to the temperature for at least 10 hours each night. They wore standard hospital clothing and had bed sheets only. All meals were provided, with calorie and nutrient content carefully controlled and all consumption monitored. At the end of each month, the men underwent extensive evaluations, including energy expenditure testing, muscle and fat biopsies, and PET/CT scanning of an area of the neck and upper back region to measure brown fat volume and activity.

After a month of exposure to mild cold, the participants had a 42% increase in brown fat volume and a 10% increase in fat metabolic activity. These alterations returned to near baseline during the following month of neutral temperature, and then were completely reversed during the final month of warm exposure. All the changes occurred independently of seasonal changes.

The increase in brown fat following a month of cold exposure was accompanied by improved insulin sensitivity after a meal during which volunteers were exposed to mild cold. Prolonged exposure to mild cold also resulted in significant changes in metabolic hormones such as leptin and adiponectin. There were no changes in body composition or calorie intake.

The findings suggest that humans may acclimate to cool temperature by increasing brown fat, which in turn may lead to improvements in glucose metabolism. These changes can be dampened or reversed following exposure to warmer temperatures.

“The big unknown until this study was whether or not we could actually manipulate brown fat to grow and shrink in a human being,” Lee says. “The improvement in insulin sensitivity accompanying brown fat gain may open new avenues in the treatment of impaired glucose metabolism.”

—by Carol Torgan, Ph.D.


HOW EBOLA IS TRANSMITTED

From the FMS Global News Desk of Jeanne Hambleton Posted on August 23, 2014 – By Stone Hearth News

Boston, MA – Although the Centers for Disease Control and Prevention (CDC) reports no known cases of Ebola transmission in the United States, a Harvard School of Public Health (HSPH)/SSRS poll released today (August 21, 2014) shows that four in ten (39%) adults in the U.S. are concerned that there will be a large outbreak in the U.S., and a quarter (26%) are concerned that they or someone in their immediate family may get sick with Ebola over the next year.

The nationally representative poll of 1,025 adults was conducted August 13-17, 2014 by researchers at HSPH and SSRS, an independent research company. The margin of error for total respondents is +/-3.6 percentage points at the 95% confidence level.
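As a rough sanity check on that figure, the quoted margin of error is close to what the textbook simple-random-sample formula gives for 1,025 respondents; the published ±3.6 points is a little larger, presumably because it also reflects the survey’s design effect, which the simple formula below omits. This is a sketch of the standard formula, not the pollster’s actual calculation:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Half-width of an approximate 95% confidence interval for a
    proportion p estimated from a simple random sample of size n.
    The worst case (widest interval) occurs at p = 0.5."""
    return z * math.sqrt(p * (1.0 - p) / n)

# 1,025 respondents, worst-case p = 0.5
moe = margin_of_error(1025)
print(f"+/- {moe * 100:.1f} percentage points")  # prints +/- 3.1
```

The gap between the ±3.1 points from this formula and the reported ±3.6 is consistent with a modest design effect from weighting the sample to national demographics.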

Ebola: a brief insight into what it is all about

Ebola hemorrhagic fever is a severe, often fatal disease in humans and nonhuman primates, such as monkeys, gorillas and chimpanzees. Four countries have reported infections: Guinea, Liberia, Nigeria, and Sierra Leone. Officials report that 1,350 people have died as of August 21, 2014, and 2,473 people have been infected since March 2014. For an update on the outbreak, see this CDC link: http://www.cdc.gov/vhf/ebola/outbreaks/guinea/index.html

The HSPH/SSRS poll found people with less education are more likely to be concerned about an outbreak in the U.S. (less than high school 50% vs. some college 36% vs. college grad or more 24%). People with less education are also more concerned they or their family will get sick with Ebola (less than high school 37% vs. some college 22% vs. college grad or more 14%). Perhaps related, those with less education are also less likely to be following the news about the Ebola outbreak in West Africa closely (total 63%; less than high school 57% and some college 62% vs. college grad or more 73%).

Two-thirds of people (68%) surveyed believe Ebola spreads “easily” (“very easily” or “somewhat easily”) from those who are sick with it. This perception may contrast with CDC, World Health Organization (WHO), and other health experts who note that Ebola is not an airborne illness, and is transmitted through direct contact with infected bodily fluids, infected objects, or infected animals. For more on how Ebola is transmitted: http://www.cdc.gov/vhf/ebola/transmission/index.html

A third of those polled (33%) believe there is “an effective medicine to treat people who have gotten sick with Ebola.” According to the CDC and WHO, there is no proven anti-viral medicine; however, treating symptoms – such as maintaining fluids, oxygen levels, and blood pressure – can increase the odds of survival. To date, the media have reported that two people infected with Ebola overseas have been treated in the U.S.

“Many people are concerned about a large scale outbreak of Ebola occurring in the U.S.,” said Gillian SteelFisher, PhD, deputy director of the Harvard Opinion Research Program and research scientist in the HSPH Department of Health Policy and Management. “As they report on events related to Ebola, the media and public health officials need to better inform Americans of Ebola and how it is spread.”

For more information about the disease, see the CDC’s Questions and Answers about Ebola: http://www.cdc.gov/vhf/ebola/outbreaks/guinea/qa.html

WHO information: http://www.who.int/csr/disease/ebola/en/

Harvard School of Public Health brings together dedicated experts from many disciplines to educate new generations of global health leaders and produce powerful ideas that improve the lives and health of people everywhere. As a community of leading scientists, educators, and students, we work together to take innovative ideas from the laboratory to people’s lives—not only making scientific breakthroughs, but also working to change individual behaviors, public policies, and health care practices. Each year, more than 400 faculty members at HSPH teach 1,000-plus full-time students from around the world and train thousands more through online and executive education courses. Founded in 1913 as the Harvard-MIT School of Health Officers, the School is recognized as America’s oldest professional training program in public health.

SSRS is a full-service survey and market research firm managed by a core of dedicated professionals with advanced degrees in the social sciences. SSRS designs and implements solutions to complex strategic, tactical, public opinion, and policy issues in the U.S. and in more than 40 countries worldwide. SSRS partners with clients interested in conducting high-quality research. SSRS is renowned for its sophisticated sample designs and its experience with all modes of data collection, including those involving multimodal formats. SSRS provides the complete set of analytical, administrative and management capabilities needed for successful project execution.

 

IBUPROFEN POSING POTENTIAL THREAT TO FISH

From FMS Global News Desk of Jeanne Hambleton Posted on August 22, 2014 – By Stone Hearth News – Source: University of York – Environment International

Research led by the University of York, UK, suggests that many rivers contain levels of ibuprofen that could be adversely affecting fish health.

Using a new modelling approach, the researchers estimated the levels of 12 pharmaceutical compounds in rivers across the UK. They found that while most of the compounds were likely to cause only a low risk to aquatic life, ibuprofen might be having an adverse effect in nearly 50 per cent of the stretches of river studied.

The results of the study, which involved York’s Environment Department, the Centre for Ecology and Hydrology, F. Hoffmann-La Roche Ltd (Switzerland) and the Food and Environment Research Agency (Fera), are reported in the journal Environment International.

In what is believed to be the first study to establish the level of risk posed by ibuprofen at the country scale, the researchers examined 3,112 stretches of river which together receive inputs from 21 million people.

Professor Alistair Boxall, from the University of York’s Environment Department, said: “The results of our research show that we should be paying much closer attention to the environmental impacts of drugs such as ibuprofen which are freely available in supermarkets, chemists and elsewhere.”

The researchers have developed a combined monitoring and modelling approach that takes into account factors such as the non-use of prescribed drugs by patients, and addresses differences in metabolism in individuals who are using a drug. The new approach also accounts for removal processes in the local sewerage network and for differences in the effectiveness of different wastewater treatment technologies. In this way, it provides more accurate estimates of the concentrations of compounds entering rivers than previous modelling approaches.
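To make the shape of this kind of mass-balance modelling concrete, here is a minimal per-capita predicted-concentration sketch. This is not the study’s actual model, and every parameter name and number below is a hypothetical illustration, not a figure from the paper:

```python
def predicted_concentration_ug_per_l(
    mass_used_mg_per_person_per_day,  # drug consumed per capita per day
    fraction_excreted,                # fraction reaching the sewer unmetabolised
    fraction_removed_in_wwtp,         # removal in wastewater treatment
    wastewater_l_per_person_per_day,  # per-capita wastewater volume
    dilution_factor,                  # river flow relative to effluent flow
):
    """Simplified mass-balance estimate of a pharmaceutical's
    concentration in a river receiving treated effluent."""
    mass_to_river_mg = (mass_used_mg_per_person_per_day
                        * fraction_excreted
                        * (1.0 - fraction_removed_in_wwtp))
    effluent_conc_mg_per_l = mass_to_river_mg / wastewater_l_per_person_per_day
    river_conc_mg_per_l = effluent_conc_mg_per_l / dilution_factor
    return river_conc_mg_per_l * 1000.0  # convert mg/L to ug/L

# Illustrative inputs only (not taken from the study):
conc = predicted_concentration_ug_per_l(
    mass_used_mg_per_person_per_day=30.0,
    fraction_excreted=0.15,
    fraction_removed_in_wwtp=0.90,
    wastewater_l_per_person_per_day=200.0,
    dilution_factor=10.0,
)
print(f"{conc:.3f} ug/L")  # prints 0.225 ug/L
```

The study’s contribution is in refining exactly these terms: non-use of prescribed drugs and individual metabolism feed into the excreted fraction, and sewer-network losses and treatment-technology differences feed into the removal fraction.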

Richard Williams, from the Centre for Ecology & Hydrology (CEH) – a public-sector research centre which is part of the Natural Environment Research Council (NERC), said: “When we compared the results of our modelling with available monitoring data for pharmaceuticals in the UK, we were delighted at the close agreement between the modelled and measured data.”

Professor Boxall added: “While our study focused on pharmaceuticals, the approach we have developed could also be valuable in assessing the risks of other ‘down the drain’ chemicals and could help inform our understanding of the important dissipation processes for pharmaceuticals in the pathway from the patient to the environment.”

 

CITIES ARE MAKING SPIDERS GROW BIGGER AND MULTIPLY FASTER

From FMS Global News Desk of Jeanne Hambleton Posted on August 20, 2014 – Stone Hearth News – By Nick Stockton, WIRED

Something about city life appears to be causing spiders to grow larger than their rural counterparts. And if that is not enough to give you nightmares, these bigger urban spiders are also multiplying faster.

A new study published today in PLOS One shows that golden orb weaver spiders living near heavily urbanized areas in Sydney, Australia tend to be bigger, better fed, and have more babies than those living in places less touched by human hands.

The study’s authors collected 222 of the creatures from parks and bushland throughout Sydney, and correlated their sizes to features of the built and natural landscape.

They dissected each specimen back at the lab, and determined its size, health, and fecundity by measuring four attributes: the length of the spider’s longest leg segment, the ratio of that leg segment to overall body weight, the amount of fat on the spider, and its ovary size.

To measure urbanization, the authors looked primarily at ground cover throughout the city, at several scales, where they collected each spider: Are surfaces mostly paved? Is there a lack of natural vegetation? Lawns as opposed to leaf litter?
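The shape of that analysis, regressing a body-size measure on a landscape covariate, can be sketched with synthetic data. Nothing below comes from the paper; apart from the sample size of 222, every value is invented for illustration:

```python
import random

random.seed(0)

# Synthetic stand-ins for the study's variables: the fraction of hard
# surface around each collection site, and a leg-length measure
# constructed (by design) to increase with that fraction.
n = 222
hard_surface = [random.random() for _ in range(n)]
leg_length_mm = [8.0 + 4.0 * x + random.gauss(0.0, 1.0) for x in hard_surface]

# Ordinary least-squares fit: leg_length ~ a + b * hard_surface
mean_x = sum(hard_surface) / n
mean_y = sum(leg_length_mm) / n
b = (sum((x - mean_x) * (y - mean_y)
         for x, y in zip(hard_surface, leg_length_mm))
     / sum((x - mean_x) ** 2 for x in hard_surface))
a = mean_y - b * mean_x
print(f"fitted slope: {b:.2f} mm of leg length per unit of hard-surface cover")
```

A positive fitted slope is the synthetic analogue of the paper’s finding; the real analysis examined several landscape variables at multiple spatial scales rather than a single covariate.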

“The landscape characteristics most associated with larger size of spiders were hard surfaces (concrete, roads etc) and lack of vegetation,” said Elizabeth Lowe, a Ph.D student studying arachnids at the University of Sydney.

Humped golden orb weavers are a common arachnid along Australia’s east coast. They get their name from their large, bulging abdomen, and the gold silk they use to spin their spherical webs. They typically spend their lives in one place, constantly fixing the same web (which can be a meter in diameter). Each web is dominated by a single female, though 4 or 5 much smaller males usually hang around the edges of the web, waiting for an opportunity to mate (only occasionally does the female eat them afterwards).

Paved surfaces and lack of vegetation mean cities are typically warmer than the surrounding countryside. Orb weavers are adapted to warm weather, and tend to grow bigger in hotter temperatures. The correlation between size and urban-ness manifested at every scale. Citywide, larger spiders were found closer to the central business district. And, their immediate surroundings were more likely to be heavily paved and less shady.

More food also leads to bigger spiders, and the scientists believe that human activity attracts a smorgasbord of orb weavers’ favorite prey. Although the study was not designed to determine exactly how the spiders were getting bigger, the researchers speculate that things like street lights, garbage, and fragmented clumps of plant life might attract insects. They also believe that the heat island effect might let urban spiders mate earlier in the year, and might even give them time to hatch multiple broods.

The orb weavers could also be keeping more of what they catch. Because they are such prolific hunters, orb weavers’ webs are usually home to several other species of spiders that steal food. The researchers found that these little kleptos were less common in webs surrounded by pavement and little vegetation.

Lowe says quite a few species of spider are successful in urban areas, and she would not be surprised if some of these other species were also getting bigger. Despite how terrifying this sounds, she assures me that this is actually a good thing.

“They control fly and pest species populations and are food for birds,” she said.

(I do not like spiders and the idea of them getting bigger does not make me happy. Glad I do not live in Sydney with those beasties.) Back tomorrow, Jeanne