Thursday, September 22, 2011
This article from the open-access journal Atmospheric Chemistry and Physics was rejected for publication by the journal in 2010. Because of the journal's open-access model, the originally submitted article, along with the reviewer comments, can still be viewed despite the rejection. In the article, the authors used aircraft to measure the amount of light reflected by the wakes of shipping barges in the northern Pacific Ocean. They then used their results to estimate the increase in reflected sunlight from increases in shipping across the Pacific, since this would have a (very) small cooling effect on the climate. Unfortunately, while this was a somewhat clever idea, the reviewers thought there were far too many uncertainties in both the measurements and the calculations, and that the reported number was an overestimate of an already small effect.
While this article was rejected, the open-access nature of the journal lets it illustrate how the scientific process of peer review works. Typically, when science and the scientific method are taught in schools, the curriculum focuses on the experimental side of science. While this is certainly very important, science also has important social ways of processing new information. When a group of scientists performs a series of experiments that reach new and interesting conclusions, they will try to get their results published in a scientific journal (and present the work at conferences, universities, research centers, etc.). Upon submission to a journal, two (or more) other scientists from the same research field read the paper and either recommend that the results be published, usually along with suggested changes, or that the paper be rejected outright. If changes are recommended, the authors edit the paper based on the reviewers' suggestions, and it is resubmitted and published in a later issue of the journal. If the paper is rejected, the authors can appeal the decision, but such appeals are not usually granted. Usually a reader sees only the final product of this process in a scientific journal, but in the case of an open-access journal such as Atmospheric Chemistry and Physics the entire process is visible to the reader (and the journal is free online). Even articles that are eventually rejected, such as this one, are posted on the journal's website immediately after a quick initial screening to ensure the article is relevant.
Peer review is an important process for ensuring quality, as scientists, being human after all, are prone to misjudgment, bias, and error. While science is constantly changing over time as new ideas or methods emerge, it's important to make sure those ideas are plausible and the methods actually work! While there are many cases of new ideas overturning the scientific consensus (I'm looking at you, quantum mechanics), this usually happens only after the new ideas have been thoroughly scrutinized and verified through peer review, discussions at scientific conferences, and independent testing of observations or experiments. The process is not without its faults, of course! Occasionally a scientist will unethically use peer review to block the publication of results that contradict their own. (Yes, it's kind of silly. There is the joke: "The debates in science are so fierce because the stakes are so low." I've seen people spend 15 minutes at a conference arguing about the proper term for something.) In those cases, an open-access journal such as Atmospheric Chemistry and Physics provides the kind of transparency that makes such abuses apparent and helps improve the scientific process. Hopefully, with the growing use of information technology (I know I rarely ever read the actual printed journal), more publishers will decide to switch to an open-access model, not only for transparency but also to improve access to scientific journals for the general public.
Wednesday, September 14, 2011
Variations in Volcanic Dust High Up
Last month, in Science magazine, Susan Solomon,* an atmospheric chemist at the National Oceanic and Atmospheric Administration (NOAA), and her colleagues presented satellite measurements of sulfate aerosols in volcanic dust in the stratosphere, the upper region of the atmosphere that contains the ozone layer. These measurements showed that levels of volcanic dust in the stratosphere actually vary significantly even in the absence of major volcanic eruptions such as the Mount Pinatubo eruption in 1991.
Aerosols such as this volcanic dust scatter and reflect sunlight, causing a net cooling of the climate. Volcanic dust in the stratosphere increased enough (mostly from natural volcanic events) from 2000 to 2010 to decrease the heat trapped by the atmosphere by 0.1 W/m². For comparison, the increase in CO2 during the same period increased the heat trapped by the atmosphere by 0.28 W/m², so this volcanic dust canceled out some of the warming that would have occurred from CO2 alone. While this is certainly a good thing, since it has slowed the pace of global warming, it is unclear whether the increase in volcanic dust will continue, given the unpredictable nature of volcanic activity. For example, if by 2020 volcanic dust were to return to the levels seen in the 1960s, the cooling effect would disappear, causing average global temperature to increase by about 0.06 °C in addition to any changes from increased greenhouse gases. Despite the inherent unpredictability of volcanic activity, understanding that volcanic dust has variable effects on the climate over time can help better constrain the possible range of future climate changes.
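As a rough back-of-the-envelope check on these numbers (the climate sensitivity parameter λ below is an assumed round value chosen to be consistent with the quoted 0.06 °C figure, not a number taken from the paper):

```latex
% Net change in forcing over 2000-2010 from the numbers above:
\Delta F_{\mathrm{net}} = \Delta F_{\mathrm{CO_2}} + \Delta F_{\mathrm{volc}}
                        = 0.28 - 0.10 = 0.18\ \mathrm{W/m^2}

% If the volcanic cooling were to disappear, the extra warming would be roughly
\Delta T \approx \lambda \, \Delta F
         \approx 0.6\ \mathrm{^{\circ}C\,(W/m^2)^{-1}} \times 0.10\ \mathrm{W/m^2}
         \approx 0.06\ \mathrm{^{\circ}C}
```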
Implications
In a broader sense, aerosols (i.e., dust and liquid droplets suspended in air) are an important part of the climate change picture that is often under-discussed compared to poor, infamous CO2. Human-caused changes in aerosols from land use, transportation (autos, trains, etc.), and industry (smokestacks) since the Industrial Revolution have increased the amount of light reflected by the Earth. Much like the volcanic dust in the stratosphere, this has reduced the heat trapped by the atmosphere, partially offsetting the increased heat trapped by greenhouse gases. In the period from 1940 to 1980, these two effects might have roughly canceled each other out for a time, leading some to speculate about the possibility of global cooling in the 1970s. Of course, that hypothesis has not been borne out by the data since then. Aerosols also harm humans directly through inhalation and indirectly through smog and acid rain, so government regulation of these pollutants has reduced their concentration in the atmosphere. While this is obviously a good thing for human society, it has had the unfortunate side effect of reducing their cooling influence on the climate!
Studies like this one are especially important for improving climate predictions, as aerosols are the least well-understood part of climate change, especially because of their indirect effects on cloud formation. (See this chart from the IPCC and notice the very large black bars on aerosols compared to other factors.) They are certainly understood well enough to predict that the Earth is warming and will continue to warm without any changes in human activity, but predicting how much the climate might change in the future is limited largely by the uncertainty in the effects of aerosols. Better understanding the impact of aerosols, both human-made and natural, can reduce the uncertainty in future climate predictions. This, in turn, can provide better estimates of the costs and benefits of any potential emissions reductions or even geoengineering.
*Susan Solomon has won the US National Medal of Science for her work on understanding the cause of ozone depletion, and was one of the co-chairs of the physical science report for the Intergovernmental Panel on Climate Change (IPCC).
Thursday, September 8, 2011
The Decade-long Mystery of Atmospheric Methane
In a recent paper in Atmospheric Chemistry and Physics, scientists from the Institute for Marine and Atmospheric Research Utrecht compared the results from an atmospheric model of methane to observations of methane and its carbon isotope ratios from various atmospheric monitoring stations around the world. (Full disclosure: I worked at IMAU for three months on a fellowship with the PI, on an unrelated research project.) From 1998 to 2006, the amount of methane (CH4) in the atmosphere stopped growing even though emissions from human civilization increased during this time. Based on a model that takes into account the flow of methane into the atmosphere from natural and human sources and the flow of methane out of the atmosphere through natural sinks, methane should have increased during this period.
Why would anyone care about methane? Because methane absorbs infrared radiation (heat) strongly, it is the second most important greenhouse gas in the Earth's atmosphere despite its relatively short atmospheric lifetime (~9 years). Its concentration has increased from about 700 parts per billion in 1750 to roughly 1800 parts per billion today. This increase in methane has led to more heat being trapped by the atmosphere, partially contributing to climate change.
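A one-line steady-state argument helps frame the puzzle (a simplification that treats the atmosphere as a single well-mixed box): if S is the total source and τ the lifetime, the atmospheric burden C evolves as

```latex
\frac{dC}{dt} = S - \frac{C}{\tau}, \qquad C_{\mathrm{steady\ state}} = S\,\tau
```

so a flat methane curve means the sources and the sink came into balance, which can happen either because S fell or because the loss rate 1/τ rose.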
Because the sources and sinks of methane are not completely understood, models that try to take all of them into account cannot reproduce the "leveling off" of methane observed in the atmosphere from 1998 to 2006. The authors use a mathematical model similar to the one depicted below that includes estimated values for the natural sources of methane (mostly bacteria in wetlands), the human sources (fossil fuel mining and extraction, rice paddies, waste and water treatment, biomass (i.e., wood) burning, and livestock), and the natural sinks (mostly hydroxyl free radicals (OH) produced when water vapor is exposed to sunlight). They then perform a "sensitivity analysis" in which each major source or sink is increased or decreased within the model to show how such changes affect the methane concentration the model calculates (a toy version of this analysis is sketched in code after the diagram below). Either increasing the sink through higher hydroxyl radical concentrations or decreasing the wetland source makes the model calculate a trend similar to the atmospheric observations of methane. How can we tell which of these may have caused the change in trend, though?
This diagram depicts the flow of methane from sources into the atmosphere as well as the sinks that consume methane.
A. Permafrost, Glaciers, and Ice Cores B. Wetlands C. Forest Fires D. Rice Paddies E. Animals F. Plants G. Landfills H. Waste Water Treatment Facilities I. Hydroxyl Radical J. Chlorine Radical
Image by Olivia Shoup, used under the Creative Commons Attribution Share-Alike 3.0 license
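Here is a minimal sketch of this kind of box-model sensitivity analysis in Python. The source, lifetime, and conversion numbers are illustrative round values of my own choosing, not the paper's, and the real study uses a full atmospheric chemistry-transport model rather than a single box:

```python
# Minimal one-box model of atmospheric methane: dC/dt = S - C/tau.
# A sketch for illustration only, not the authors' actual model.
import numpy as np

BURDEN_PER_PPB = 2.75  # Tg of CH4 per ppb of global mean mixing ratio (rough conversion)

def run_box_model(years, source_tg_per_yr, lifetime_yr, c0_ppb=1750.0):
    """Integrate dC/dt = S - C/tau with simple one-year Euler steps."""
    burden = c0_ppb * BURDEN_PER_PPB  # total burden in Tg
    trajectory = []
    for _ in years:
        burden += source_tg_per_yr - burden / lifetime_yr
        trajectory.append(burden / BURDEN_PER_PPB)  # convert back to ppb
    return np.array(trajectory)

years = np.arange(1998, 2007)

# Baseline run with assumed round numbers: ~550 Tg/yr source, ~9-year lifetime.
base = run_box_model(years, 550.0, 9.0)

# Sensitivity runs: trim the wetland source, or strengthen the OH sink
# (a stronger sink shows up as a shorter lifetime).
less_wetlands = run_box_model(years, 550.0 - 20.0, 9.0)
stronger_oh = run_box_model(years, 550.0, 9.0 * 0.96)

for label, run in [("baseline", base),
                   ("wetlands -20 Tg/yr", less_wetlands),
                   ("OH sink +4%", stronger_oh)]:
    print(f"{label:>20}: 1998 = {run[0]:.0f} ppb, 2006 = {run[-1]:.0f} ppb")
```

Even this toy version shows the degeneracy the authors face: a modest cut in wetland emissions and a slightly stronger OH sink flatten the curve in nearly the same way, which is exactly why the isotope data discussed next are needed.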
Observations and modeling of the carbon-13 to carbon-12 ratio in methane can provide additional constraints on the sources and sinks of methane. (For more about isotopes in atmospheric chemistry, see this part of a previous post on a related topic.) Since each source and sink has a distinct carbon-13 to carbon-12 signature, any changes to these sources and sinks in the model will also affect the modeled carbon-13 to carbon-12 ratio. Those results can then be compared to the atmospheric record of carbon-13 to carbon-12 in methane to see whether a change in the estimated value of a source or sink is justified. Using the isotope ratio as a guide: while a decrease in wetland methane emissions in the model can bring the modeled methane concentration close to the observations, it also produces a significant increase in the carbon-13 to carbon-12 ratio, which is not observed. In contrast, an increase in the hydroxyl radical sink brings the modeled methane close to the observations without changing the modeled isotope ratio. Such an increase in hydroxyl radical concentration has been independently proposed, but the hypothesis is hard to confirm since hydroxyl radicals are difficult to observe directly due to their very short lifetime (less than one second).
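For reference, isotope ratios like these are usually reported in standard delta notation (this definition is general, not specific to the paper), in units of per mil (‰):

```latex
\delta^{13}\mathrm{C} =
\left(
  \frac{\left({}^{13}\mathrm{C}/{}^{12}\mathrm{C}\right)_{\mathrm{sample}}}
       {\left({}^{13}\mathrm{C}/{}^{12}\mathrm{C}\right)_{\mathrm{standard}}}
  - 1
\right) \times 1000
```

The intuition, using rough literature values: wetland methane is strongly depleted in carbon-13 (δ13C near −60‰) relative to the atmospheric average (near −47‰), so removing some of that source enriches the modeled atmosphere in carbon-13, while the reaction with OH discriminates between the two isotopes only weakly, so strengthening that sink barely moves the ratio.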
Using both the atmospheric concentration and the carbon isotope ratio of methane, the scientists in this study were able to narrow down the potential causes of the slowdown in the growth of atmospheric methane from 1998 to 2006. Methane sources may have been reduced during this period, but that is not fully consistent with either the carbon isotope record or independent estimates of those sources. The primary methane sink, reaction with hydroxyl radicals (OH), may instead have increased during this period; this is consistent with the isotope evidence as well as other studies of hydroxyl radicals. A combination of these effects is also possible, but the combined effect would need to be consistent with the isotope evidence as well. This kind of modeling study demonstrates how measuring the isotope ratios of an atmospheric gas can be useful for understanding its chemical and biological activity in the atmosphere.
Implications
Using a model such as this along with atmospheric observations, scientists can develop a better understanding of how methane and other atmospheric gases move in and out of the atmosphere. This is crucial for understanding the potential range of impacts of any public policy toward environmental pollutants, whether related to smog or to climate change. Because of methane's short lifetime and its significant impact not only on climate but also as a precursor to photochemical smog (ozone pollution), it has been suggested that reductions in methane (and other pollutants) that are technologically feasible now could "buy time" on climate change while delivering significant public health benefits. While some degree of uncertainty exists about any form of public policy (who knows if an asteroid might hit the Earth tomorrow, or nuclear war might break out?), developing as complete an understanding as possible of the physical basis behind any proposed policy is key to estimating the costs and benefits of any decision.
Wednesday, September 7, 2011
Extreme cold and global warming
"Hey! There was some crazy blizzards in the US this year and the last. How can you scientists still say that the Earth is warming when it was unusually cold in the winter of 2010 and 2011?" This is a common kind of question from laymen in regards to the feasibility of climate change. Luckily, some scientists actually looked at this in a recent article in Geophysical Research Letters. (gated beyond the abstract, unfortunately)
These scientists divided the Northern Hemisphere into eight regions: USA, Canada, Alaska/Yukon, Siberia, Far East, Central Asia, Northern Europe/Russia, and Mediterranean/Middle East. Then, using statistical techniques, they examined the temperature record for extreme cold and warm events in the winters of 2010 and 2011 and compared it to the historical record for 1950-2009. Indeed, the USA, Northern Europe, and Siberia did experience unusually cold winters in 2010 and 2011. However, the number and extent of unusually cold days in these three regions was offset by an even greater number of unusually hot days in every other region of the Northern Hemisphere! Not only were there a larger number of extremely hot days, but these hot spells also lasted much longer and were more extreme than any of the extreme cold spells in the United States, Europe, or Siberia.
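The general approach can be sketched as follows (a toy illustration with synthetic data; the paper's actual statistical method may differ, and the 10th/90th-percentile thresholds here are my assumption):

```python
# Count extreme cold/warm winter days against a 1950-2009 baseline
# climatology. A sketch of the general approach, not the paper's method.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for real station data: daily winter temperature
# anomalies (degrees C) for one region.
baseline = rng.normal(loc=0.0, scale=4.0, size=(60, 90))  # 60 winters x 90 days
winter_2010 = rng.normal(loc=-1.0, scale=4.0, size=90)    # one cold winter

# Extreme-day thresholds taken from the baseline distribution (assumed choice).
cold_thresh = np.percentile(baseline, 10)
warm_thresh = np.percentile(baseline, 90)

n_cold = int(np.sum(winter_2010 < cold_thresh))
n_warm = int(np.sum(winter_2010 > warm_thresh))

# In a stationary climate, ~9 of 90 days would fall past each threshold
# by construction; a clear excess marks an unusual winter for that region.
print(f"extreme cold days: {n_cold}, extreme warm days: {n_warm}")
```

Doing this for all eight regions lets the cold excesses in some regions be weighed against the warm excesses everywhere else.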
What, then, accounted for the cold extremes experienced in parts of the Northern Hemisphere? Similar to the natural El Niño/La Niña phenomenon in the tropical Pacific Ocean, another natural phenomenon called the North Atlantic Oscillation causes variability in regional climates across the Northern Hemisphere. During the winters of 2010 and 2011, this oscillation exhibited an unusual and persistent pattern that brought cold air from the Arctic down to the United States and Europe while warm air traveled up to the Arctic. As a result, while much of North America and Europe was unusually cold, the Arctic was in fact much warmer than usual during the winter. Even now, the total area of Arctic sea ice is near record lows this year.
In contrast, the extremely hot weather in the rest of the Northern Hemisphere cannot be explained by natural variations in climate and requires an outside explanation. This paper demonstrates that while there may be locally extreme cold weather events that run counter to the overall trend of anthropogenic climate change, the overall trend for the planet as a whole is warming.