
All in it together, willy-nilly

by Roger Sweeny

From a newsletter to parishioners of St. Francis-in-the-Wood Anglican Church, West Vancouver, B.C., June 2013


During Pentecost and now, as we ponder the Trinity, the message has been about opening our hearts to receive the Spirit of Truth, the unseen one who will walk with us, live in us, inspire us to become all we can be, to act not just for self but for others too, and to be ever respectful of all living things including Mother Earth.

A snatch of monologue from a 60-year-old radio show, “Dragnet”, comes to mind. That was the programme where they told us “Ladies and gentlemen: the story you are about to hear is true. Only the names have been changed to protect the innocent.” And then: “Knock, knock. Yes? My name is Friday. I’m a cop. Just the facts, Ma’am, just the facts.”

My Canadian College Dictionary defines a fact as ‘something that actually exists; a reality; a truth’. The following are my thoughts on a looming reality we wish we hadn’t heard about. It can no longer be put behind us, and the more we try to ignore the facts, the more difficult it becomes to confront them. Call me an alarmist old grump if you like. Yes, guilty. Yet something compels me to speak out – to give a ‘heads up’ about what’s coming at us – because if I don’t, I shall never forgive myself for my inaction.

The fact is – I’m deeply concerned that Mum’s not well. I mean our Earth Mum – Gaia. Her lungs are congested, temperature elevated, her breathing laboured, she sweats a lot and is becoming very moody. It’s a case of Elder abuse.

To underscore my concerns, here in point form are a handful of facts that I have gleaned from quite a few well respected sources. Taken together they paint a sobering picture of what Mother Earth almost certainly has in store for our successors. They will not thank us.

ATMOSPHERIC CO2:

Analysis of core samples drilled deep into polar ice caps shows that for some 600,000 years the concentration of the greenhouse gas CO2 in the atmosphere ranged between 180 and 280 parts per million (ppm), right up until the industrial age. Since then, and particularly since 1950, it has risen dramatically. It reached 400 ppm on 9 May 2013 at the Mauna Loa monitoring station in Hawaii, whose readings are widely used as the benchmark for global mean CO2 concentrations. A growing number of environmental scientists hold that a CO2 concentration greater than 350 ppm is incompatible with preserving the climate in which human civilization developed, and that we must get it back below that level as soon as possible.


GLOBAL TEMPERATURE:

The consensus among climate scientists at the 2009 United Nations Climate Change Conference was that, in order to avoid a chain reaction of climate-related natural disasters, average future global (surface and ocean) temperature increases should be held to less than 2 degrees C above pre-industrial levels. In fact, that is about all they agreed on. The average temperature has already risen by 0.8 degrees over pre-industrial levels, and is projected to rise close to another full degree due to heat-trapping greenhouse gases already in the atmosphere. A report issued by the World Bank last year confirmed the world is on track for a 4 degree C temperature increase by 2100. Even a 2 degree rise is viewed by world-renowned former NASA scientist James Hansen as a recipe for long-term disaster.

Some argue there has been a pause in surface warming since 1998. Not so. NASA confirms that the observed temperature data, corrected for periods of volcanic activity (which cools), occurrences of El Niño (which warms) and La Niña (which cools), variations in solar activity, and natural weather variations, clearly show that human-induced global warming has continued in line with projections over the past 16 years.

ARCTIC ICE MELT:

The volume of Arctic sea ice in the summer of 1979 was measured at about 17,000 cubic kilometres. Last summer it was about 3,000 cubic kilometres. At this rate of melting, the Arctic could be ice-free by summer 2015. The Greenland ice cap is currently losing volume at the rate of 100 cubic kilometres per year. The West Antarctic ice cap, which contains 2.2 million cubic kilometres of ice, is warming three times faster than the rest of the world.
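
The “ice-free by summer 2015” projection follows from extrapolating these measurements. A minimal sketch (assuming the quoted volumes belong to the summers of 1979 and 2012) shows that even a conservative straight-line trend reaches zero within a decade; the steeper, accelerating fits favoured by some analysts are what give dates as early as 2015:

```python
# Linear extrapolation of Arctic summer sea-ice volume from the two
# data points quoted above (assumed years: 1979 and 2012).
y0, v0 = 1979, 17_000   # cubic kilometres
y1, v1 = 2012, 3_000

rate = (v0 - v1) / (y1 - y0)        # average loss, km^3 per year
ice_free_year = y1 + v1 / rate      # year the straight line hits zero

print(f"Average loss: {rate:.0f} km^3/yr")
print(f"Linear trend reaches zero around {ice_free_year:.0f}")
```

A straight line through two points is of course the crudest possible model; the point is only that the quoted numbers, taken at face value, imply an essentially ice-free Arctic summer within years rather than centuries.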

Scientists calculate a 2 degree increase would melt enough ice to raise global ocean levels by between 7.5 and 9 metres.


U.S. environmentalist and author Bill McKibben lays out in his new film “Do the Math” what must be done to prevent a runaway environmental calamity.

  • To have an 80% chance of keeping the rise in global temperature below 2 degrees the world economy can release only 565 gigatons of CO2 into the atmosphere by 2050.
  • Known global reserves of coal, oil and natural gas contain 2795 gigatons of CO2.
  • At current rates of fuel production and growth, the 565 gigatons allowance could be used up in just 16 years (i.e. by 2028).

Simply put, fossil fuel reserves are five times as great as the world can afford to burn. To avoid calamity we must leave 80% of it in the ground.
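
McKibben’s arithmetic can be checked in a few lines. The annual-emissions figure below (35 gigatons of CO2 per year) is an assumption back-calculated from the “16 years” statement above, not a number given in the film:

```python
# Back-of-the-envelope check of the "Do the Math" figures quoted above.
budget_gt = 565.0           # CO2 budget for a 2-degree limit, gigatons
reserves_gt = 2795.0        # CO2 embodied in known fossil fuel reserves, gigatons
annual_emissions_gt = 35.0  # assumed current global emissions, Gt CO2/yr

ratio = reserves_gt / budget_gt          # how many times over budget
burnable_fraction = budget_gt / reserves_gt
years_left = budget_gt / annual_emissions_gt

print(f"Reserves exceed the budget {ratio:.1f}x")
print(f"Only {burnable_fraction:.0%} of reserves can be burned "
      f"(leave {1 - burnable_fraction:.0%} in the ground)")
print(f"Budget exhausted in about {years_left:.0f} years at current rates")
```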


Those are just a few facts, but the implications for Canada are huge. So is the incentive to push for a non-fossil-fuel economy without delay. As Christine Lagarde of the International Monetary Fund expressed it: “If we don’t act now, future generations will be toasted, roasted, grilled and fried.” So what are we waiting for? We are all in this together.

May this be the start of a wider discussion at St. Francis.

Connections: a tale of two Smiths

by Stan Hirst

It’s a funny old world. The simplest things can turn out to be complicated once you start examining and  questioning. Which is often why most people avoid talking about issues and ideas and instead concentrate on simpler things. Like themselves.

Just such a question arose earlier this past year, prior to the Canadian federal elections. The Suzuki Elders drafted a memo to federal politicians urging them to consider the long-term effects of reckless resource exploitation on our grandchildren’s future. We were thinking specifically of climate change and actions which lead up to it, e.g. massive additions of carbon to the global atmosphere from Canadian sources such as the tar-sands. But, someone observed, most elected politicians also have children, and grandchildren too in some cases. Why aren’t they equally concerned about these issues and the future?

I actually tried to find out the answer to that. I wrote a letter to my MP (a Progressive Conservative) who has two children in high school and asked him that very question. The response was somewhat underwhelming. He thanked me for my continued support and urged me to contribute to the party coffers.

So let me try an analytical approach. I know two individuals who typify very different environmental  attitudes. I am going to examine their stories and see if I can detect any significant contributory factors.

Denzel Smith is a 36-year-old graphics designer, married with two small children, who owns a heavily-mortgaged house in Dunbar, Vancouver. He also owns a 10-year-old Toyota, two mountain bikes and a 52-inch television set. He has hiked the West Coast Trail a dozen times, and in summer hauls his wife and kids around the province on camping trips. He is a member of one racquetball club and three organizations that promote environmental conservation and green living. Denzel has attended numerous protest meetings and demonstrations on hot-button issues like the tar sands, oil pipelines and tanker traffic along the B.C. coast. He identifies with the underlying driving forces behind the current Occupy Movement, but considers its implementation hopelessly misguided and ineffectual.

Just 1350km to the east, in Rosedale, Calgary, lives Justin Smith, aged 33. He is a part-owner of an electronics supply store. He too is married, and has two small children. He has a mountain bike and lots of other toys as well, including a Toyota FJ Cruiser, a Kawasaki Ninja 1000 which his wife detests, and a spanking new powerboat which spends eight months of the year behind his garage swathed in a blue tarpaulin. Justin is a member of a winter health club. In summer he rides, boats and jogs, sometimes with his family, sometimes alone. Justin supports oil and gas development in his home province, including the  pipelines being proposed to carry tar-sands oil to the U.S. and to Asia via terminals on the B.C. coast. Although he doesn’t do business with the oil industry, he is disdainful of west coast environmental groups who oppose energy developments in Alberta, referring to them as wackos, parasites and socialistic job destroyers.

These two Smiths typify two different attitudes to the environment.   Justin regards the natural world as an opportunity to test his mettle – a muddy track to be conquered by four-wheel drive, a lake to be crossed at full throttle, a prairie highway to be covered at the fastest speed possible on two wheels. He sees resource development and extraction in any form as economically imperative, necessary for progress and something which should logically be entrusted to private enterprise. He maintains that the critical bottom line will always point the way to a safe and appropriate scope of development.

Meanwhile, back in Lotus Land, Denzel thinks of his environment as a fabric, something in which he can immerse himself. He uses his bike as an exploration device. He knows every nook and cranny of the Endowment Lands that he rides through. He can identify a few hundred bird species and just about every common tree and native plant he encounters on his hikes. He is content to spend hours sitting on a rock next to a creek, staring at everything or nothing in particular while the kids play in the rock pools. He is an urban dweller and a typical user of the materials and resources that a modern lifestyle requires. He has no strong feelings about most developments; he just objects strongly to single-focus massive exploitation with huge impacts and huge implications for other users and for long-term sustainability.

One thing I forgot to mention about these two Smiths. They are brothers. Both born, raised and schooled in Prince Albert, Saskatchewan, where their parents still live. Both brothers attended the University of Regina, from where Denzel made his way across the Rockies to B.C., while Justin chose the shorter hop to Calgary. The Smith boys see each other once or twice a year, usually at Thanksgiving and usually at their parents’ home. Still one big happy family, although things can get heated if someone steals the last piece of pumpkin pie or mentions things like tar-sands or fracking.  So the Smiths are brothers and share the same parents, same upbringing, same schools, same social backgrounds, but they differ totally in their environmental perceptions.

When you search textbooks and web pages to find a basic reason or set of reasons why individuals differ in their fundamental attitude to the environment, you usually encounter the name of the late Lynn White, a professor of history. In a much quoted 1967 essay White famously targeted Christianity as the root cause of environmental degradation because of its core beliefs that humans are fundamentally distinct from the rest of nature, and that nature is present merely to serve human ends. He contrasted this with pagan animism in which all things are deemed to possess, or be associated with, life spirits and this leads to an associated level of moral constraint. White’s often-quoted thesis understandably has caused much ecclesiastical furore and a large number of rebuttals over the years. From my perspective I think the professor may have been a little too cloistered down at UCLA. A tour through Hindu India, Muslim Indonesia or communist China might have convinced him otherwise. In any event, the religious aspect doesn’t figure in my Smithian analysis – the last time either Smith boy ventured near a church was in 1998 when Grandma was laid to rest.

Amongst the published rebutters of White, the name of Lewis Moncrief is most often cited. His proposition is that environmental attitudes have their roots not in theology but in the kind of western culture that has developed over the past few centuries. Two key revolutionary changes laid the foundations for the evolution of modern society – (1) a trend towards more equitable distribution of power and wealth through evolving democratic political structures, and (2) dramatic increases in the production of goods and services through scientific and technological development. As a consequence of industrialization, people moved from the country into metropolitan centres, increased the demand for goods and services, and increased the density of the by-products of human consumption (e.g. pollution, habitat loss, etc.).

I can relate both Smiths to this theory, but at different levels. Denzel’s worldview exemplifies the first part, i.e. more equitable distribution of power and wealth through evolving democratic political structures, although Denzel himself would argue that the trend now seems to be the other way. Justin reflects the second part – increasing the production of goods and services through scientific and technological development.

For Justin, as with so many people today, the end point is what counts. He is focused on the outputs of the industrial process – the cars, the bikes, the cell phones, the toys, the wine, the food. The consumerist credo of the 21st century tells him that’s just great and urges him to buy a few more goodies. Or sell a few more from his business. The processes by which all these products are created and the by-products of their creation such as wastes and industrial emissions don’t generally show up on his radar, and those that do are dismissed as ‘collateral damage’. He uses cool military jargon he picked up from playing Modern Warfare 2 on his Xbox. For Justin, big energy developments such as the tar sands and long range oil pipelines are triumphs of technological innovation, drivers of employment and the economy at all levels – local, provincial, national and even international for those lucky countries queuing up to buy Canada’s oil.

Denzel’s primary focus, on the other hand, is the hugely complex system which provides all these material benefits. He is all too aware of the vast array of interconnections in the real world. All the components that go into Justin’s cars, bikes and electronics, and all the ingredients needed to make the food and drink he consumes come from somewhere and are themselves part of complex production and extraction processes. The materials all have to go somewhere after they’ve been used, consumed, excreted, trashed or crashed into a tree. The modern industrial world is running out of absorption capacity for all this stuff – the garbage dumps are full, the oceans can’t take any more plastic and effluents, the atmosphere’s carbon load is starting to show up as bad news for the climate.  Denzel sees the signs and evidence all around him – he notices such things.

Denzel doesn’t dispute the value of resources or the jobs their extraction and transportation generate. He just doesn’t think the material benefits are worth the massive environmental and social costs. He thinks the whole concept of exporting tar sands oil is illogical anyway. While Canada spends billions on energy conservation and other programmes to try to keep carbon emissions as low as possible, it sells huge quantities of high-carbon oil to countries who burn it and dump more carbon back into the global atmosphere than Canada saves through its conservation programmes in the first place.

So when the inevitable question comes from across the Thanksgiving dinner table “What else you got in mind, dude? You got another way of converting lots of oil, which we have, into dollars and jobs which we want?” Denzel quietly helps himself to the last pour of wine in the bottle and replies “Leave the damned stuff where it is. It’s been lying there for a hundred million years. It will keep until we have better technology, of which you’re so fond, to make more intelligent use of it”.

While neuroscientists push the boundaries of their science and uncover the highly complicated relationships between neural pathways and behavioural patterns, geneticists and molecular biologists have developed equally spectacular technologies and methods for linking human behaviour to specific genes and genetic patterns. Mark my words – it’s only a matter of time before scientists uncover a Green Gene. Denzel has it and Justin doesn’t. Probably as simple as that.

The pros and cons of nuclear power versus coal

by Peggy Olive

In an ideal world, inexpensive, reliable, and safe sources of green energy would abound, and we could avoid using energy derived from either nuclear fission or coal burning. But we’re not there yet, and with climate change already affecting life on our planet, most of us believe that we need to move quickly to using clean energy sources to limit the rise in global temperature caused by greenhouse gas emissions.

In a talk on energy and climate entitled “Innovating to Zero!”, Microsoft’s Bill Gates gives a compelling argument for why we need nuclear power in an age of increasing levels of atmospheric CO2 [1]. Using a simple equation, he argues that total CO2 is the product of the number of people on the planet, the services delivered per person, the energy needed per service, and the amount of CO2 produced by each unit of energy. The first two are heading up and are unlikely to be stopped. The energy needed per service is decreasing, but not nearly enough. So that leaves the fourth factor: we must use energy that does not produce greenhouse gases, but we also need reliable energy – energy that’s available when the sun doesn’t shine and the wind doesn’t blow. Gates believes that nuclear power offers this promise and should be part of the mix, especially if improved (safer) technology is employed. Energy conservation should be a viable way to transition from dirty to clean energy, but increases in services delivered per person, along with a growing population, would quickly eat up conservation savings.
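
Gates’s “simple equation” is essentially the Kaya identity. A toy illustration (every number below is a made-up placeholder, not a figure from the talk) shows why driving the last factor toward zero is the only lever that can offset growth in the others:

```python
def co2_emissions(people, services_per_person, energy_per_service, co2_per_energy):
    """Kaya-style identity: total CO2 = P x S x E x C."""
    return people * services_per_person * energy_per_service * co2_per_energy

# Illustrative placeholder numbers (not from the talk):
baseline = co2_emissions(7e9, 1.0, 1.0, 1.0)

# Population and services each grow 30%, efficiency improves 20%:
grown = co2_emissions(7e9 * 1.3, 1.3, 0.8, 1.0)
print(f"Emissions change from growth alone: {grown / baseline:.2f}x")

# Only cutting CO2 per unit of energy toward zero cancels that growth:
clean = co2_emissions(7e9 * 1.3, 1.3, 0.8, 0.1)
print(f"With 90% cleaner energy: {clean / baseline:.2f}x")
```

Efficiency gains alone leave emissions rising; in this sketch a 90% cut in CO2 per unit of energy is what finally pulls the total well below the baseline.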

Like coal power, nuclear power is economical and does not fluctuate as much as wind or solar power. Unlike coal, it is considered clean in terms of the greenhouse gas emissions produced by the power plant itself, although uranium mining and processing are not without risks and environmental impact. But the public is deeply fearful of nuclear power, seeing it as an accident waiting to happen and, when it does, likely to adversely affect millions. Of equal concern, radioactive wastes from power plants accumulate and represent a potential target for terrorists willing to handle the material, although no such attack has yet occurred. Accidents at nuclear power plants can endanger the local population and environment, as we have recently appreciated with the Fukushima disaster, and once long-lived radioactive elements like cesium-137 and strontium-90 are released, they can contaminate the surrounding land for decades. A case in point: the 30 km exclusion zone surrounding Chernobyl remains empty of people twenty-five years after that disaster.

Fortunately, nuclear power plant “accidents” that spread deadly isotopes are rare, and the planet has suffered only two (avoidable) serious events that rank at the top of the International Nuclear Event Scale. As serious as these events were, there were few immediate deaths. At Chernobyl, the nuclear core of a poorly designed and operated reactor exploded and was cast outside the facility; thirty-two radiation workers died shortly after exposure. At Fukushima Daiichi, in spite of IAEA concerns, an older reactor was operating without adequate safety precautions to ensure reactor cooling in the event of an earthquake and tsunami. No one has died from acute radiation poisoning at Fukushima. Other than thyroid cancers (which are mitigated by potassium iodide tablets and are easily treated), increases in the incidence of other types of cancer have not been conclusively linked to radiation from the Chernobyl accident [2]. Cardis and colleagues [3] estimated that “of all the cancer cases expected to occur in Europe between 1986 and 2065, around 0.01% may be related to radiation from the Chernobyl accident”. Although a tiny percentage, this still represents a large number of excess cancer cases – more than 5,000 to date. By comparison, air pollution is estimated to end life prematurely for at least 17,000 US citizens per year [4] and up to 850,000 people globally [5]. A 2002 analysis by the International Energy Agency concluded that nuclear power ranked much lower than coal in terms of impact on biodiversity, accidents, and health risks, and ranked higher only on risk perception [6].

When compared with the risks of deriving energy from burning coal, the evidence that nuclear power is dangerous remains relatively weak. It is the perceived threat that is strong, and this perception recently caused Germany to close eight of its nuclear power plants and to begin phasing out the remaining nine by 2022. Although the intent is to generate energy cleanly, almost half of Germany’s electricity currently comes from coal, and it is difficult to believe that this share will not rise in the next few decades, thus contributing further to global warming.

Coal-derived power, in addition to being a major contributor to greenhouse gas emissions and acid rain, is hardly safe. Thousands of coal miners die in accidents each year, and the public is exposed to lung and heart damage from airborne pollutants. In 2000, the Ontario Medical Association declared air pollution “a public health crisis” [7] and named coal-fired power plants the single largest industrial contributor to this crisis, producing carbon dioxide, fine particulates, and carcinogenic heavy metals including mercury. In 2005, the Ontario Medical Association estimated that air pollution costs the province more than six hundred million dollars per year in health care costs, as well as causing the premature deaths of thousands of Ontarians each year [8]. Although of little health consequence, it is worth noting that burning coal produces fly ash that concentrates natural radioactive isotopes in excess of the levels released by nuclear power plants under normal operating conditions [9]. Disposal of toxic coal combustion wastes, orders of magnitude larger in volume than nuclear wastes, has also come under scrutiny [10].

We constantly accept risks in our lives without giving them much thought. A person who smokes twenty cigarettes a day over their lifetime shortens their life, on average, by six years. A person currently living 50 km from Fukushima who is exposed to an extra 3 mSv per year over their lifetime (the average background exposure is now greater than 3 mSv per year thanks to medical imaging) would shorten their life by 15 days [11]. What cannot be easily evaluated, and is therefore ignored in these risk assessments, is the psychological trauma to evacuees and to those who fear the consequences of minimal radiation exposure because they do not comprehend the risks. Wild animals, unaware of the continuing radioactive decay, are now thriving in the Chernobyl exclusion zone [12].
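
The two figures can be put on a common scale. The 70-year exposure period below is an assumption (the text says only “over their lifetime”); the 15-day figure is the one quoted from the NRC guide [11]:

```python
# Rough comparison of the two life-shortening figures quoted above.
# The 70-year exposure period is an assumed lifetime.
smoking_days_lost = 6 * 365            # ~6 years, pack-a-day smoker

extra_dose_per_year_msv = 3            # living 50 km from Fukushima
lifetime_years = 70                    # assumed
radiation_days_lost = 15               # figure quoted from NRC guide [11]

days_lost_per_msv = radiation_days_lost / (extra_dose_per_year_msv * lifetime_years)
print(f"Smoking: ~{smoking_days_lost} days lost")
print(f"Radiation: ~{radiation_days_lost} days lost "
      f"(~{days_lost_per_msv:.2f} days per mSv)")
print(f"Ratio: the smoking risk is ~{smoking_days_lost / radiation_days_lost:.0f}x larger")
```

On these numbers a lifetime smoking habit costs roughly 150 times more life expectancy than the extra radiation dose, which is the kind of comparison the paragraph is making.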

Economic arguments favour coal over nuclear power when waste management and decommissioning are taken into account. Nuclear plants are very expensive to build (and dismantle), although estimated capital costs for advanced coal plants with carbon capture and sequestration appear to be on par with those of nuclear power plants [13]. The cost to run and maintain coal plants can be higher than that of nuclear power plants, in part because of the cost of transporting coal. A major concern with both nuclear and coal power plants is that once built, they are likely to be around for a long time, because the infrastructure is so costly to develop. Public pressure will be needed to ensure that these plants are closed as soon as clean energy sources become available.

In summary, although recent events at Fukushima warn us that safety standards and compliance must be improved, nuclear power plants operating normally emit less greenhouse gas and fewer toxic pollutants, cause less global environmental damage, and create fewer health issues than coal-burning power plants. Neither represents a safe, sustainable energy choice, but given a choice between the two, nuclear power comes out on top. According to Walter Keyes, a proponent of nuclear power who has worked as an energy consultant for the Saskatchewan and federal governments, “If climate change really is the serious global issue that most scientists believe it is, there is a very limited amount of time to fix the problem and we should not be wasting valuable time debating which non-GHG (greenhouse gas) generation source is the best – we need them all, desperately!” [14].


1. Bill Gates on Energy: Innovating to Zero! TED talks, February, 2010. http://www.ted.com/talks/bill_gates.html

2. UN Summary of the Chernobyl Forum, Chernobyl’s Legacy: Health, Environmental and Socio-Economic Impacts, IAEA, 2006. http://www.iaea.org/Publications/Booklets/Chernobyl/chernobyl.pdf

3. Cardis E, Krewski D, Boniol M, Drozdovitch V, Darby SC, Gilbert ES, et al. Estimates of the cancer burden in Europe from radioactive fallout from the Chernobyl accident. Int J Cancer 119: 1224–1235, 2006.

4. US Environmental Protection Agency, Power plant, mercury and air toxics standards, March, 2011. http://www.epa.gov/airquality/powerplanttoxics/pdfs/overviewfactsheet.pdf

5. World Health Organization. Estimated deaths and DALYs linked to environmental risk factors. http://www.who.int/quantifying_ehimpacts/countryprofilesebd.xls

6. International Energy Agency, Environmental and health impacts of electricity generation, June 2002 (Table 9.9) http://www.ieahydro.org/reports/ST3-020613b.pdf

7. Canadian Medical Association, June 27, 2000. http://www.collectionscanada.gc.ca/eppp-archive/100/201/300/cdn_medical_association/cmaj/cmaj_today/2000/06_27.htm

8. Ontario Medical Association Illness Costs of Air Pollution (ICAP) – Regional Data for 2005. https://www.oma.org/Resources/Documents/d2005IllnessCostsOfAirPollution.pdf

9. McBride JP, Moore RE, Witherspoon JP, Blanco, RE. Radiological impact of airborne effluents of coal and nuclear plants. Science, 202: 1045-1050, 1978.

10. Dellantonio A, Fitz WJ, Repmann F, Wenzel WW. Disposal of coal combustion residues in terrestrial systems: contamination and risk management. J Environ Qual. 39:761-75, 2010

11. U.S. Nuclear Regulatory Commission, Instruction concerning risks from occupational radiation exposure. Regulatory Guide 8.29, Feb. 1996. http://www.nrc.gov/reading-rm/doc-collections/reg-guides/occupational-health/rg/8-29/08-029.pdf

12. Hinton TG, Alexakhin R, Balonov M, Gentner N, Hendry J, Prister B, Strand P, Woodhead D. Radiation-induced effects on plants and animals: Findings of the United Nations Chernobyl Forum. Health Physics 93: 427-440, 2007.

13. US Department of Energy/Energy Information Administration, Levelized cost of new generation resources in the annual energy outlook 2011. http://www.eia.gov/oiaf/aeo/electricity_generation.html

14. Howell, G and Keyes W, Green (renewable) energy versus nuclear energy. Part five of an eight part written debate regarding nuclear power generation. Mile Zero News and Banner Post, March 17, 2010. http://www.computare.org/Support%20documents/Guests/MZN%20Nuclear%20Debate/5%20of%208%20Green%20Energy%20Howell-Keyes.pdf

Ethanol from trees – a fuelish concept

by Stan Hirst

I have meticulously calculated, on the back of an envelope, that there are at least a gazillion trees in British Columbia. And, using another envelope, I figured that if one excludes trees growing in national parks, old-growth forests, recreation areas, parks and other areas where you don’t want guys in plaid shirts and yellow hardhats hanging out with chain saws, then there would still be a humongous number of trees available for harvesting (on a sustainable basis, of course).

I dug out my chemistry notes from a half century ago and found a neat diagram which showed that if you started with a box marked “cellulose” on one side of the page, and then drew lots of squiggly lines and arrows in all directions, you could wind up with another box named “ethanol” on the other side of the page. Now, for those of you who didn’t take chemistry fifty years ago, I can tell you that trees contain lots of cellulose, in fact cellulose makes up about half the mass of the average tree. And, for those of you who maybe don’t get out much, I can divulge that ethanol makes passable motor fuel.

So, using my last envelope, I deduced that if British Columbia set to work making ethanol from wood cellulose, we would end up with enough home-grown fuel to run all our trucks, SUVs, cars and scooters, and still have enough left over to send to Alberta.

Now, you might ask, why on earth would we want to do that? Well, for a number of reasons, some sound and some less so. One major reason is to replace or reduce the amount of fossil fuels used by motor vehicles and thereby cut back on carbon dioxide emissions, which in turn will lead to a reduction in the overall carbon loading in the atmosphere.

Carbon dioxide (CO2) is one of the exhaust products produced when gasoline is burned as a fuel to power a motor. Burning just 1 litre of regular gasoline produces 2.4 kilograms of CO2 (most of the weight of that CO2 comes from the oxygen it picks up, not the carbon). CO2 is also produced in large quantities by the combustion of coal, oil and natural gas in power stations, industry and homes. We’ve been spewing the stuff into the atmosphere since the industrial revolution, with the result that the global atmosphere now contains in excess of 390 parts per million of CO2, an increase of 40% since the 1800s. CO2 is a potent greenhouse gas: it absorbs a significant part of the long-wave radiation beamed upward from the earth’s surface, then radiates part of that energy back downwards to add to the surface warming. More CO2 in the atmosphere thus means more returned heat. As a result, the mean global temperature has risen by 0.8°C over the past century, giving rise to a multitude of climatic changes, many of which may well be irreversible.
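
The 2.4 kg figure is simple stoichiometry: carbon in the fuel picks up oxygen from the air when it burns. The density and carbon fraction below are typical handbook values, assumed here rather than given in the text:

```python
# Why 1 L of gasoline yields ~2.4 kg of CO2: each 12 g of carbon combines
# with 32 g of atmospheric oxygen on combustion. Density (0.74 kg/L) and
# carbon mass fraction (87%) are assumed typical values for gasoline.
density_kg_per_l = 0.74
carbon_fraction = 0.87
co2_to_c_mass_ratio = 44 / 12   # molar masses: CO2 = 44 g/mol, C = 12 g/mol

carbon_kg = density_kg_per_l * carbon_fraction
co2_kg = carbon_kg * co2_to_c_mass_ratio
print(f"{carbon_kg:.2f} kg of carbon -> {co2_kg:.1f} kg of CO2 per litre")
```

Roughly 0.64 kg of carbon leaves the tailpipe as about 2.4 kg of CO2, which is why the exhaust outweighs the fuel.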

There are many ways to reduce CO2 outputs from vehicles. The best way, and you don’t need an envelope to figure this one out, is simply not to use the vehicle at all. That’s not a favoured choice for most of us in the 21st century. The next best way is to use vehicles less or use fewer vehicles by switching to mass transit or pedal power. Other ways are to use really, really fuel efficient vehicles or even vehicles which don’t need petroleum-based fuels, e.g. electric cars. And yet another approach is to switch from petroleum-based fuels to biofuels such as ethanol made from starch, dextrose, cellulose and other green plant-derived feedstocks.

This confuses a lot of folks. Ethanol, like any carbon-based fuel, also produces CO2 when burned: a litre of ethanol yields about 1.5 kg of CO2, less than the output from the combustion of the same volume of gasoline. But ethanol’s energy content is also about 40% lower than that of regular fossil-fuel gasoline, so one needs to burn more of it to get the same power from a motor, which means more CO2. By one estimate, using ethanol as a motor fuel (blended in with regular gasoline) produces 54% more CO2 per kilometre driven than conventional gasoline alone.
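
The same stoichiometry applies to ethanol (C2H5OH). All the physical constants below are standard handbook values, assumed rather than taken from the text; they let the per-litre figure and the energy penalty be checked side by side:

```python
# Ethanol combustion: C2H5OH + 3 O2 -> 2 CO2 + 3 H2O.
# Densities and energy contents are typical handbook values (assumed).
ethanol_density = 0.789        # kg/L
ethanol_molar_mass = 46.07     # g/mol
co2_molar_mass = 44.01         # g/mol

co2_per_l_ethanol = ethanol_density * 2 * co2_molar_mass / ethanol_molar_mass
print(f"CO2 per litre of ethanol: {co2_per_l_ethanol:.2f} kg")

# Per unit of usable energy the two fuels come out close, because
# ethanol's lower energy density offsets its lower per-litre CO2:
gasoline_co2_per_l, gasoline_mj_per_l = 2.36, 32.0
ethanol_mj_per_l = 21.2
print(f"Gasoline: {gasoline_co2_per_l / gasoline_mj_per_l:.3f} kg CO2/MJ")
print(f"Ethanol:  {co2_per_l_ethanol / ethanol_mj_per_l:.3f} kg CO2/MJ")
```

On these assumptions the two fuels emit roughly the same CO2 per unit of usable energy at the tailpipe; per-kilometre estimates that show ethanol well ahead presumably also count emissions from growing and distilling the feedstock.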

So then, why even think of using ethanol as motor fuel at all? One good reason (which is often overlooked in the ethanol versus gasoline debate) is that the carbon released as CO2 when ethanol fuel is burned is recent carbon which was in the atmosphere (as CO2) just a few growing seasons before the ethanol was distilled from corn, sugar cane or whatever plant feedstock was used. And that released CO2 will be drawn back out of the atmosphere again when the newly replanted crops of corn, sugar cane and the like are actively growing and fixing energy through photosynthesis. Contrast this with the carbon released from fossil fuels which is old. Actually very old – fossil fuels were formed by compaction and anaerobic decomposition of buried dead organisms millions of years ago. This means that CO2 derived from the combustion of petroleum and other fossil fuels is added to the existing carbon loading of the atmosphere, hence the steady rise in global atmospheric CO2 over the last two centuries.

Ethanol production in the U.S.A. has more than doubled during the past decade, from 6.2 billion litres in 2000 to 13.2 billion litres in 2010. By comparison, Canada’s ethanol production now exceeds 2 billion litres annually, virtually all of it from prairie-based corn and cereal crops. Enerkem operates a small ethanol plant at Westbury, Quebec, using wood waste (used telephone poles) as feedstock.

The proportion of ethanol in commercially available gasoline continues to hover around 10% which is the safe maximum proportion for engines built for regular petroleum-based fuels. Flex-fuel vehicles can use up to 85% ethanol in their fuel and are slowly increasing in availability across North America. Ethanol production from corn has worked so well because it is chemically easy to change the starch in mashed corn into dextrose and from there to ethanol by fermentation.

Many factors drive the recent increase in ethanol production in the U.S., including corn crop subsidization schemes, powerful farming lobbies in Washington, and strategic concerns over oil imports. An increasingly difficult issue for the fuel ethanol industry is the rise in food prices, brought about by land competition between fuel and food production. The Food & Agriculture Organization of the United Nations estimates that about 40% of recent global food price increases are due to land competition between food farming and the ethanol industry in the U.S. and elsewhere. The rest of the increase has been brought about by economic factors, droughts and increasing human populations.

Which, cunningly, brings me back to the subject of trees in British Columbia, which are a vast potential source of ethanol fuel. Not necessarily the whole tree. Waste produced during normal tree harvesting (tree tops, branches, bark) makes up 25-35% of the overall tree volume, and the proportion of waste is increasing as the number of trees damaged or killed by the Pine Beetle epidemic increases. The annual wood waste production in Canada now exceeds 35 million tonnes, of which over 26 million tonnes are produced in British Columbia.

The main constituent of wood, lignocellulose, is composed of three organic components – cellulose, hemicellulose and lignin. The cellulose part is a polysaccharide, i.e. a very long molecular chain of dextrose molecules, which can be converted into ethanol. But the cellulose has first to be separated from the other two components. Industrial plants typically use a variety of chemical treatments involving acids, ammonia, sulphites and/or solvents, as well as physical treatments involving steam or ozone to separate cellulose from lignin in wood-derived feedstocks.

Thanks to a few million years of biological evolution, cellulose is a very durable substance. Cellulose in discarded cotton in landfills (mainly from “disposable” diapers) has been found to remain intact for more than 20 years. Present-day industrial processes use two basic approaches to break cellulose down into its dextrose components. One approach uses chemical hydrolysis, i.e. attacking the cellulose with dilute acid under conditions of high heat and high pressure. The second uses enzymatic hydrolysis. Enzymes are proteins, produced by living organisms, which catalyze (i.e. assist) chemical reactions. The prospect of being able to derive huge quantities of chemical energy in the form of ethanol fuel from cellulose has spurred technological innovation on an impressive scale within the past decade, and diverse approaches are being developed. Many technology companies are combining chemical and enzymatic treatments to make the cellulose breakdown process faster and more efficient. There are now upwards of 25 companies developing cellulosic ethanol plants in the U.S. In Canada, Iogen Corporation has built a demonstration facility near Ottawa to produce ethanol from the cellulose in wheat, barley and oat straw, and Lignol has established a Cellulosic Ethanol Development Centre in Vancouver, consisting of a pilot plant and an enzyme development laboratory.

Would the widespread and increased use of ethanol (and other similar biofuels such as biodiesel made from plant oils) as motor fuel make a significant environmental difference? The only valid way of comparing ethanol and fossil-derived fuels is in terms of their life cycles, which measure full energy and carbon balances. This means calculating and comparing the energy and carbon contents of commercially-produced ethanol and oil-derived gasoline from the very start of the manufacturing process all the way through to the final stage when they are burned as fuels and release their energy and carbon. For fossil fuels this is a well-to-wheel analysis, i.e. starting at the oil well where the crude oil is tapped and ending at the gas pump which supplies the motor vehicle with fuel. For ethanol, the process starts with the harvesting and processing of the green plants which are used as feedstock and ends at the gas pump. In both cases, all the energy used by the production processes is included in the analysis, including the energy costs of the refineries, transportation, farming the crops and crop fertilization. The analyses also credit the energy values of useful by-products from fuel synthesis such as co-generated electricity and feed for livestock.

The results of these types of analyses show that ethanol production and use as a fuel can indeed reduce overall carbon emissions, but the outcome depends on the way it is produced and used. Overall vehicle CO2 emissions may hardly change at all if the fuel refineries use fossil fuels such as coal or oil as a power source and the ethanol added to gasoline does not exceed 10% (the present standard for gasoline in B.C.). On the other hand, total emissions can be cut by more than 50% if refineries are powered by natural gas or, better yet, a waste product such as wood chips, and if the ethanol contribution to motor fuel is hiked to 85%.
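A toy calculation shows why the blend level dominates the result. The per-litre lifecycle savings used below (10% for a coal-powered refinery, 70% for a biomass-powered one) are illustrative assumptions, not measured values:

```python
# Illustrative blend arithmetic; the per-litre lifecycle savings
# (10% and 70%) are assumptions for the two scenarios, not data.
def blend_saving(ethanol_vol_frac, ethanol_lifecycle_saving,
                 ethanol_energy_per_litre_vs_gasoline=0.6):
    """Approximate lifecycle CO2 saving of an ethanol/gasoline blend,
    weighting each component by the energy it actually delivers."""
    e_energy = ethanol_vol_frac * ethanol_energy_per_litre_vs_gasoline
    g_energy = 1.0 - ethanol_vol_frac
    energy_share = e_energy / (e_energy + g_energy)
    return energy_share * ethanol_lifecycle_saving

# E10 from a coal-powered refinery (small per-litre saving) vs
# E85 from a biomass-powered refinery (large per-litre saving):
print(f"E10, 10% per-litre saving: {blend_saving(0.10, 0.10):.1%} overall")
print(f"E85, 70% per-litre saving: {blend_saving(0.85, 0.70):.1%} overall")
```

Under these assumptions E10 shaves well under 1% off total emissions, while E85 comes out above 50%, consistent with the ranges described above.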

What life-cycle analysis does not do, at present, is include the costs of responding and adapting to climate change, or decide how much of those costs should be charged to the account of fossil fuels versus that of ethanol.

The playing field in the market is not yet level for biofuels. Research and development funding for ethanol development in Canada from 2006 through 2008 was $300 million annually. By comparison, the Alberta Oil Sands currently receive tax subsidies in excess of $1 billion annually. The refinery cost of ethanol is close to 55¢ per litre, but if the price is adjusted for energy content, then ethanol energy costs roughly the same as fossil fuel gasoline energy to produce, i.e. about 90¢ per litre. But there is lots of wood feedstock available, and ethanol fermentation technology is moving ahead by leaps and bounds, so future ethanol production costs are likely to be trimmed. Fossil fuel production, on the other hand, may face significant future cost increases as supplies of easy oil become ever more difficult to retrieve and refine.

What is the future of ethanol production from wood and its acceptance into the marketplace? I think I need another envelope.

We’re doomed

The following post contains material of a depressing nature, and is unsuitable for readers under 65 years of age. Reader discretion is advised.

First point – the global climate is changing. Not many people dispute that any more. The mean global temperature has risen by 0.8°C over the past century, and the ten warmest years on record have all occurred since 1998. Within the past century many significant climate changes have been measured and reported, including increases in the frequency of heat waves in the U.S., an increasing proportion of precipitation coming in the form of intense, flood-inducing events, an increase in tropical cyclone intensity in the Atlantic Ocean, Caribbean, and Gulf of Mexico, a huge decrease in the seasonal extent of Arctic sea ice, and a big jump in the rate at which glaciers are melting.

The rates of change seem to be accelerating and most of the profound secondary changes are negative. Dr James Hansen, the NASA scientist who first drew international attention to the impending climate disaster, testified way back in 1988 that Earth had entered a long-term warming trend. Today the effects of global warming on the extremes of the global water cycle – stronger droughts and forest fires on the one hand, and heavier rains and floods on the other – have become more evident in Australia, Europe, North America, Africa and Asia.

Second point – the causal factors of climate change are now very well known. Earth is surrounded by a relatively thin layer of greenhouse gases – water vapour, carbon dioxide (CO2), methane and nitrous oxide – which act as a thermal blanket. About half the incoming solar radiation passes through the atmosphere to the Earth’s surface, where some is absorbed and the remainder reflected. Substantial amounts of the absorbed energy are re-radiated outward in the form of infrared heat, much of which is intercepted by the greenhouse gases and re-radiated back toward the surface. This is what warms the atmosphere.

Third point – humanity has drastically changed global climatic dynamics by adding huge amounts of CO2, methane, nitrous oxide and chlorofluorocarbons to the atmosphere. Activities such as deforestation, land use changes and the burning of fossil fuels have increased atmospheric CO2 by a third since the Industrial Revolution began. Decomposition of wastes in landfills, burgeoning agriculture, especially rice cultivation, and huge populations of burping and manure-producing domestic livestock have boosted the amounts of methane in the atmosphere by a factor of three since the industrial revolution. Methane is twenty times more active than CO2 in atmospheric heat retention.

The atmospheric concentration of CO2 measured at the Mauna Loa Observatory in Hawaii is a good indicator of where we are now globally in respect of atmospheric change. Back in 1959, when the data collection programme was initiated by the National Oceanic and Atmospheric Administration (NOAA), the CO2 level was measured at 316 parts per million (ppm) and the annual increase was less than 1 ppm. Today the level is over 392 ppm and the annual increase is 2.2 ppm and getting larger all the time.
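As a hedged thought experiment, one can project when the 450 ppm mark might be passed if the annual increment keeps creeping up. The 392 ppm level and the 2.2 ppm annual rise come from the figures above; the 2011 starting year and the 1% per year growth of the increment are my assumptions:

```python
# Thought experiment: when might 450 ppm be passed if the annual
# increment keeps growing? 392 ppm and 2.2 ppm/yr come from the text;
# the 2011 start year and the 1%/yr growth of the increment are assumptions.
level, increment, year = 392.0, 2.2, 2011
while level < 450.0:
    level += increment        # one year's rise
    increment *= 1.01         # assumed slow acceleration of the annual rise
    year += 1
print(f"passes 450 ppm around {year} (level {level:.1f} ppm)")
```

Under these assumptions the 450 ppm threshold falls in the mid-2030s; a faster-growing increment pulls it earlier still.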

James Hansen and his climate scientist colleagues concluded that we have either reached, or are very close to, a set of climate “tipping points”. That means that climatic changes are now at a point where the feedbacks from changes spur even larger and more rapid further changes. Hansen cites Arctic sea ice as a good example of this. Global warming has initiated faster sea ice melt and has exposed darker ocean surfaces that absorb more sunlight which leads to more melting of ice. As a result, and without any additional greenhouse gases, the Arctic could soon be ice-free in the summer. The western Antarctic and Greenland ice sheets are vulnerable to even small additional warming – once disintegration gets well under way it will become unstoppable.

Pause for reality check – not only is climatic change a reality, it is progressing at an accelerating rate, the negative consequences are getting greater, and the likelihood of us managing to slow or reverse the negative trends is getting smaller.

Fourth point – James Hansen and his fellow climate scientists looked at the atmospheric CO2 levels, then at the changes in climate which were occurring, and came up with the recommendation that a CO2 level of 350 ppm (last recorded back in 1987) was pretty much the upper allowable limit if massive climate-related adverse effects were to be avoided. The number 350 has a certain appealing ring to it, and has been widely adopted by environmental organizations such as Bill McKibben’s 350.org as a universal target for citizen and government action on carbon emissions. The protagonists are quite aware that the present global atmospheric CO2 level has already overshot that target by more than 40 ppm, but they argue, convincingly, that a reversal is absolutely essential to safeguard our long-term global future.

Fifth point – and now we’re at the crux of the problem. How on Earth, or anywhere else for that matter, do we get anywhere close to reducing the rate at which atmospheric CO2 increases in future, never mind actually reversing the trend towards 350 ppm?

We think of Earth’s carbon reservoirs as being great fields of coal and petroleum compounds, which are more or less stable until we dig them up and burn them. But the globe’s biggest carbon reservoirs are in the atmosphere, the ocean, living ecosystems and soils, and are highly dynamic. They all exchange CO2 with the atmosphere, they both absorb it (oceans) and assimilate it (ecosystems), and they release it (oceans) or respire it (ecosystems). The critical point is that anthropogenic carbon emitted into the atmosphere is not destroyed but adds to the stockpile and is redistributed among the other carbon reservoirs. The turnover times range from years or decades (living plants) to millennia (the deep sea, soil). The bottom line is that any carbon released into the atmosphere is going to be around for a long, long time. Up to 1000 years in fact.

Sixth point – so how do we get from our present scene of 390 ppm CO2 in the atmosphere and impending climate doom to something closer to 350 ppm and a more stable climate scenario? Straight answer – we cannot. We simply don’t have that option.

Seventh point – the absolutely best case scenario for reduction of CO2 emissions to the atmosphere would be an immediate halt to all activities leading to anthropogenic carbon emissions. Park all motor vehicles, no more home heating, no coal-fired power plants, no burning of natural gas, no aircraft flying overhead, shoot and bury 90% of all domestic livestock. Just shut down all of human civilization. No more anthropogenic carbon emissions. Would this sacrifice bring the CO2 level down in a hurry?

Dr Susan Solomon and her colleagues at NOAA, with the help of their sophisticated computer models, have addressed that very question. They ran a coupled climate–carbon cycle model which has components representing the dynamic ocean, the atmospheric energy–moisture interaction, and interactive sub-models of marine and terrestrial carbon cycles. The model reveals, sadly for us, that climate change is largely irreversible for 1000 years after all carbon emissions cease. The drop in radiative forcing of atmospheric CO2 (i.e. the extent to which CO2 causes atmospheric warming) is largely compensated by slower loss of heat to the oceans. So atmospheric temperatures do not drop significantly for at least 1,000 years. And the natural interactive processes between the atmosphere, ocean and ecosystems would carry on. Atmospheric CO2 concentration would eventually drop back to 350 ppm by about 2060 and then flatten out to near 300 ppm for the rest of the 1000 years.

Eighth point – I haven’t noticed any great urge on our part to go and huddle in caves and gnaw on pine nuts and raw fish (no wood-burning allowed) to make this scenario work, so what is more likely?

Global carbon emissions from fossil fuel use were 6.2 billion tonnes back in 1990, when global CO2 was near 355 ppm. The 2010 estimate is 8.5 billion tonnes. That’s a 38% increase over the levels used to formulate the Kyoto Agreement. The annual growth rate of emissions derived from fossil fuels is now about 3.5%, an almost four-fold increase from the 0.9% per year of the 1990-1999 period. Carbon emissions from land-use change (i.e. mainly deforestation) in 2007 (in just that one year) were estimated at 1.5 billion tonnes of carbon. The biggest increase in emissions has taken place in developing countries, largely in China and India, while emissions in developed countries have grown more slowly. The largest regional shift has been that China passed the U.S. in 2006 to become the largest CO2 emitter, and India will soon overtake Russia to become the third largest. Currently, more than half of global emissions come from less developed countries, yet developing countries, with 80% of the world’s population, still account for only 20% of the cumulative emissions since 1751. There is nowhere for these rates to go, other than up.
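The growth rates quoted above translate into sobering doubling times. A quick sketch (plain compound-growth arithmetic, nothing assumed beyond the rates themselves):

```python
import math

# Doubling times implied by the quoted emission growth rates
# (standard compound-growth arithmetic).
def doubling_time(annual_rate):
    """Years for a quantity to double at a constant annual growth rate."""
    return math.log(2.0) / math.log(1.0 + annual_rate)

print(f"0.9%/yr (1990s rate):  {doubling_time(0.009):.0f} years to double")
print(f"3.5%/yr (current rate): {doubling_time(0.035):.0f} years to double")
```

At the 1990s rate, emissions would have taken nearly 80 years to double; at the current 3.5% rate, only about 20.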

When the Intergovernmental Panel on Climate Change produced their Fourth Assessment Report in 2007, they diplomatically tried to hedge their bets, churning out 40 different model runs based on emissions scenarios for the decade 2000-2010 which encompassed the full range of uncertainties related to future carbon emissions, demographic, social and economic inputs, and possible future technological developments. The model predictions were correspondingly wide, ranging from “best” to “worst” in terms of atmospheric CO2 levels and changes in the associated climatic driving forces. It has now become apparent that the actual emissions growth rate for 2000-2007 exceeded the highest growth rates forecast for 2000-2010 in their emissions scenarios.

Ninth point – so the most likely future outcomes (by the end of the century) are those at the top end of the scale produced by the computer models. That is to say our grandchildren will be looking at CO2 levels above 900 ppm, mean global temperature rises of 5 or 6 degrees C over what they are today, and an average sea level rise above 0.5 metres. Plus all the storms, cyclones, droughts, floods, vanishing shorelines, water wars and famines that might creep in along the way.

The end – CO2 concentrations in the atmosphere and future temperatures are just numbers, and pretty much the only things that computer models can output. We will have to estimate the extent of global human misery by ourselves.

An elder’s guide to climate scepticism

by Stan Hirst 

The other elders may drive me from the village with brooms and pitchforks when they read my confession. But the truth must out. I am, alas, a sceptic.

I am sceptical, as well as skeptical, that my beloved Earth is going to self-destruct on 21 December 2012. I think it’s more likely the Mayans ran out of wild fig bark on which they were drawing their calendars. I am sceptical that I am by nature diplomatic, charming and easygoing because Jupiter was hanging out with Venus in the Fourth House of the night sky right about the time I came into the world seventy-odd years ago. I am sceptical that the people responsible for the multi-billion dollar homeopathic remedy business have never learned to spell the words p-l-a-c-e-b-o and g-u-l-l-i-b-i-l-i-t-y. And all this scepticism flies in the teeth of the billions of people worldwide who buy into this stuff.

We sceptics are in good company. Albert Einstein was one.  In 1933 he famously stated that black holes do not and cannot exist. He couldn’t see one and couldn’t find the rationale for them in his famous equations. Today his successors have no such problems and not only think they have identified nearly 30 black hole candidates in the Milky Way galaxy but are now getting the proof that the holes behave in the relativistic way that Einstein’s theories predict.

But I’m concerned that we genuine sceptics are being given a bad name by all these so-called climate change and global warming sceptics out there.

We need to address a few issues to sort out these guys in the black hats. Firstly, what exactly is a sceptic? What is climate? And what is climate change and what does it entail?

The Oxford English Dictionary defines a sceptic as one who maintains a doubting attitude with reference to some particular question or statement. Michael Shermer, the entertaining editor of Skeptic magazine, enlarges the concept thus: “Modern skepticism is embodied in the scientific method that involves gathering data to formulate and test naturalistic explanations for natural phenomena. All facts in science are provisional and subject to challenge, and therefore skepticism is a method leading to provisional conclusions. The key to skepticism is to continuously and vigorously apply the methods of science to navigate the treacherous straits between “know nothing” skepticism and “anything goes” credulity”.

And what is ‘climate’ and how does it differ from ‘weather’?

Weather is the state of the atmosphere at any given moment to the extent that it is hot or cold, wet or dry, calm or stormy, clear or cloudy. The way the concept is used in daily life refers to day-to-day temperature and precipitation activity. By contrast, climate is the term for the average atmospheric conditions over longer periods of time. The difference between the two creates major confusion for many: “How the heck can it be global warming when we’re having record snowfalls in eastern Canada?”

Which leads us to the obvious next question – what is the evidence for climate change?

Lots of prestigious institutions keep honest meteorological data and report their findings. At the national level, Environment Canada reports that the national average temperature for 2010 was 3.0°C above normal, which makes it the warmest year on record since nationwide records began in 1948. The previous warmest year was 1998, at 2.5°C above normal. Four Canadian climate regions (Arctic Tundra, Arctic Mountains and Fiords, North-eastern Forest and Atlantic Canada) experienced their warmest year on record in 2010, and for six other climate regions the year was amongst the 10 warmest recorded. Southern Alberta and Saskatchewan were the only parts of the country with close to normal temperatures. Environment Canada’s national temperature departures table shows that of the ten warmest years, four have occurred within the last decade, and 13 of the last 20 years are listed among the 20 warmest.

At the international level, the Climatic Research Unit of the University of East Anglia has global land and marine surface temperature data dating back to 1850. The Unit reports that the years 2003, 2005 and 2010 have been the warmest on record. The mean global temperature has risen by 0.8°C over the past century. The World Meteorological Organization reports that the ten warmest years on record have all occurred since 1998.

The U.S. Environmental Protection Agency has carefully summarized all the salient indicators of climate change occurring within the past century. These include:

  • heat waves – the frequency of heat waves in the U.S. has risen steadily since 1970, and the area within the U.S. experiencing heat waves has increased;
  • average precipitation has increased since 1901 at an average rate of more than 6 percent per century in the U.S. and nearly 2 percent per century worldwide;
  • heavy precipitation – in recent years, a higher percentage of precipitation in the U.S. has come in the form of intense single-day events; eight of the top 10 years for extreme one-day precipitation events have occurred since 1990;
  • tropical cyclone intensity in the Atlantic Ocean, Caribbean, and Gulf of Mexico has risen noticeably over the past 20 years; six of the 10 most active hurricane seasons have occurred since the mid-1990s; this increase is closely related to variations in sea surface temperature in the tropical Atlantic;
  • Arctic sea ice – September 2007 had the lowest ice coverage of any year on record, followed by 2008 and 2009; the extent of Arctic sea ice in 2009 was 24 percent below the 1979 to 2000 historical average;
  • glaciers around the world have generally shrunk since the 1960s, and the rate at which glaciers are melting has accelerated over the last decade; overall, glaciers worldwide have lost more than 8000 km3 of water since 1960;
  • lakes in the northern U.S. are freezing later and thawing earlier than they did in the 1800s and early 1900s; the length of time that lakes stay frozen has decreased at an average rate of one to two days per decade;
  • snow cover over North America has generally decreased since 1972 (although there has been much year-to-year variability); snow covered an average of 8 million km2 of North America during the years 2000 to 2008, compared with 8.8 million km2 during the 1970s.

So we honest sceptics have no issue with the evidence for global warming. It’s incontrovertible. Not even Sarah Palin could refudiate it.

What about the evidence for anthropogenic inputs to global climate change? In other words, to what extent are human activities, specifically the emission of carbon dioxide, methane and other greenhouse gases, responsible for the global warming observed to date?

Total global greenhouse gas emissions (expressed as carbon dioxide equivalents) are nearing 30 billion metric tonnes per year. As a result, mean global atmospheric carbon dioxide concentration has gone from about 280 parts per million during pre-industrial times to more than 380 parts per million today. Earlier CO2 data were collected from ice cores in eastern Antarctica and have been the subject of dispute by so-called climate sceptics, but the modern-day data come from state-of-the-art instrumentation on Mauna Loa in Hawaii and are incontestable. From 1990 to 2008 the radiative forcing of all the greenhouse gases in the Earth’s atmosphere increased by about 26 percent, with the rise in carbon dioxide concentrations accounting for approximately 80 percent of this increase.

It turns out that atmospheric CO2 is not homogeneous. Some of it contains carbon-12, the rest carbon-13 (one more neutron per atom than carbon-12). Green plants prefer carbon-12 in their photosynthetic reactions. When fossil fuels, which are derived from ancient plants, are burned, their carbon-12 is released into the atmosphere. Over time the continuous carbon-12 emissions change the atmospheric proportion of carbon-13 to carbon-12, and this proportion can be measured in corals and sea sponges. So not only have background levels of CO2 increased over the past century, the increases are directly linked to fossil fuel burning. And we honest sceptics are still cool with the concept.

Next question – is the extra anthropogenically-derived CO2 responsible for the observed warming trend? The so-called ‘greenhouse’ effect of CO2 is well-known, and can easily be measured in a laboratory. But it has also been measured globally over the past 30 years by satellite-mounted infrared sensors and found to be significant. Moreover, the amounts of global atmospheric downward long wave radiation over land surfaces measured from 1973 to 2008 have been examined and found to be significant in contributing to the global greenhouse effect.

The U.S. Environmental Protection Agency’s summary includes some biological indicators of long-term climate change in the U.S.:

  • the average length of the growing season in the lower 48 states has increased by about two weeks since the beginning of the 20th century; a particularly large and steady increase having occurred over the last 30 years;  the observed changes reflect earlier spring warming as well as later arrival of fall frosts, and the length of the growing season has increased more rapidly in the west than in the east.
  • plant hardiness zones have shifted northward since 1990, reflecting higher winter temperatures in most parts of the country; large portions of several states have warmed by at least one hardiness zone;
  • leaf and bloom dates of lilacs and honeysuckles in the lower 48 states are now a few days earlier than in the early 1900s;
  • bird wintering ranges have shifted northward by an average of 56 km since 1966, with a few species shifting by several hundred kilometres; many bird species have moved their wintering grounds farther from the coast, consistent with rising inland temperatures.

So there you have it. Take all the scientific evidence available and it would be difficult indeed not to concur with the 97 out of 100 climate experts who think that humans are indeed causing global warming.

So, if the evidence satisfies the honest sceptics amongst us, i.e. those who take the time to seek out and evaluate the evidence and try their level best to come to an honest and defensible conclusion, why then is there a substantial body of opinion which holds countervailing views, i.e. that there is no warming or climate change (it’s all just natural variation), or that there is change but we ain’t responsible (it’s Mother Nature’s fault)?

That would be the subject of future postings from the Elders. It opens up the opportunity for some innovative taxonomy of climate change personalities, but I’ll leave the naming to others!
