Ticking Time Bomb

The worst gas yet

A new greenhouse gas that is 7,000 times more powerful than carbon dioxide at warming the Earth has been discovered by researchers in Toronto.

 The newly discovered gas, perfluorotributylamine (PFTBA), has been in use by the electrical industry since the mid-20th century.

 The chemical, which does not occur naturally, breaks all records for potential impacts on the climate, said the researchers at the University of Toronto's department of chemistry.

 "We claim that PFTBA has the highest radiative efficiency of any molecule detected in the atmosphere to date," said Angela Hong, one of the co-authors.

 The study, published in the journal Geophysical Research Letters, found PFTBA was 7,100 times more powerful at warming the Earth over a 100-year time span than CO2.

 Concentrations of PFTBA in the atmosphere are low – 0.18 parts per trillion in the Toronto area – compared to 400 parts per million for carbon dioxide. So PFTBA does not in any way displace the burning of fossil fuels such as oil and coal as the main drivers of climate change.
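
To see why in rough numbers, here is a back-of-envelope comparison using only the figures quoted above (my own sketch, not a calculation from the study; it treats the 7,100x factor as applying directly to the mixing ratios):

    # Rough CO2-equivalent comparison using the concentrations and the
    # 100-year warming factor quoted above (illustrative only).
    pftba_ppt = 0.18      # PFTBA, parts per trillion (Toronto area)
    co2_ppm = 400.0       # carbon dioxide, parts per million
    gwp_pftba = 7100      # PFTBA vs CO2 over a 100-year span

    pftba_ppm = pftba_ppt * 1e-6                 # convert ppt -> ppm
    pftba_co2_equivalent = pftba_ppm * gwp_pftba

    print(f"PFTBA in CO2-equivalent terms: {pftba_co2_equivalent:.5f} ppm")
    print(f"Ratio to actual CO2: {pftba_co2_equivalent / co2_ppm:.1e}")
    # ~0.0013 ppm CO2-equivalent versus 400 ppm of CO2 itself -- roughly
    # 300,000 times smaller, which is why CO2 remains the dominant driver.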

 Dr Drew Shindell, a climatologist at Nasa's Goddard Institute for Space Studies, said:

"This is a warning to us that this gas could have a very very large impact on climate change – if there were a lot of it. Since there is not a lot of it now, we don't have to worry about it at present, but we have to make sure it doesn't grow and become a very large contributor to global warming.".

 He said a number of recent studies had drawn attention to other potential new greenhouse gases which, like PFTBA, pack a lot of warming potential in each molecule but are not very prevalent in the atmosphere.

 Such studies were a warning against increasing uses of such compounds without first understanding their impact on climate change, he added.

 "From a climate change perspective, individually, PFTBA's atmospheric concentration does not significantly alert the phenomenon of climate change," Hong said. "Still the biggest culprit is CO2 from fossil fuel emissions."

 But PFTBA is long-lived. The Toronto researchers estimated PFTBA remains in the atmosphere for about 500 years, and unlike carbon dioxide, which is taken up by forests and oceans, there are no known natural "sinks" on Earth to absorb it.

 "It is so much less than carbon dioxide, but the important thing is on a per molecule basis, it is very very effective in interacting with heat from the Earth," she said. "Individually each molecule is able to affect the climate potentially and because its lifetime is so long it also has a long-lasting effect."

 Hong said the discovery of PFTBA and its warming potential raises questions about the climate impacts of other chemicals used in industrial processes.

 PFTBA has been in use since the mid-20th century for various applications in electrical equipment, such as transistors and capacitors. The researchers said it was unclear how widespread its use was today.

 It belongs to an entire class of chemicals used for industrial applications whose effects on the atmosphere remain unknown.

 "PFTBA is just one example of an industrial chemical that is produced but there are no policies that control its production, use or emission," Hong said. "It is not being regulated by any type of climate policy."

 
Global warming warning

A landmark report says scientists are 95% certain that humans are the "dominant cause" of global warming since the 1950s. In the report, the UN's climate panel details the physical evidence behind climate change.

On the ground, in the air, in the oceans, global warming is "unequivocal", it explained.

It adds that a pause in warming over the past 15 years is too short to reflect long-term trends.

The panel warns that continued emissions of greenhouse gases will cause further warming and changes in all aspects of the climate system. "Human influence on the climate system is clear," said Prof Thomas Stocker, co-chair of Working Group One, presenting the new summary, which was adopted by 110 governments by consensus. The report, he stressed, is not about headlines but about scientific assessment.

To contain these changes will require "substantial and sustained reductions of greenhouse gas emissions".

After a week of intense negotiations in the Swedish capital, the summary for policymakers on the physical science of global warming has finally been released.

The first part of an IPCC trilogy, due over the next 12 months, this dense, 36-page document is considered the most comprehensive statement on our understanding of the mechanics of a warming planet.

It states baldly that, since the 1950s, many of the observed changes in the climate system are "unprecedented over decades to millennia".

Each of the last three decades has been successively warmer at the Earth's surface, and warmer than any period since 1850, and probably warmer than any time in the past 1,400 years.

"Our assessment of the science finds that the atmosphere and ocean have warmed, the amount of snow and ice has diminished, the global mean sea level has risen and that concentrations of greenhouse gases have increased," said Qin Dahe, co-chair of IPCC working group one, who produced the report.

Speaking at a news conference in the Swedish capital, Prof Thomas Stocker, another co-chair, said that climate change "challenges the two primary resources of humans and ecosystems, land and water. In short, it threatens our planet, our only home".

Since 1950, the report's authors say, humanity has clearly been responsible for more than half of the observed increase in temperatures.

But a so-called pause in the increase in temperatures in the period since 1998 is downplayed in the report. The scientists point out that this period began with a very hot El Nino year.

What is the IPCC?

In its own words, the IPCC is there "to provide the world with a clear scientific view on the current state of knowledge in climate change and its potential environmental and socio-economic impacts".

The offspring of two UN bodies, the World Meteorological Organization and the United Nations Environment Programme, it has issued four heavyweight assessment reports to date on the state of the climate.

These are commissioned by the governments of 195 countries, essentially the entire world. These reports are critical in informing the climate policies adopted by these governments.

The IPCC itself is a small organisation, run from Geneva with a full-time staff of 12. All the scientists involved with it contribute on a voluntary basis.

"Trends based on short records are very sensitive to the beginning and end dates and do not in general reflect long-term climate trends," the report says.

Prof Stocker added: "I'm afraid there is not a lot of public literature that allows us to delve deeper at the required depth of this emerging scientific question.

"For example, there are not sufficient observations of the uptake of heat, particularly into the deep ocean, that would be one of the possible mechanisms to explain this warming hiatus."

"Likewise we have insufficient data to adequately assess the forcing over the last 10-15 years to establish a relationship between the causes of the warming."

However, the report does alter a key figure from the 2007 study. The temperature range given for a doubling of CO2 in the atmosphere, called equilibrium climate sensitivity, was 2.0C to 4.5C in that report.

In the latest document, the range has been changed to 1.5C to 4.5C. The scientists say this reflects improved understanding, better temperature records and new estimates for the factors driving up temperatures.

In the summary for policymakers, the scientists say that sea level rise will proceed at a faster rate than we have experienced over the past 40 years. Waters are expected to rise, the document says, by between 26cm (at the low end) and 82cm (at the high end), depending on the greenhouse emissions path this century.

The scientists say ocean warming dominates the increase in energy stored in the climate system, accounting for 90% of energy accumulated between 1971 and 2010.

For the future, the report states that warming is projected to continue under all scenarios and is likely to exceed 1.5C by 2100.

"We have found in our assessment analysing these model simulation[s] that global surface temperature change for the end of the 21st Century is likely to exceed 1.5 degrees Celsius relative to 1850 for all scenarios. This is a statement that is adopted by the governments of the world," Prof Stocker told reporters.

Prof Sir Brian Hoskins, from Imperial College London, said: "We are performing a very dangerous experiment with our planet, and I don't want my grandchildren to suffer the consequences of that experiment."

 
Fusion future plans

The world's largest bid to harness the power of fusion has entered a "critical" phase in southern France.

The Iter project at Cadarache in Provence is receiving the first of about one million components for its experimental reactor.

Dogged by massive cost rises and long delays, building work is currently nearly two years behind schedule.

The design of the main building has even been altered to allow for the late delivery of key components.

"We're not hiding anything - it's incredibly frustrating," David Campbell, a deputy director, told BBC News.

"Now we're doing everything we can to recover as much time as possible.

"The project is inspiring enough to give you the energy to carry on - we'd all like to see fusion energy as soon as possible."

 

Fusion facts

 
  • Fusion is the process that powers the stars including the Sun
  • One litre of water contains enough deuterium, when fused with tritium, to produce the equivalent energy of 500 litres of petrol
  • A 1,500MW fusion power station would consume about 600g of tritium and 400g of deuterium a day (see the rough check after this list)
  • The first large-scale use of fusion was by the US military with the detonation of Ivy Mike, a hydrogen bomb, on 1 November 1952.
  • Iter's design involves a tokamak, the Russian word for a ring-shaped magnetic chamber
  • The magnetic field is designed to contain 100 million degree plasma, the temperature required for the fusion process
  • The US, while supporting Iter as a partner, is also funding the National Ignition Facility, which uses lasers to heat and compress hydrogen to the point of fusion
  • South Korea, another Iter partner, is investing $941m in a fusion technology demonstrator, K-DEMO, which could be the first to generate grid power
  • Critics object to further research into nuclear power and question the likely costs of commercial operations
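
As a rough plausibility check on the fuel figures above (my own back-of-envelope arithmetic, not from the article, assuming the standard 17.6 MeV released per deuterium-tritium reaction and roughly 40% conversion of heat to electricity):

    # Check of the "600 g tritium + 400 g deuterium a day" figure for a
    # 1,500 MW station. The 17.6 MeV per reaction and ~40% thermal-to-
    # electric efficiency are my own assumptions, not from the article.
    AVOGADRO = 6.022e23
    MEV_TO_J = 1.602e-13

    tritium_g_per_day = 600.0
    # one tritium nucleus per reaction; the 400 g of deuterium supplies
    # the matching deuteron for each reaction
    reactions_per_day = (tritium_g_per_day / 3.0) * AVOGADRO
    energy_per_day_j = reactions_per_day * 17.6 * MEV_TO_J

    thermal_power_mw = energy_per_day_j / 86400 / 1e6
    electric_power_mw = thermal_power_mw * 0.4

    print(f"Thermal power:  {thermal_power_mw:,.0f} MW")   # ~3,900 MW
    print(f"Electric power: {electric_power_mw:,.0f} MW")  # ~1,500 MW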

 

After initial design problems and early difficulties co-ordinating this unique international project, there is now more confidence about the timetable.

Since the 1950s, fusion has offered the dream of almost limitless energy - copying the fireball process that powers the Sun - fuelled by two readily available forms of hydrogen.  

The attraction is a combination of cheap fuel, relatively little radioactive waste and no emissions of greenhouse gases. 

But the technical challenges of not only handling such an extreme process but also designing ways of extracting energy from it have always been immense. 

In fact, fusion has long been described as so difficult to achieve that it's always been touted as being "30 years away".

Now the Iter reactor will put that to the test. Known as a "tokamak", it is based on the design of Jet, a European pilot project at Culham in Oxfordshire.

It will involve creating a plasma of superheated gas reaching temperatures of more than 200 million C - conditions hot enough to force deuterium and tritium atoms to fuse together and release energy.  

The whole process will take place inside a giant magnetic field in the shape of a ring - the only way such extreme heat can be contained.  

The plant at JET has managed to achieve fusion reactions in very short bursts but required the use of more power than it was able to produce.  

The reactor at Iter is on a much larger scale and is designed to generate 10 times more power - 500 MW - than it will consume.
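
Put another way, this is the fusion "gain factor" Q, the ratio of fusion power out to heating power in. A minimal sketch using only the figures quoted here (the ~50 MW heating figure is inferred from the "10 times more power" target, not stated in the article):

    # Fusion gain factor Q = fusion power out / heating power in.
    fusion_power_mw = 500.0   # Iter's design output, as quoted above
    gain_q = 10.0             # "10 times more power than it will consume"

    heating_power_mw = fusion_power_mw / gain_q
    print(f"Implied heating power: {heating_power_mw:.0f} MW")  # ~50 MW

    # JET, by contrast, achieved Q below 1: as noted above, it used more
    # power than it produced.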

France's fusion reactor would work like the sun

Iter brings together the scientific and political weight of governments representing more than half the world's population - including the European Union, which is supporting nearly half the cost of the project, together with China, India, Japan, Russia, South Korea and the United States.

Contributions are mainly "in kind" rather than in cash with, for example, the EU providing all the buildings and infrastructure - which is why an exact figure for cost is not available. The rough overall budget is described as £13bn or 15bn euros.

But the novel structure of Iter has itself caused friction and delays, especially in the early days.  

Each partner first had to set up a domestic "agency" to handle the procurement of components within each member country, and there have been complications with import duties and taxes.  

Further delay crept in with disputes over access to manufacturing sites in partner countries. Because each part has to meet extremely high specifications, inspectors from Iter and the French nuclear authorities have had to negotiate visits to companies not used to outside scrutiny.

The result is that although a timeline for the delivery of the key elements has been agreed, there's a recognition that more hold-ups are almost inevitable.   

The main building to house the tokamak has been adjusted to leave gaps in its sides so that late components can be added without too much disruption.

The route from the ports to the construction site has had to be improved to handle huge components weighing up to 600 tonnes, but this work too has been slower than hoped. A trial convoy originally scheduled for last January has slipped to this coming September.

Under an initial plan, it had once been hoped to achieve the first plasma by the middle of the last decade.

Then, after a redesign, a new deadline of November 2020 was set but that too is now in doubt. Managers say they are doubling shifts to accelerate the pace of construction. It's thought that even a start date during 2021 may be challenging.

The man in charge of coordinating the assembly of the reactor is Ken Blackler.

"We've now started for real," he told me. "Industrial manufacturing is now under way so the timescale is much more certain - many technical challenges have been solved.

"But Iter is incredibly complicated. The pieces are being made all around the world - they'll be shipped here.

"We'll have to orchestrate their arrival and build them step by step so everything will have to arrive in the right order - it's really a critical point."

Command and control

While one major concern is the arrival sequence of major components, another is that the components themselves are of sufficiently high quality for the system to function.

The 28 magnets that will create the field containing the plasma have to be machined to a very demanding level of accuracy. And each part must be structurally sound and then welded together to ensure a totally tight vacuum - without which the plasma cannot be maintained. A single fault or weakness could jeopardise the entire project. 

Assuming Iter does succeed in proving that fusion can produce more power than it consumes, the next step will be for the international partners to follow up with a technology demonstration project - a test-bed for the components and systems needed for a commercial reactor.  

Ironically, the greater the progress, the more apparent becomes the scale of the challenge of devising a fusion reactor that will be ready for market.

At a conference in Belgium last September, I asked a panel of experts when the first commercially-available fusion reactor might generate power for the grid. 

A few said that could happen within 40 years but most said it would take another 50 or even 60 years. The fusion dream has never been worked on so vigorously. But turning it into reality is much more than 30 years away.

 
Fusion still the magic bullet

In the era of global climate change, and concerns about humanity's long-term reliance on fossil fuels, many think the solution lies in alternative sources of energy, including nuclear power. All our nuclear power plants are based on fission: splitting heavy atoms into lighter components in a controlled fashion. Though fission is safe when all goes well, the fuel is radioactive, waste disposal can be problematic, and as the Fukushima disaster showed, there is a high cost to accidents.

Nuclear fusion is in principle cleaner and comes from a cheaper, more abundant fuel source: an isotope of hydrogen called deuterium can be extracted from water and only helium is produced as waste. From The Matrix to SimCity 2000 to political dreamers, fusion has often been seen as an inevitability for society. However, despite decades of work nuclear fusion remains a dream. As the joke goes, fusion is the power of the future – and always will be.

That's not because creating a sustained fusion reaction – in which more energy is produced than is required to start and maintain the process – is impossible or even terribly hard to achieve (at least by high-energy particle physics standards). The most infamous example of a fusion reaction is the hydrogen bomb, which sacrificed control and safety for the sake of violence and death. Fusion reactors obviously need to have more stringent requirements.

Cosmic collider

To see what's needed to create a sustained reaction, let's look to the best-known fusion reactors of all: stars. In the core of a star like the Sun, strong gravitational pressure forces together hydrogen plasma – an equal mixture of protons and electrons. Extreme conditions of 15-million-degree temperatures and high pressures mean that protons have enough energy to overcome their mutual repulsion, allowing the attractive forces to kick in. When protons fuse together they are converted into neutrons and release a lot of energy.

Bluntly stated, we can't recreate those conditions, even if we wanted to. Stars have a sufficiently large mass to contain the hydrogen plasma by the force of gravity alone, but we don't have that option, so physicists have to confine plasma using electromagnetism instead. Researchers can also start with deuterium or tritium plasma instead of hydrogen to lower the energy required to start fusion. (Tritium is a hydrogen isotope consisting of a proton and two neutrons; unlike normal hydrogen and deuterium, it's unstable and therefore harder to keep around.) However, the temperature and pressure still need to be high, so it requires a larger energy input than fusion liberates, which defeats the purpose.

Part of the difficulty lies in the nature of plasma. If you put a normal neutral gas, such as oxygen, in a container you can increase both pressure and temperature by compressing it. Plasma, on the other hand, consists of charged particles at sufficiently high temperatures to melt the container walls. Also, without maintaining conditions carefully, the electrons tend to reunite with protons, creating a neutral hydrogen gas that's useless for fusion; it's imperative that the container trapping the plasma contain no gas, for similar reasons.

Some hope lies in using elements other than hydrogen, though these contain more than one proton per nucleus, which increases the electric repulsion and in some cases makes the energy barrier to fusion even higher. While some fusion reactions involving helium, lithium, or boron are areas of active research, a major problem is that these materials are far rarer on Earth than hydrogen.

Hot doughnuts

All is not lost, however. Researchers are pursuing several possible solutions to the fusion problem, mostly based on clever methods for confining or compressing deuterium. The oldest of these is magnetic confinement, in which strong magnetic fields act as the “walls” of a container.

The best-known incarnation of this is the tokamak, first built in the Soviet Union in the 1950s. In a tokamak, deuterium and tritium are injected into a doughnut- or torus-shaped chamber and heated to the point at which their electrons break free. Magnetic fields running along the centre contain and squeeze the plasma, and the high temperatures within the plasma then facilitate fusion. However, even the best tokamak designs – including the Joint European Torus (JET) in the United Kingdom and the Tokamak Fusion Test Reactor (TFTR) in the United States – haven't broken the barrier of making more energy than is required to keep the plasma hot and trapped. Much hope is being placed on Iter (International Thermonuclear Experimental Reactor), a €15 billion ($22 billion) project designed to build the world's largest tokamak in the south of France. Iter is expected to commence operation at the end of this decade, with the first proper fusion tests scheduled for 2028. But it has been dogged by logistical issues – last month the US Senate spending panel voted to stop contributions to the project.

Many think another method called inertial confinement provides the best hope for a workable fusion reactor. This uses bombardment by high-energy photons – X-rays – to confine and compress a pellet of hydrogen and its isotopes. Successive X-ray pulses emanate from a large number of lasers completely surrounding the pellet, doing the work of heating, ionising, and compressing the hydrogen to the point where it can fuse. The biggest barrier to a working model lies in the X-ray lasers, which require a lot of energy to operate, but researchers at laboratories such as SLAC in the United States and the European X-ray Free Electron Laser are actively working to solve the problem.

The Z Machine at Sandia National Laboratory in the United States is a hybrid between magnetic and inertial confinement. Though the Z Machine itself is not a fusion reactor (and in fact is partly used for developing models for nuclear weapons), the powerful magnetic fields and X-rays it produces are part of a project in which a pellet of hydrogen is repeatedly bombarded with intense pulses of light to compress it.

 There are other methods for confinement and compression, and more will no doubt be developed. Another path is aimed at lowering the energy barrier to deuterium fusion: forming molecules using muons instead of electrons. Muons are the heavier, unstable cousins of electrons. (We all have family members like that, I think.) Their presence in a molecule of two deuterium atoms brings the nuclei much closer together, consequently making fusion much more likely. However, once the energy cost of making muons and creating molecules using them is added in, muon-catalysed fusion is no longer cost-effective.
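
A rough sketch of why the muon helps (my own illustration; the only inputs are the ordinary Bohr radius and the muon-to-electron mass ratio, and reduced-mass corrections are ignored):

    # The size of a hydrogen-like bound state scales inversely with the
    # mass of the orbiting particle. A muon is ~207 times heavier than
    # an electron, so a muonic molecule is roughly 200 times smaller,
    # letting the two deuterons tunnel through the Coulomb barrier far
    # more readily. Back-of-envelope only.
    bohr_radius_nm = 0.0529           # ordinary (electronic) Bohr radius
    muon_to_electron_mass = 206.77

    muonic_radius_nm = bohr_radius_nm / muon_to_electron_mass
    print(f"Electronic scale: {bohr_radius_nm:.4f} nm")
    print(f"Muonic scale:     {muonic_radius_nm:.6f} nm")  # ~0.00026 nm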

 So the question is: will we find a way around any of these problems? As with many things, it depends on human scientific ingenuity, and the practical limits placed upon it. Fusion research is relatively expensive, so investing in it is an exercise in hope. More precisely, it is the hope that once a sustainable fusion reactor is invented, it will repay the investment many times over.

 
Arctic thaw causing concern

Rapid thawing of the Arctic could trigger a catastrophic "economic timebomb" which would cost trillions of dollars and undermine the global financial system, say a group of economists and polar scientists.

Governments and industry have expected the widespread warming of the Arctic region in the past 20 years to be an economic boon, allowing the exploitation of new gas and oilfields and enabling shipping to travel faster between Europe and Asia. But the release of a single giant "pulse" of methane from thawing Arctic permafrost beneath the East Siberian sea "could come with a $60tn [£39tn] global price tag", according to the researchers who have for the first time quantified the effects on the global economy.

Even the slow emission of a much smaller proportion of the vast quantities of methane locked up in the Arctic permafrost and offshore waters could trigger catastrophic climate change and "steep" economic losses, they say.

The Arctic sea ice, which largely melts and reforms each year, is declining at an unprecedented rate. In 2012, it collapsed to under 3.5m sqkm by mid September, just 40% of its usual extent in the 1970s. Because the ice is also losing its thickness, some scientists expect the Arctic ocean to be largely free of summer ice by 2020.

The growing fear is that as the ice retreats, the warming of the sea water will allow offshore permafrost to release ever greater quantities of methane. A giant reservoir of the greenhouse gas, in the form of gas hydrates on the East Siberian Arctic Shelf (ESAS), could be emitted, either slowly over 50 years or catastrophically fast over a shorter time frame, say the researchers.

 The ramifications of vanishing ice will also be felt far from the poles, they say, because the region is pivotal to the functioning of Earth systems, such as oceans and climate. "The imminent disappearance of the summer sea ice in the Arctic will have enormous implications for both the acceleration of climate change, and the release of methane from off-shore waters which are now able to warm up in the summer," said Prof Peter Wadhams, head of the Polar Ocean Physics Group at Cambridge University and one of the authors of the paper published in the journal Nature.

"This massive methane boost will have major implications for global economies and societies. Much of those costs would be borne by developing countries in the form of extreme weather, flooding and impacts on health and agricultural production," he said.

 According to the authors, who used the Stern review to calculate the costs, 80% of the extra impacts by value will occur in the poorer economies of Africa, Asia and South America. "Inundation of low-lying areas, extreme heat stress, droughts and storms are all magnified by the extra methane emissions," the authors write. They argue that global economic bodies have not taken into account the risks of rapid ice melt and that the only economic downside to the warming of the Arctic they have identified so far has been the possible risk of oil spills.

But, they say, economists are missing the big picture. "Neither the World Economic Forum nor the International Monetary Fund currently recognise the economic danger of Arctic change. [They must] pay much more attention to this invisible time-bomb. The impacts of just one [giant "pulse" of methane] approaches the $70-tn value of the world economy in 2012", said Prof Gail Whiteman, at the Rotterdam School of Management and another author.

The Nature report comes as global shipping companies prepare to send a record number of vessels across the north of Russia later in 2013, slashing miles travelled between Asia and Europe by over 35% and cutting costs up to 40%.

According to Russian authorities, 218 ships from Korea, China, Japan, Norway, Germany and elsewhere have so far applied for permission to follow the "Northern sea route" (NSR) this year. This route uses the Bering Strait between Siberia and Alaska and is only open for a few months each year with an icebreaker.

 But following 2012's record collapse of the Arctic sea ice, shipping companies are gaining confidence to use the route. In 2012, only 46 ships sailed its entire length from the Atlantic to Pacific oceans and in 2011 only four. The route can save even a medium-sized bulk carrier 10-15 days and hundreds of tonnes of bunker fuel on a journey between northern Norway and China.

 Satellite data collated by the US National Snow and Ice Data Center in Boulder, Colorado this week showed ice loss now accelerating and, at 8.2m sqkm (3.2m square miles), approaching the same extent as during last year's record melt. Over 130,000 sqkm of sea ice melted between July 1 and 15. "Compared to the 1981 to 2010 average, ice extent on July 15 was 1.06m sqkm (409,000 square miles) below average," said a spokesman.

 
Deep sea mining for energy
As the great research ship Chikyu left Shimizu in January to mine the explosive ice beneath the Philippine Sea, chances are good that not one of the scientists aboard realized they might be closing the door on Winston Churchill’s world. Their lack of knowledge is unsurprising; beyond the ranks of petroleum-industry historians, Churchill’s outsize role in the history of energy is insufficiently appreciated.

Winston Leonard Spencer Churchill was appointed First Lord of the Admiralty in 1911. With characteristic vigor and verve, he set about modernizing the Royal Navy, jewel of the empire. The revamped fleet, he proclaimed, should be fueled with oil, rather than coal—a decision that continues to reverberate in the present. Burning a pound of fuel oil produces about twice as much energy as burning a pound of coal. Because of this greater energy density, oil could push ships faster and farther than coal could.

Churchill’s proposal led to emphatic dispute. The United Kingdom had lots of coal but next to no oil. At the time, the United States produced almost two-thirds of the world’s petroleum; Russia produced another fifth. Both were allies of Great Britain. Nonetheless, Whitehall was uneasy about the prospect of the Navy’s falling under the thumb of foreign entities, even if friendly. The solution, Churchill told Parliament in 1913, was for Britons to become “the owners, or at any rate, the controllers at the source of at least a proportion of the supply of natural oil which we require.” Spurred by the Admiralty, the U.K. soon bought 51 percent of what is now British Petroleum, which had rights to oil “at the source”: Iran (then known as Persia). The concessions’ terms were so unpopular in Iran that they helped spark a revolution. London worked to suppress it. Then, to prevent further disruptions, Britain enmeshed itself ever more deeply in the Middle East, working to install new shahs in Iran and carve Iraq out of the collapsing Ottoman Empire.

Churchill fired the starting gun, but all of the Western powers joined the race to control Middle Eastern oil. Britain clawed past France, Germany, and the Netherlands, only to be overtaken by the United States, which secured oil concessions in Turkey, Iraq, Bahrain, Kuwait, and Saudi Arabia. The struggle created a long-lasting intercontinental snarl of need and resentment. Even as oil-consuming nations intervened in the affairs of oil-producing nations, they seethed at their powerlessness; oil producers exacted huge sums from oil consumers but chafed at having to submit to them. Decades of turmoil—oil shocks in 1973 and 1979, failed programs for “energy independence,” two wars in Iraq—have left unchanged this fundamental, Churchillian dynamic, a toxic mash of anger and dependence that often seems as basic to global relations as the rotation of the sun.

All of this was called into question by the voyage of the Chikyu (“Earth”), a $540 million Japanese deep-sea drilling vessel that looks like a billionaire’s yacht with a 30-story oil derrick screwed into its back. The Chikyu, a floating barrage of superlatives, is the biggest, glitziest, most sophisticated research vessel ever constructed, and surely the only one with a landing pad for a 30-person helicopter. The central derrick houses an enormous floating drill with a six-mile “string” that has let the Chikyu delve deeper beneath the ocean floor than any other ship.

The Chikyu, which first set out in 2005, was initially intended to probe earthquake-generating zones in the planet’s mantle, a subject of obvious interest to seismically unstable Japan. Its present undertaking was, if possible, of even greater importance: trying to develop an energy source that could free not just Japan but much of the world from the dependence on Middle Eastern oil that has bedeviled politicians since Churchill’s day.

In the 1970s, geologists discovered crystalline natural gas—methane hydrate, in the jargon—beneath the seafloor. Stored mostly in broad, shallow layers on continental margins, methane hydrate exists in immense quantities; by some estimates, it is twice as abundant as all other fossil fuels combined. Despite its plenitude, gas hydrate was long subject to petroleum-industry skepticism. These deposits—water molecules laced into frigid cages that trap “guest molecules” of natural gas—are strikingly unlike conventional energy reserves. Ice you can set on fire! Who could take it seriously? But as petroleum prices soared, undersea-drilling technology improved, and geological surveys accumulated, interest rose around the world. The U.S. Department of Energy has been funding a methane-hydrate research program since 1982.

  

Nowhere has the interest been more serious than Japan. Unlike Britain and the United States, the Japanese failed to become “the owners, or at any rate, the controllers” of any significant amount of oil. (Not that Tokyo didn’t try: it bombed Pearl Harbor mainly to prevent the U.S. from blocking its attempted conquest of the oil-rich Dutch East Indies.) Today, Churchill’s nightmare has come true for Japan: it is a military and industrial power almost wholly dependent on foreign energy. It is the world’s third-biggest net importer of crude oil, the second-biggest importer of coal, and the biggest importer of liquefied natural gas. Not once has a Japanese politician expressed happiness at this state of affairs.

Japan’s methane-hydrate program began in 1995. Its scientists quickly focused on the Nankai Trough, about 200 miles southwest of Tokyo, an undersea earthquake zone where two pieces of the Earth’s crust jostle each other. Step by step, year by year, a state-owned enterprise now called the Japan Oil, Gas, and Metals National Corporation (JOGMEC) dug test wells, made measurements, and obtained samples of the hydrate deposits: 130-foot layers of sand and silt, loosely held together by methane-rich ice. The work was careful, slow, orderly, painstakingly analytical—the kind of process that seems intended to snuff out excited newspaper headlines. But it progressed with the same remorselessness that in the 1960s and ’70s had transformed offshore oil wells from Waterworld-style exoticisms to mainstays of the world economy.

In January, 18 years after the Japanese program began, the Chikyu left the Port of Shimizu, midway up the main island’s eastern coastline, to begin a “production” test—an attempt to harvest usefully large volumes of gas, rather than laboratory samples. Many questions remained to be answered, the project director, Koji Yamamoto, told me before the launch. JOGMEC hadn’t figured out the best way to mine hydrate, or how to ship the resultant natural gas to shore. Costs needed to be brought down. “It will not be ready for 10 years,” Yamamoto said. “But I believe it will be ready.” What would happen then, he allowed, would be “interesting.”

Already the petroleum industry has been convulsed by hydraulic fracturing, or “fracking”—a technique for shooting water mixed with sand and chemicals into rock, splitting it open, and releasing previously inaccessible oil, referred to as “tight oil.” Still more important, fracking releases natural gas, which, when yielded from shale, is known as shale gas. (Petroleum is a grab-bag term for all nonsolid hydrocarbon resources—oil of various types, natural gas, propane, oil precursors, and so on—that companies draw from beneath the Earth’s surface. The stuff that catches fire around stove burners is known by a more precise term, natural gas, referring to methane, a colorless, odorless gas that has the same chemical makeup no matter what the source—ordinary petroleum wells, shale beds, or methane hydrate.) Fracking has been attacked as an environmental menace to underground water supplies, and may eventually be greatly restricted. But it has also unleashed so much petroleum in North America that the International Energy Agency, a Paris-based consortium of energy-consuming nations, predicted in November that by 2035, the United States will become “all but self-sufficient in net terms.” If the Chikyu researchers are successful, methane hydrate could have similar effects in Japan. And not just in Japan: China, India, Korea, Taiwan, and Norway are looking to unlock these crystal cages, as are Canada and the United States.

Not everyone thinks JOGMEC will succeed. But methane hydrate is being developed in much the same methodical way that shale gas was developed before it, except by a bigger, more international group of researchers. Shale gas, too, was subject to skepticism wide and loud. The egg on naysayers’ faces suggests that it would be foolish to ignore the prospects for methane hydrate—and more foolish still not to consider the potential consequences.

If methane hydrate allows much of the world to switch from oil to gas, the conversion would undermine governments that depend on oil revenues, especially petro-autocracies like Russia, Iran, Venezuela, Iraq, Kuwait, and Saudi Arabia. Unless oil states are exceptionally well run, a gush of petroleum revenues can actually weaken their economies by crowding out other business. Worse, most oil nations are so corrupt that social scientists argue over whether there is an inherent bond—a “resource curse”—between big petroleum deposits and political malfeasance. It seems safe to say that few Americans would be upset if a plunge in demand eliminated these countries’ hold over the U.S. economy. But those same people might not relish the global instability—a belt of financial and political turmoil from Venezuela to Turkmenistan—that their collapse could well unleash.

 