TTIP: A Much Needed Platform for Renewed Prosperity or A Threat to Sovereignty and Democracy?

What is TTIP?

TTIP stands for the Transatlantic Trade and Investment Partnership, and represents the potential free trade agreement currently being negotiated between the United States (US) and the European Union (EU). An independent study from the Centre for Economic Policy Research has estimated that the deal could boost the EU economy by up to £85 billion annually and the US economy could gain £68 billion. In combination, growth could increase by 0.5% through the agreement alone, which at face value should be most welcome at a time when both economies are reliant on central bank support and new sources of growth are much needed.

TTIP’s core purpose is to build further on the net benefits achieved by various free trade agreements over the years, responding positively to the threats and opportunities presented by globalization, one of the most significant macro-trends to have shaped the world economy and the world’s people over the last fifty-plus years. TTIP is complex in nature, but its primary purpose is to promote economic growth and job creation in the US and EU by cutting trade tariffs, reducing red-tape barriers, and expanding market access on both sides of the Atlantic.

TTIP is ambitious in its scope and will have a significant impact on many EU and US industries, such as energy, finance and environmental services, as well as auto manufacturing and the food and drink industry. If agreed in broadly its current form, TTIP will provide EU and US firms with access to markets made up of some 820 million consumers, representing a quarter of all global trade and over 60% of global GDP.

Governments appear strongly supportive of TTIP to date

The Obama administration states that the passing of TTIP will increase foreign direct investment, expand market access for US goods and services, cut regulation, promote greater transparency, enhance global co-operation and tackle non-tariff barriers that impede free trade. The combined impact of these factors will act as a positive economic stimulus for growth.

The European Commission, the EU executive, similarly believes that this free trade agreement with the US is essential to kick-starting the EU back to growth. In terms of trade, it also believes the deal will create links to emerging economies and markets, helping the EU to influence, and respond more positively to, global affairs. EU officials have stated that they hope to achieve three key objectives through the negotiations. Firstly, they are seeking to open up the EU economy, the largest single market in the world, to US firms; secondly, they aim to cut red tape and other regulatory burdens for firms seeking to export; and finally, they plan to set new, more accommodating trade rules to make it easier to export, import and invest overseas.

Member states within the EU, such as the UK, argue that through this agreement the cost of meeting different regulations and standards will be reduced, creating a far more integrated transatlantic marketplace. This, in turn, will create real benefits for consumers through cheaper goods and increased job growth and opportunities, while helping achieve higher levels of health and safety at work. Furthermore, the UK Government believes that the implementation of TTIP could add £10 billion annually to the UK economy.

How is the deal being negotiated?

Despite TTIP being at the top of the US and EU administrations’ trade agendas, both proponents and sceptics have extensively criticised the process by which the agreement is being negotiated. Currently, EU Trade Commissioner Cecilia Malmström, who represents all EU member states, heads the trade talks as the negotiations rotate between Brussels and Washington. In parallel, the European Commission consults with member states on the negotiation process through a trade policy committee.

Many argue that, as a consequence, the European public is being deliberately excluded from the TTIP negotiations, as unelected officials update national governments, which represent the democratic will of the people, on the progress of negotiations only after the event. In the eyes of many Eurosceptics, this provides a further illustration of one of the EU’s greatest problems – the so-called democratic deficit – between the Commission and the peoples of Europe.

In the US, TTIP has also created major concerns. Earlier this month President Obama was defeated by his own Democratic colleagues in Congress, who voted against legislation that would give him fast-track authority to negotiate trade deals such as TTIP, as well as the Trans-Pacific Partnership (TPP), another potential trade deal with 11 Asia-Pacific countries. The Democrats have long been concerned about the impact of globalisation on the jobs and incomes of Middle America, and they fear deals like TPP and TTIP will make this worse by removing protections for American workers, creating a race to the bottom on wages as Asian and EU firms with new access to US markets place profits before people.

With concerns on both sides, TTIP, according to former EU Trade Commissioner Karel De Gucht, may not be agreed by the proposed deadline in 2015, and could face further delay due to the 2016 US Presidential Election. Accordingly, there is a real possibility that TTIP may not be implemented by the time President Obama leaves the White House at the end of his term.

Not all is plain sailing in the EU either – the arguments against TTIP

Since the inception of the TTIP negotiations in 2013, talks have made progress, but have faced considerable dither and delay due to divided opinions on the deal.

Many in EU nations argue that it represents a threat to national sovereignty and democracy, as the agreement would allow multinational corporations to sue national governments perceived to be acting out of line with the agreement and adversely impacting business performance and corporate profit potential. This is because of the investor-state dispute settlement mechanism proposed within TTIP. Critics view this mechanism as a significant threat to democracy, as it may compromise the democratic legitimacy of governments to act in the national interest of their citizens.


Image credit: European Union 2015, via the Daily Mirror

(MEPs demonstrate against TTIP within the European Parliament)

In the UK, the centre of TTIP opposition is its potential effects on public services and institutions such as the NHS. Opponents believe that it is a recipe for back-door NHS privatisation. These claims are made despite repeated statements from government officials stating categorically that NHS services will continue to be managed by local NHS commissioners and that local democracy will continue to drive all major decisions on public policy.

With regard to other proposed measures, many environmental campaigners argue that TTIP would reduce food and environmental standards, as EU regulations would be brought into line with less strict US laws. These opponents argue that TTIP would weaken European regulation of environmental safeguards in areas such as growth hormones, cosmetics, chemicals and genetically modified crops.

What next for TTIP?

EU and US officials are still negotiating; the ninth round of talks took place in New York in April. Once the negotiations are finalised and an agreement is reached, it will be sent to the executives and legislatures of the EU and US for approval.

Within the EU, in order for TTIP to be implemented in full, the deal must be approved by the European Council and ratified by the European Parliament. The TTIP agreement might also need to be passed by the 28 National Parliaments of EU Member states for it to proceed. Across the Atlantic, Congress must also approve the deal for it to be enacted. So despite negotiations progressing, TTIP has many hurdles to face before it can be implemented in full.

Striking the Balance on TTIP to date

In summary, TTIP offers the potential of becoming the largest free trade deal the world has ever seen, and given the fragile state of the US and EU economies, it can be viewed as a strongly positive force for the global economy, as a much needed stimulant to growth and job creation.

TTIP might also result in nation-states sacrificing elements of their sovereignty, as the deal enables the forces of globalization to ratchet up a further gear. The deal puts into question the role of multinational corporations as political actors, and at the industry level creates the potential for ‘win-lose scenarios’ that come with the promotion of free market economics and the harmonisation of common product and service standards.

Despite these real concerns, TTIP overall supports economic growth, will boost job creation, deepen integration and lay the foundation for a competitive environment that will benefit US and EU households and consumers economically and socially. With much subject to ongoing negotiation, the devil is in the detail, but on balance, at the level of principle, TTIP offers the potential of a deal that can contribute materially to renewed transatlantic prosperity, as nations continue to search for a stronger economic recovery in the post-crash world.

Author Biography

Christopher Bowerin is currently an undergraduate studying Politics and Business Management at Oxford Brookes University. Christopher has a strong interest in European and American politics, Middle Eastern Affairs, international conflicts and post-war reconstruction.

Twitter: @KBowerin

*Cover image ‘TTIP_13-06-18_13‘ by Campact (Photo from Jakob Huber)

The Great War. Part II

The second stage: 1871 – 1890

Sedan, 1870. The guns are silent now; the soldiers can rest. And they can smile, because their nation rises victorious after the battle. And then they see him: the architect of the victory, the creator of the diplomatic moves that led to the wars he won, the man who the following year would materialize the dreams of many and create a new nation that would be one of the main protagonists of the 19th and 20th centuries – a nation that even today remains among the most relevant in the world. They see him on a horse, escorting a carriage that transports a very important prisoner: Napoleon III. From that very day, a new nation would emerge, and its impact would be of such an unprecedented scale that it would become one of the most important nations of the era.

That very same day also marked the beginning of the second stage on the path towards the First World War, setting the course of the defeated nation and its hunger for revenge, while multiplying the challenges facing the victorious nation. The two decades following that battle were dominated by a key figure who not only brought Prussia a decisive victory over France, but set the pace and shaped the diplomacy that, after his dismissal in 1890, would eventually accelerate the approach of the Great War: Otto von Bismarck.

The rise of Prussia and the Birth of Germany

The appearance of a new power such as Prussia had a huge impact on the other great powers, but it did not come from nowhere. Its appearance on the world stage shattered the status quo that the Austrian Empire was so keen to keep – the same status quo that even Prussia itself had sought to preserve. For both Prussia and Austria, the issue was the impending German unification and under whose leadership it should take place: Berlin’s or Vienna’s. These tensions finally led to the German War; the clashes over the administration of Schleswig-Holstein were the immediate spark, and the war brought Prussia its first victory against a former ally and an important local foe.

This war was just the first milestone of German unification and the country’s eventual rise as a Great Power. As Kennedy (2004) points out, it was a victory that no one expected given the relatively weak position of Prussia and the small size of its army. This surprise was possible because of two main factors: the diplomatic abilities of Otto von Bismarck, which ensured that no other power would intervene during the German War, and the fact that the already weakened Austrian army was concentrated more on asserting its interests in Italy than over the small German states[i]. Moreover, the Austrian concentration on Italian matters was, again, a result of Bismarck’s diplomacy, after he pushed Italy into a limited war against Austria as a strategic diversion. As a result, Prussia incorporated territories such as Schleswig-Holstein, Hannover, Hesse-Kassel, Nassau and Frankfurt, while Italy in turn received Venetia[ii].

The second milestone was the Prussian–French War, which led to the definitive rise of Germany and which grew out of the tensions between Prussia and France that followed the annexation of the aforementioned territories. As Kennedy (2004) remarks, it was the fight for supremacy in the West. This war saw the defeat of France as a result of its overconfidence in its military strength and a mistaken belief in a possible Austrian and Italian intervention in its favour, as well as in its increased naval power. But yet again, Bismarck’s diplomacy – rallying allies among the German states while securing promises of non-intervention from the other great powers – played in favour of Prussia and against France.

The other factor behind the Prussian victory was the military preparedness of Prussia which marked the country as one of the first to begin a military revolution and take advantage of the new technologies brought by the industrial revolution[iii]. That same preparedness would be feared and be a cause of concern primarily for France, and later on for the British Empire, in the years prior to the war. Along with that preparedness came the fact that Prussia was economically strong and thus able to sustain a powerful army, and its population had a generally high level of education and preparation in comparison to other nations (Kennedy, 2004).

This means that, despite the efforts of the previous Kanzler to maintain the status quo along with Austria and Russia, the matter of German unification and the fact that Prussia was strong (despite being the least important of the Great Powers after the Napoleonic Wars) meant that the status quo was doomed to be altered – by a rise that now seems inevitable but was unexpected at the time.

An altered equilibrium

Does this mean, then, that the rise of Prussia and the unification of Germany were the ultimate cause of the outbreak of the First World War 42 years later? The answer might seem obvious to some, but in reality it isn’t. As stated in the first part of this series, the multipolar structure of the international system and the eagerness to preserve the equilibrium after the Napoleonic Wars paved, if not shaped, the long path towards the tragedy of 1914–1918, and each and every Great Power – by its action, ingenuity, or even inaction, including the failure to study the American Civil War – shares its part of the blame[iv].

Indeed, the rise of Prussia/Germany caused a great trauma to the equilibrium of power and left France without the provinces of Alsace and Lorraine, paying war reparations to the Germans (the idea was to leave France weakened), and with a thirst for revenge[v]. Russia was also concerned, resenting the rise of a stronger Germany in place of the weak and ever-cooperative Prussia, but it nevertheless approached Germany to obtain support for its interests in Central Asia and the Balkans. Meanwhile Italy – also recently unified – built close ties with Germany as well as Austria (Kennedy, 2004).

Italy’s own emergence, in turn, concerned the French, sparking a strong rivalry between the two for control of the Mediterranean Sea.

The British Empire expressed little concern about the German unification and rise until the 1890s, as it was facing a naval race and a colonial contest mainly with France, along with clashes with Russia in Central Asia. Germany, in turn, found itself an even stronger power with considerable influence, backed by its industrial and economic strength and by its position at the centre of European diplomacy. But that same strength was creating concerns for others and challenges for the new Great Power itself; challenges that needed to be tackled (Kennedy, 2004).

However, Germany was not alone in affecting the equilibrium of Europe. It was Austria that really helped accelerate the path towards the Great War. After its catastrophic wars with Germany and Italy, and after losing territories to the latter, the Empire focused on the Balkans, annexing Bosnia and Herzegovina and infuriating both Russia and the Ottoman Empire. It was Russia, the strongest contender and the main cause of concern for Vienna, that came closest to provoking a war in 1879, until Bismarck came to the rescue. Ultimately, despite the avoidance of war, the annexation simply worsened the internal problems that Austria had already been facing for many years (Segesser, 2013).

Bismarck: Realpolitik and Gleichgewicht

Bismarck was conscious of the impact of the victory in the German War and the subsequent unification and rise of Germany, and of the concerns these raised. In response, he set up diplomatic and strategic schemes to reduce the impact on the international system, keep the peace and even preserve the new equilibrium: promising that Germany would make no further expansions, keeping the decadent yet useful Austria alive, avoiding a war against France in 1875, and aiding the resolution of conflicts in the Balkans that would otherwise have sparked another war (Kennedy, 2004)[vi]. But Bismarck also intended to keep Germany’s new political position intact. To prevent a French war of revenge, he created diplomatic and strategic schemes that were essentially a set of alliances, such as the Triple Alliance (Germany, Italy and Austria), designed to deny France support from Italy or Austria should it seek either as an ally (Kennedy, 2004).

The League of the Three Emperors (Germany, Austria and Russia) was also created, and in turn, by 1879 a secret defensive alliance between Austria and Germany against Russia had been formed, along with a reinsurance treaty with Russia to ensure its neutrality in case Germany or Russia were at war with a third party (Kennedy, 2004; Morgenthau, 2006). Besides securing Germany and its new position, German foreign policy had another aim and tool: the deliberate isolation of France, so that the latter could not find a potential ally with which to wage a war of revenge against Germany (Segesser, 2013).

As a result, through these alliances Germany not only secured itself against France and consolidated its own position within the international system, but also steered Russia and Austria away from supporting France and succeeded in being perceived as a country that promoted peace – to such an extent that, during the Anglo-Russian clashes, German neutrality allowed Bismarck to draw attention away from Franco-German tensions.

France’s main countermeasures were to invest capital in Russia, make some efforts to modernise the Russian army, and invest in Italy in order to draw it away from German influence (Kennedy, 2004). This counter-diplomacy contributed to accelerating the pace towards the Great War by shaping a later coalition against Germany – an outcome aided by the clumsy policies adopted by Kaiser Wilhelm II after Bismarck’s dismissal, which ended the isolation of France and allowed it to find in powers like the British Empire useful allies against Germany.

Colonialism

The decade of the 1880s also saw a surge of colonisation, primarily in Africa, leading to competitions and races – not to mention clashes – between France and the British Empire, in which the Suez Canal was the main object of dispute, along with Egypt, West Africa and the Congo. This went along with a naval race, while France was also in tension with Italy and preparing for revenge on Germany (a fruitless effort at the time). This colonial contest and naval race frightened the British Empire, as it not only affected its strategic strength but also created a general atmosphere of potential conflict between England and France, perhaps repeating the same sets of alliances and belligerents of the wars previously waged by each Great Power (Kennedy, 2004).

In the short term this situation was defused by the Berlin Conference of 1884–85, in which Germany again played the great balancer, contributing to the definition of African colonial borders, trade and navigation – helped by Germany’s apparent lack of interest in colonies of its own, despite actually acquiring some (Kennedy, 2004). The long-term problem with the African issue is that Germany was later not satisfied with the division, and after 1890 – the next and last stage of this study – unleashed its own challenge to the order, and to the British Royal Navy. Italy would do the same even after the First World War. Still, and paradoxically, a conference that was intended as a mechanism to solve disputes and set a new status quo in Africa was later used by a later, less brilliant German government as an alibi to contest the division of the continent.

The lessons (?) of the Prussian–French War

This part might sound positive, but it wasn’t at all, simply because every European army, following the defeat of France by Prussia, wanted to structure its future military operations around victories à la Prusse: the use of railroads for deployment and supply; large-scale mobilisation of troops; and the creation of their own General Staffs, preparing fast, short-term operations (or wars) to be executed by troops equipped with rapid-firing rifles and drawn from massive, short-service armies – all in the hope of gaining victory within a few weeks.

The problems here are twofold: first, the dynamics of the American Civil War were not studied, despite the fact that many of the elements named above were already present there; and second, there was little understanding of the nature of the new weaponry, which favoured troops fighting on the defensive rather than the offensive (Kennedy, 2004).

If that explanation is not evident enough, just remember that 100 years ago, when the declarations of war began to cascade, all of the General Staffs believed that victory could be achieved within a few weeks – or at least by Christmas – by troops with rapid-firing rifles in hand. Moreover, all of the plans made in the final stage (1890–1914) had that assumption at their core; to put it another way, a victory à la Prusse was the soul of every plan and the undeclared objective of every army. But the First World War would teach a terrible lesson with the lives of many soldiers who had expected to be back home, victorious, by Christmas 1914, but would instead find death at the hands of those new weapons that favoured defence over offence, with the trenches as their mortal companion. To some extent, then, the victory of Prussia/Germany in 1870 not only altered the international equilibrium and opened the way for a crisis waiting to explode, but also set an unfortunate mindset in every army as a consequence of its brilliance and impact.

Meanwhile…

Aside from France, Austria was the power most affected by the German rise. It won a new ally by securing an alliance with Germany, having sought German assistance in the event that it were attacked by Serbia as a result of its ethnic problems, thus shaping one side of a war that was by now almost an inevitability. Yet despite this alliance, Austria was the least important power, suffering the sad fate of mattering only because of its position on the map, acting as a counterweight of convenience.

Elsewhere, Japan was under the Meiji restoration and despite being on the rise was yet to play a significant part during this period and focused instead on asserting its domestic interests. Similarly, the United States was simply increasing its economic power, while Russia was witnessing a gradual worsening of its inner problems (Kennedy, 2004; Segesser, 2013).

Towards the last stage (1890 – 1914)

It is clear that the efforts made by Bismarck to secure an equilibrium were initially fruitful, but they could not extinguish the French desire for revenge, and to some extent they paved the way towards war. His diplomacy preserved the equilibrium and, by unifying Germany, contributed to the change in the international structure while softening the impact of that change. But that style of diplomacy, and the institutions and alliances he created, were useful and harmless only in hands as skilled as his. The war he fought against France marked a brilliant milestone but had the unintended and unfortunate effect of reshaping the military mindset of the generals of practically every army. The problem is that a successful strategy can be used, at most, twice – something that by 1914 not every commander fully understood, and as a result many would pay with their lives, especially since the weapons of 1870 were hardly akin to those of 1914. It simply stimulated an unrealistic strategic approach.

Overall, Germany, with Bismarck at its head, was able to keep the equilibrium and maintain the peace for a good amount of time – until 1890, when he was dismissed by Kaiser Wilhelm II. The mechanisms and policies he had implemented then fell into less skilled hands, and that dismissal, rather than Bismarck’s policies or German preparedness for the 1870 war, is an important factor in explaining what really sparked the Great War. The fact that Austria decided to intervene in the Balkans as a way to compensate for its defeat by Prussia simply worsened the overall situation and created a condition that, in the long run, was impossible for less prepared decision-makers in charge of German foreign policy (Wilhelm II, for instance) to tackle. Last but not least, France was preparing for revenge on Prussia and the recovery of Alsace and Lorraine; it therefore also shares a large part of the responsibility due to this political obsession.

If a sentence of Otto von Bismarck could sum up the times to come after his dismissal, it is certainly this: “Jena came twenty years after the death of Frederick the Great; the crash will come twenty years after my departure if things go on like this”. And the crash came in a very tragic way – a crash that not only cost a certain crown, but shattered the lives of many and irreversibly changed the course of world history, politics and the future of warfare.

Sources:

Encyclopaedia Britannica (2014). Seven Weeks’ War. Retrieved from http://www.britannica.com/EBchecked/topic/536531/Seven-Weeks-War on 26.07.2014

Kennedy, P. (2004). Auge y caida de las grandes potencias [The Rise and the Fall of the Great Powers, Ferrer Aleu, trans.]. Barcelona, Spain: Mondadori (Original work published in 1987)

Morgenthau, H. J. (2006). Politics Among Nations: The Struggle for Power and Peace (Revised by Thompson, K. W., & Clinton, D. W., 7th edition). New York: McGraw Hill.

Segesser, D. M (2013). Der Erste Weltkrieg in globaler Perspektive. Stuttgart, Deutschland: Marixverlag.

[i] It also helped the Prussian cause that Russia was very weak and still recovering from its defeat in Crimea, and that Great Britain was not much interested in interfering in continental issues, precisely as a consequence of that same war and of its own policy of non-interference.

[ii] See: Encyclopaedia Britannica (2014). Seven Weeks’ War. Retrieved from http://www.britannica.com/EBchecked/topic/536531/Seven-Weeks-War on 26.07.2014

[iii] Kennedy (2004) described this preparedness as a “Military Revolution” that implied investment not only in equipment but also in the quality of the army: introducing short service times and a reserve, increasing (and taking advantage of) the high level of education among troops and officers, and introducing the General Staff system – which, in turn, drew up operational plans prior to any contemplated conflict, implemented manoeuvres, studied military history, supervised the railroads for the sake of fast and effective deployment of troops and supplies, granted units operational independence, and stimulated initiative and openness to self-learning.

[iv] That is why it is important to look, with greater historical and retrospective maturity, at the causes of the First World War over the longer term, assessing the factors that appeared in the three proposed periods of study and the responsibility each Great Power bears for the war.

[v] That thirst set the course of most French foreign policy for the rest of the century until the Great War, and such attitudes explain not only the patriotism during the war but also the French insistence on a harsh punishment for Germany in 1919. See also: Segesser, 2013, pp. 29–30.

[vi] This mediation led to a compromise that Russia had to respect, which unleashed nationalism in a Russia resentful of Germany because of that same compromise forged by Berlin. The conflict in question is the Russo-Turkish War of 1877–1878.

*Cover image ‘17th Battery C.F.A. firing a German 4.2 on the retreating Boche. Photograph taken during Battle of Vimy Ridge’ by Library and Archives Canada

Holocaust Denial as a Legal Issue in Europe

Emilie Mendes de Leon reflects on Holocaust denial as racist speech in Europe and explains the special place it holds as a legal concept in many member states of the EU.

In the past 70 years, the term genocide has become cemented in popular and legal discourse. Its origins lie in the aftermath of the Second World War and promises of “never again”, even though as Rwanda, Yugoslavia, and Cambodia have shown us, genocide has happened “again and again”. You will find few who disagree that denying or trivializing such atrocities is inappropriate and criminal. Yet in Europe, the Holocaust, the very reason for the creation of the term genocide as well as numerous cultural and political institutions mandated to prevent such violence, is distinct from other genocides in law. Holocaust denial is different because it has become a part of anti-Semitic discourse in Europe while the denial of other genocides has not. Therefore, punishment specifically reflects the particularity of Holocaust denial as an expression of racism in Europe.

In the late 1980s and early 1990s, acts such as French politician Jean-Marie Le Pen decrying the gas chambers as “a detail of history” on public radio and the popularisation of books by Holocaust deniers such as David Irving and Arthur Butz gave way to legislation prohibiting denial of the Holocaust in the 1990s. The French legislature passed the Loi Gayssot in 1990, Austria passed similar legislation in 1992, and Germany did the same in 1994. Since then, nine other Member States have introduced provisions explicitly prohibiting denial of the Holocaust or acts by the National Socialist Regime in Germany.

However, even in the absence of specific laws banning Holocaust denial, other national courts and the European Court of Human Rights have also emphasized the particularity of the Holocaust. Though framed by each country’s particular experience, cases in Belgium and Germany demonstrate that the national courts generally treat the Holocaust differently. In a 1980 ruling, before German legislation explicitly prohibited Holocaust denial, the Bundesgerichtshof argued that Holocaust denial was an “assault to the dignity of every Jew in Germany” considering their “special relationship vis-à-vis their fellow citizens” to which all others have “a special moral responsibility.” In the Belgian case, Verbeke et Delbouille, the court ruled that Holocaust denial threatens stable democracy and insults the Jewish people. In the Garaudy case, the European Court of Human Rights (ECtHR) called Holocaust denial “one of the most serious forms of racial defamation of Jews and of incitement to hatred against them.”

Although other genocides, notably the Armenian genocide carried out by the Ottomans in 1915, have been recognized by many states for decades, only recently have there been attempts to criminalize denial of specific genocides other than the Holocaust. Switzerland, France, Slovakia, and Greece have passed laws explicitly banning denial of the Armenian genocide, but some of these laws have faced legal challenges. The French Conseil Constitutionnel struck down the French law in 2012 shortly after it was passed, and in a challenge to the Swiss law, the European Court of Human Rights found that denying the Armenian genocide was not an abuse of the right to freedom of expression in the way that Holocaust denial was. The Court reasoned that calling the Armenian genocide an “international lie” was not of a nature to incite hatred towards Armenians, nor was it contrary to the spirit of the European Convention on Human Rights.

The European Union includes denial crimes in a Framework Decision (legislation that addressed criminal justice matters before the Lisbon Treaty) on racism and xenophobia. The Framework Decision requires Member States to punish acts “publicly condoning, denying or grossly trivialising crimes of genocide, crimes against humanity, and war crimes as defined… in the Statute of the International Criminal Court” and, in a separate clause, requires Member States to punish “condoning, denying or grossly trivialising” the Holocaust. Here Holocaust denial is presented separately from other crimes of genocide, and 12 Member States have implemented laws explicitly addressing Holocaust denial.

It is worth noting that the Spanish Tribunal Constitucional has found that “mere” denial of the Holocaust is not necessarily an affront to dignity. In Spain, Holocaust denial must be accompanied by contempt or incitement to hatred in order to be punishable within the meaning of the criminal code. Qualifiers in the text of the Framework Decision do allow Member States to limit punishment of genocide denial to cases where it is likely “to incite to violence or hatred,” “likely to disturb public order,” or “threatening, abusive or insulting.” However, even in other states which take a broad interpretation of freedom of expression, such as the UK and Denmark, the courts have treated Holocaust denial as hate speech. The European Court of Human Rights agrees with Spain when it comes to other genocides, but the Garaudy ruling makes clear that Holocaust denial ipso facto is an incitement to hatred. In this regard, the Tribunal Constitucional stands alone.

This special place of the Holocaust in the history of several Member States and of the European integration project, together with the rulings of the European Court of Human Rights, justifies the acceptance of Holocaust denial as a racist crime in Europe. Denial of other genocides, while offensive, could still be effectively prosecuted under the hate speech laws of Member States if it were threatening or incited violence. In most states, however, denying other genocides does not constitute a racist crime ipso facto, because such denial or minimization has not emerged as an expression of racism in experience or practice. Holocaust denial, by contrast, has been accepted as a racist crime in the legislation of many Member States, in the national courts, and in the European Court of Human Rights. Consequently, it alone appears as such in law.

Cover and In-text Image: ‘flowers’ courtesy of Paolo Dallorso, released via flickr.com under Creative Commons 2.0. No changes were made.

The Iran Nuclear Deal: Diplomatic Triumph or Ticking Time Bomb?

The Joint Comprehensive Plan of Action (JCPOA) agreed to in Vienna on July 14th 2015 by the P5+1 countries and Iran marks the end of more than three decades of animosity between Iran and the United States. Tehran and other global capitals celebrate this historic landmark, which heralds diplomacy over warmongering. The international community waited two long years for an agreement to be reached: negotiations began in September 2013, when a substantive meeting was held between Iran and the P5+1 countries – the first of its kind since 1979.

Whilst Tehran views the deal as a victory for Iran, the celebrations are somewhat premature, as Israel and some Gulf States condemn the agreement. Furthermore, Iranian hardliners remain sceptical, and this sentiment is mirrored within the American Congress too. Even as Iran re-joins the international stage and begins to shed its pariah status, the Iran nuclear deal is unfinished business. This article aims to delve deeper into the multi-faceted nature of the signed deal, looking mainly at the reasons for and implications of the agreement.

Terms of the JCPOA

A thorough outline of the JCPOA’s terms is beyond the scope of this article; instead, two important parameters can be drawn from the extensive agreement: strict limits will be placed on Iran’s path to nuclear weapons for a decade, in return for relief from international sanctions.

Two fissile materials can go into the making of a bomb: uranium or plutonium. Centrifuges are machines used to enrich uranium, whether for a nuclear reactor or for a nuclear bomb, and Iran’s stock of some 19,000 centrifuges raises serious concerns over nuclear weapons proliferation. Iran’s opacity over the last two decades regarding its nuclear programme has led the international community to question the country’s intentions, and there have been several determined attempts to curb its nuclear capabilities. Since 2002, a succession of EU and UN sanctions has targeted Iran’s key energy and financial sectors; as a result, Iran’s economy has been severely paralysed over the last decade and a half. Key points of the deal require Iran to dismantle much of its nuclear infrastructure and to cut its current capacity of 19,000 centrifuges by more than two-thirds in order to re-enter the international community.

The Iran nuclear deal is indeed a victory for the Obama administration and President Rouhani; however, it is not without complication. Economically, it is a done deal – one that could set Iran on the path of economic recovery, opening trade relations with the rest of the international community. Politically, however, the story is more complex.

A glance at the past: US-Iran Relations

Political elites and media commentators have all expressed opinions on the Iran nuclear negotiations and the July 14th deal, but commentary and analysis began as the severity of Iran’s nuclear programme and its secret plants came to the fore in 2002. That same year, President Bush branded Iran part of the ‘Axis of Evil’ – a metaphor aligning the nation with the Axis powers of the 1930s, which were evil and therefore had to be dealt with. Such a declaration entrenched an unfortunate perception shaping the way the West views Iran, and vice versa.

Neo-conservatism within the Bush administration encouraged a rather confrontational attitude towards Iran. The Obama administration, on the other hand, appeared to drift away from such a stance, although neo-conservatism still very much exists in the corridors of power – persistently so in the aftermath of the Vienna deal.

Over a decade later, diplomatic dialogue has taken precedence over those who oppose such a deal. Some may argue that neo-conservative readings of US foreign policy towards the Middle East are full of myths in need of debunking, but that debate is beyond the purpose of this article. It is nonetheless important to mention neo-conservatism, as it helps explain why the international community waited so long for negotiations to come to the table in the first place – a delay that may be attributed to neo-conservatism within US foreign policy, and within Iran’s too.

Anyone with an opinion on the Iran nuclear talks knows that the only thing binding Iran to this deal is a huge ‘if’ – an ‘if’ that assumes Iranian responsibility and commitment to uphold the parameters of the deal. Iran’s past pursuit of nuclear capability points to a lack of commitment to upholding any such deal. Distrust therefore lingers over the JCPOA and, as President Obama himself states, the deal is built on verification, not trust. The deal does not seem to have turned Iran and the US into long-lost friends, but it has opened up an avenue for dialogue.

Not a done deal: Opposing Forces

The advantages of the nuclear deal are muted, especially in the face of quite powerful opposing forces. Israel, the Republicans, and hardliners within Iran all condemn the deal, believing that Iran has gained far more than it bargained for – paving the way for Iranian hegemony in the Middle East and consequently endangering the West and, most acutely, Israel. This could all be paranoia. But one must at least admit that, despite a diplomatic win, it is not so easy to forget the enemy. And Iran is certainly considered an enemy by many.

What is Israel’s issue?

Backing the Iran nuclear deal will result in a vulnerable Israel. At least, that is many Israelis’ attitude towards the deal. Whether this is plausible remains a debate of greater length, but there is historical, emotional and psychological precedent for continued Israeli opposition. In a post-Holocaust world, a nuclear Iran (even one weakened by the JCPOA) poses a colossal threat to Israel. As the deal alleviates international sanctions, the extent of Iran’s consequent economic recovery is unknown. Iranian support for Lebanon’s Hezbollah, a group much like Israel’s own rival Hamas, troubles Netanyahu. Whilst Israel does not share a border with Iran, it does share borders with Syria and Lebanon – both countries home to a proliferation of violent non-state actors. Hezbollah upholds an ideology that calls for the annihilation of a Jewish state in the Middle East, and the group is widely argued to be a proxy of Tehran, which has provided it with arms. It is thus not difficult to put two and two together. Iran’s nuclear programme may have been weakened by this deal, but Iran’s past actions suggest that the country has the potential to cheat. If the deal is not honoured by Iran, Israel is in deep danger – and so is the rest of the world.

A Divided Congress

Republican opposition to the Iran nuclear deal is bizarre. One can understand Israel’s opposition, and even the condemnation of some Gulf States (Saudi Arabia in particular), but why the Republicans are so adamantly against the deal is rather a mystery. Or perhaps not – the only way to explain it may be the Republicans’ ideological makeup; after all, the attitude towards the ‘evil’ Iran has not altered since Bush coined the phrase. Iran is still a bad actor, and the Republicans are determined to undermine the deal. The Republicans call the deal appeasement; their conservative makeup refuses to see it as a peaceful alternative – and the other alternative is war. Perhaps the Republicans would rather go to war than diplomatically come to a peaceful resolution?

Republican opposition to the Iran deal is the result of several interwoven factors. As described above, ideology plays a big part. Secondly, Israel has a lot to do with it: pro-Israel lobby groups exist in abundance and often influence Congress, and their influence will certainly not go amiss as they battle to kill the deal. Whilst the deal is under its 60-day congressional review, Congress can vote to disapprove it, but undoing the deal over the President’s objection would require a two-thirds majority in both the House and the Senate. This is no small matter. The Republicans command a majority in both chambers, which could contribute to the undoing of the deal, but Obama’s veto power will prove a hurdle: he will have to convince at least one-third of Democrats to vote with him, whilst the Republicans will have to persuade a significant number of Democrats to oppose the President. Some Democrats share Israeli concerns over Iran and may well go against the President – that remains to be seen in September.

Iran Deal: Implications for World Relations

A diplomatic success in a region diseased by conflict sends a very strong message. But this success cannot be measured solely by renewed Iran-US relations; many other actors benefit from the deal. Iran holds the world’s fourth-largest crude oil reserves and second-largest natural gas reserves – music to the ears of European and Asian companies, and if corporations benefit from the deal, so will their countries. International sanctions barred Iran from the use of SWIFT, the bloodline of international finance. As sanctions are lifted, Iran will be able to re-join SWIFT and fully reintegrate itself into the global financial system.

Iran’s reintegration into the world economy highlights a few points. The deal opens the door to a more lucrative flow of commerce between Iran and Eurasian markets; one such market is Turkey, an energy-poor country. The deal therefore reinforces Turkey-Iran energy trade relations, which is arguably beneficial to the region.

We cannot forget Russia when looking at the implications of the deal. With Iran joining the global oil and gas market, prices will likely decrease. Russia thus fears that the relief from international sanctions on Iran could disrupt its hegemony over European energy imports. Its trade relations with Europe have been shaky, to say the least, ever since Russia’s annexation of Crimea resulted in a series of EU sanctions affecting several sectors, including energy. In light of this, a more amicable Moscow-Tehran relationship is probably necessary. Severed relations with the West have compelled Russia to look elsewhere, and Asia and the Middle East – including Iran – are obvious options. The nuclear deal can therefore also present itself as an opportunity in disguise.

Finally, what does this deal mean for Europe? After all, is it not a European slogan to favour soft power over hard? For that reason, Europe should be overjoyed that effective multilateralism has prevailed in the plagued Middle East. The EU’s efforts in pushing for the Iran deal have not gone unnoticed; that is because Europe benefits tremendously from it. Whilst no European power believes that the deal will change Iranian anti-American and anti-Israeli rhetoric or bring about significant change in the Middle East, Europe backs it for a few reasons.

The Iran deal is a win for Europe because it helps the EU act as a more assertive player in the Middle East, a region where it has previously struggled to do so. It proves that multilateralism should not be underestimated, and that the powers of diplomacy, as opposed to military resolution, deserve to be consulted first. We could point to the economic benefits of the deal for Europe, but by doing so we would fail to recognise the larger picture: the success of diplomacy – a word that resonates as a founding principle of the European Union as a normative power in global relations.

The Iran deal was never going to please everyone, and for good reasons, but assessed within the context of a crisis-ridden Middle East, it is a success. We just have to hope that Iran adheres to the terms of the JCPOA. If Iran cheats, it will get caught, and if it does get caught, the consequences will be severe. Iran’s best bet is to comply; otherwise we can expect a full-on war – the very outcome the deal sought to avert in the first place. Complications surround the deal, and paranoia and distrust will keep critics talking for a long time, but in the end, let us give diplomacy a chance.

Author Biography

Ayooshee Dookhee is a Politics and International Relations graduate from Royal Holloway, University of London. She is currently working for a public policy, public affairs and campaigns consultancy in London. In September 2015, she will be starting a Masters in International Relations with a focus on Middle East politics at the University of Leiden in The Netherlands. She has a growing interest in issues and conflicts in the Middle East, having completed her dissertation on the Palestinian-Israeli conflict. Her further interests are in international human rights, women’s rights, gender and minority equality, and the politics of the European Union.

She can be found on LinkedIn

*Cover image ‘Iran Deal Reached in Vienna – 14 July 2015’ by the European External Action Service

Genetically Modified Food and the Second Green Revolution

Mervyn Piesse, Research Manager, Global Food and Water Crises Research Programme – Genetically Modified Food and the Second Green Revolution – Download PDF

Key Points

  • Just as the Green Revolution was made possible by multiple technologies, such as higher-yielding food crops, more efficient irrigation, fertilisers and pesticides, the second Green Revolution will be made possible by numerous innovations.
  • While the Green Revolution is arguably still ongoing in parts of the world, most notably sub-Saharan Africa, other regions have reached a barrier and a new wave of research and development is required if agricultural production is to sustainably increase.
  • Genetic modification has mainly been applied to crop production, where it is used primarily for the control of pests and disease. In livestock, it is used to speed up the rate of maturation or to make animals more resistant to disease.
  • Diligent testing, regulation and oversight of genetically modified food will help reduce any possible associated risk.

Summary

The genetic modification of food is heavily politicised and there are numerous arguments for and against the practice. Proponents of genetically modified (GM) food claim that it will increase food security, primarily by making agriculture more efficient, while detractors argue that the technology is potentially dangerous and the cultivation of GM food could have unforeseen consequences that will ultimately prove detrimental to food security. By examining the use of GM in plant and livestock agriculture, this paper will consider both sides of the debate. Ultimately, some varieties of GM food have the potential to strengthen global food security while others are likely to be self-defeating in the long-term.

Analysis

Genetic engineering, which involves the modification of the genetic makeup of living organisms, has been practised for over 40 years, and commercial applications have been available for the last two decades. Biotechnology, of which genetic engineering is a part, is a relatively young field and is yet to reach its full potential. Equally, however, the possibility of unintended or unforeseen problems arising from the technology remains, and it would be irresponsible to rush into the widespread production of GM food without first testing the safety of each product.

Just as the first Green Revolution was composed of multiple innovations that improved agricultural production, the second Green Revolution will also be made up of many technologies, one of which is likely to be genetic engineering.

The Green Revolution

The Green Revolution refers to a series of research and development projects that increased agricultural production globally. Prior to the beginning of this process, in the 1930s, there were fears that the world was facing a Malthusian future in which the population would grow faster than the food supply.

Despite increased land scarcity and the population roughly doubling, the Green Revolution ensured that cereal crop production increased three-fold. This increase in agricultural productivity was driven by international research organisations, such as the International Maize and Wheat Improvement Centre, the International Rice Research Institute and the Consultative Group on International Agricultural Research. Much of the research conducted by these organisations was, and continues to be, state-funded. Private companies now conduct significant biotechnology research, which is often profit motivated. This distinction, between public and private interests, informs a large part of the argument against the adoption of GM food. As private interests are perceived to be more profit-oriented there are fears that they will be less inclined to act in the public interest.

While these international research institutes mainly focussed on the development of high-yielding crop varieties, other improved inputs, such as fertiliser, irrigation and pesticides, also drove agricultural production increases during the Green Revolution.

Not all outcomes from the Green Revolution were positive, however, and solutions to these challenges are yet to be adopted in many parts of the world. Unintended consequences relating to unsustainably high water, pesticide and fertiliser use have degraded landscapes. Some of the reduction in yield growth since the mid-1980s is associated with this degradation. For the most part, however, these challenges were caused less by the technologies themselves than by the policy environment in which the inputs were used. In some countries, water and other inputs were, or continue to be, heavily subsidised. Under such policy settings there is often little impetus for farmers to reduce the use of these inputs until yields begin to noticeably decline.

Arguably, the Green Revolution also increased global disparities in the agricultural sector, as some parts of the world missed out on most of its benefits. Sub-Saharan Africa, for instance, did not experience the same level of agricultural development as other regions, leaving it even further behind. While the International Institute of Tropical Agriculture was established in Nigeria in the 1960s, it was not until the 1980s that its work began to increase yields in the region. Most of the yield increase came from the development of non-traditional crops, such as maize. Crops that are widely grown in sub-Saharan Africa, such as sorghum, millet and cassava, which many poorer, smallholder farmers continue to grow, largely went without any major increase in yield.

The first Green Revolution is not yet complete in sub-Saharan Africa. Many subsistence farmers are yet to adopt mechanised farming practices, higher-yielding crop varieties, improved irrigation systems or increased fertiliser use. In one respect, however, the region is in an enviable position: it can learn from the mistakes made elsewhere, such as the overuse of groundwater and fertilisers that has left many regions of the world in a far less secure position. The region also stands to benefit from advances made in precision agriculture, which allow for more targeted use of water and fertiliser.

While sub-Saharan Africa is still able to reap rewards from the first Green Revolution, other parts of the world have reached a barrier. Agricultural research and development programmes are required to lift yields and identify new methods of sustainable farming that can produce more food with fewer inputs. Without continued effort to secure the food supply for future generations, global food security is likely to come under pressure. Over the course of the twenty-first century, the food supply is likely to be strained by demographic change (particularly population growth), and changing climate conditions.

The Second Green Revolution: The Role of Genetically Modified Food

There are calls for a second Green Revolution to drive new agricultural research and development programmes. Arguably, with the spread of precision agriculture and biotechnology, the second Green Revolution is already underway. Just as the first Green Revolution was composed of multiple innovations, the second is also likely to be built on multiple technologies. Genetic engineering is one technology that will fuel the second Green Revolution and improve global food security.

GM is a broad field and there are many types of GM products available or in various stages of development. Some products use GM techniques to promote more rapid growth, resistance to disease and pests, or to deliver vitamins, minerals or nutrients through a process known as bio-fortification.

There are also two ways to genetically modify an organism. The first, known as transgenic modification, involves the movement of genetic material from one variety or species of organism to another. The second involves changing, or editing, the existing genetic material of an organism.

Genetically Modified Crop and Livestock Production

There are two groups of GM crops that are currently widely grown. The first are altered to make them resistant to herbicides, such as glyphosate. These crops allow farmers to control weeds without harming their crop. The second type, known as Bt crops, produce a natural insecticide inside the parts of the plant that pests eat. These crops have genes from a common soil bacterium, Bacillus thuringiensis, which has been used as a natural bio-pesticide for nearly a century and continues to be widely used in organic agriculture. Other GM crops that are currently in development, such as non-browning fruit and vegetables, could reduce food waste, which is a major contributing factor to global food insecurity.

In livestock production, GM technology is used to make animals grow at a faster rate or become more resistant to disease. Commercial applications of genetic engineering in the livestock sector are currently limited to salmon. AquAdvantage salmon, a genetically modified breed of Atlantic salmon, is an early example of GM livestock. The fish carries genetic material from two other fish species: a growth hormone gene from Chinook salmon, placed under the control of a genetic “switch” from ocean pout. Conventionally, Atlantic salmon produce growth hormone only in the summer months, but with these two genes AquAdvantage salmon produce it throughout the year. Uninterrupted exposure to the hormone means that AquAdvantage salmon reach maturity in 18 months instead of the 36 months taken by conventional Atlantic salmon. The faster growth rate allows producers to grow and harvest more salmon in a shorter period of time.

The Case for and Against Genetically Modified Food

Advocates of genetic modification maintain that people have manipulated plant and animal genetics since the beginning of agriculture and all biotechnology does is speed up this process. While GM techniques enable scientists to quickly alter the genetic makeup of organisms, and do away with the laborious and time-consuming task of traditional selective breeding, they do result in organisms that probably would not have occurred naturally. This, however, does not necessarily mean that GM food is unsafe or environmentally harmful.

Many opponents of GM food argue that because these foods do not occur naturally they are somehow dangerous to the environment or are unhealthy. GM products undergo rigorous tests to ascertain their risk to human health and the natural environment prior to being released to the market. Of course, not every product can be declared absolutely safe, and unforeseen or unexpected dangers can arise, but these tests certainly reduce the risk of GM crops being unsafe.

Reducing the amount of food that is lost to pests and disease is the main reason for developing GM food. GM cassava, for instance, which is grown in some parts of Africa, is protected against cassava mosaic virus and brown-streak virus. Resistant crops are not without their problems, as the pests and diseases are eventually likely to evolve to overcome the resistance gene. This process locks genetically modified organisms (GMOs) into an ever-escalating battle against harmful organisms. GM crops cannot eradicate pests and disease, but they do give farmers another tool to manage outbreaks. An overreliance on the technology, however, is likely to be self-defeating in the long term.

It is also correct that there is no consensus about the safety of GM food, but, equally, there is no consensus about the danger of GM food. The debate on scientific consensus is a moot point, designed more to distract than inform. If every innovation or technology were required to pass the consensus test, there simply would be no technological advancement.

A more pertinent argument relates to intellectual property and the right to own and propagate seed. Some biotechnology companies have developed genetic use restriction technology (GURT), colloquially called “terminator genes”. When activated, these genes make the seed and the crop it produces incapable of reproducing and results in farmers having to purchase new seed from the owner of the seed technology. A major argument against GM food maintains that farmers will become overly dependent on a handful of biotechnology companies that could adopt extortionate practices. As many farmers rely on hybrid crops, which give a mixture of inferior varieties if re-sown, many of them were already purchasing seed every year prior to the introduction of GM crops. Furthermore, biotechnology companies have never made GURT commercially available and have no plans to do so. While they could renege on this commitment and introduce extortionate practices, as they operate in an industry that has clear and significant effects on the public interest they would be foolish to do so.

Biodiversity, which is an integral part of a secure global food system, could also be reduced because of GM crops. There are fears that GMOs will either interbreed or compete with natural varieties, thereby driving them to extinction. Some GMOs, however, are less likely to enter the natural environment than others, further ensuring their containment. GM salmon, for instance, is generally farmed in aquaculture conditions far from natural water bodies. In these cases, GMOs are unlikely to interact with non-GMOs. GURT could also resolve this problem, as any GMO that escaped into the wild would be unable to breed and compete with other organisms, but taking this course of action could re-invigorate the fears of corporate domination previously discussed.

Conclusion

As GM food production has the potential to disrupt food security, a vitally important component of human survival, due diligence is not just necessary, but obligatory. At the same time, however, ignorance and fears of the unknown should not retard the growth of an industry that has the potential to considerably strengthen global food security.

Biotechnology standards, developed by the World Health Organisation, maintain that all new GM food should be tested on a case-by-case basis. National food safety authorities should be encouraged to adopt these standards. If these standards are not strong enough for consumers, many regions are introducing labelling laws. Labelling GM food products will help ensure that consumers have greater choice over the food decisions they make, but it is unlikely to enable them to completely avoid GM products. GM crops are often fed to livestock, especially in the US, where more than 90 per cent of all corn and soybeans are genetically modified in some way.

On its own, GM food is unlikely to make the world food secure. Instead, it is part of a much larger, multifaceted and complex solution to the challenge of achieving global food security in the twenty-first century.

Biodiversity: An Integral Part of Food Security

Mervyn Piesse, Research Manager, Global Food and Water Crises Research Programme – Biodiversity: An Integral Part of Food Security

Background

Biodiversity provides an insurance policy for the global food system. If disease or other changes in the natural environment occur, different species can provide a source of genetic material that can be harnessed to protect against these threats.

Comment

The role of biodiversity in preserving food security is well established. Botanists are able to utilise genes from wild relatives of domesticated food crops to develop new varieties that are able to survive more hostile environments. This process is often cheaper and less controversial than genetic modification, but it is dependent on having access to thousands of different species to find the required genetic material.

Several food crops have been protected by selecting genes from wild relatives of domesticated species. For instance, Australian scientists have recently utilised genes from wild plant species to further the development of wheat resistant to UG99, a form of wheat rust fungus that emerged in Uganda in 1999. Winds carried the disease from East Africa into Southern Africa and the Middle East. From there it could spread into South Asia. It is believed that, if left unaddressed, up to 30 per cent of global wheat production could be threatened. Finding and developing natural resistance to the fungus will help protect the world’s most widely grown food crop. Without a wide array of wheat varieties, however, the chance of finding resistant genes becomes much lower.

Over the past century, according to The Economist, about three-quarters of global crop genetic diversity is believed to have been lost. As the level of biodiversity decreases the potential for solutions to future crop diseases and changing environmental conditions becomes narrower.

Modern agricultural practices have downplayed the importance of biodiversity by concentrating on a few, mainly high-yielding, varieties. As a result, the world is highly dependent on a small number of plant and animal species for its food supply. Currently, about 30 crops provide 95 per cent of the world’s food, with five cereal crops – rice, wheat, maize, millet and sorghum – accounting for 60 per cent of the world’s food. These food crops are composed of a relatively small number of domesticated varieties.

A heavy reliance on a small number of crop species increases the vulnerability of the food supply. Reduced biodiversity increases the risk of diseases spreading through the international agricultural system. There is concern that a sixth mass extinction could damage food security. While extinction is a natural process, it appears to be occurring at a much greater rate than usual. Many of these concerns, however, lean towards alarmism: almost 70 per cent of plant species are currently threatened with extinction, but the likelihood of them all dying out at the same time is remote, and losses on that scale would take centuries to become apparent. Nevertheless, as plants are the basis of the global food system and wild species are a useful repository of genetic material, a loss of diversity would pose considerable problems for future food security.

International efforts have been made to store and preserve a wide range of plant genetic material. Seed banks are costly to establish and maintain; in addition to unstable financing, many face challenges such as civil unrest, urban encroachment and natural disasters. These efforts require continued international co-operation and funding.

Preserving a wide array of genetic material provides an insurance policy against future problems. While costly, it helps ensure the security, and ultimately the resilience, of the global food system.

Precision Agronomics Australia: Frank D’Emden

Geoffrey Craggs, JP, Research Analyst, Northern Australia and Land Care

Key Points

  • Farmers are increasingly relying on technology and spatial information to optimise crop and pasture management.
  • Data collection and analysis to inform soil management is fundamental to supporting cropping in the WA Wheatbelt and increasingly important for irrigated agriculture and horticulture.
  • Farmers and land managers need training and information to understand the technology and devices that are readily available to them.
  • Technological advances that can be applied to soil management and the farming sector are constantly occurring.

Introduction

In most modern industries, computerisation and the use of technology are critical components to business success. In agriculture, advancements in technology through the application of new tools and methods in soil analysis are having a profound benefit on food production and land management. The term ‘precision agriculture’ describes the use of technology to allow farmers and land managers to improve the health and productivity of their soils.

Precision agriculture uses a range of existing and emerging technologies, from satellites that produce infrared imagery and global positioning information, to miniaturised, interconnected ground sensors that constantly measure soil moisture levels and feed the data to a computer managing complex farm irrigation systems. Other emerging systems enable multiple farm machines to operate without drivers, performing the range of operations that once required a farmer at the controls.

Recently FDI interviewed Frank D’Emden from Precision Agronomics Australia, a company providing technology-based solutions to farmers, consultants and industry groups across Western Australia.

Commentary

FDI: What services does Precision Agronomics Australia offer?

FD: Precision Agronomics Australia (PAA) principally offers soil mapping services as the core element of the business. Mapping is conducted using geophysical equipment in combination with soil coring and analysis to ‘ground-truth’ the geophysical data. In analysing this data we are particularly looking to identify soil-related factors that are constraining crop growth. The uniqueness of the services we offer also relates to the equipment and methods we use to conduct our mapping and data collection. For instance, we are able to convert a basic soil map into a ‘prescription map’ to be used to apply a particular fertiliser or soil ameliorant (something that helps soil such as lime or gypsum). After entering a prescription into a farmer’s machine, as the farmer traverses his paddock, the machine then distributes product at different rates in different parts of the paddock, according to the previously collated data. This is known as Variable Rate Technology (VRT).
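The Variable Rate Technology workflow described above can be illustrated with a toy sketch. This is purely a hypothetical illustration, not PAA's actual software: the zone names, bounds and rates are invented, and a real prescription map would be a georeferenced raster or shapefile rather than a handful of rectangles. The essential idea survives, though: the machine continually looks up its application rate from its current position.

```python
# Hypothetical sketch of a VRT rate lookup. A prescription map is reduced
# to rectangular management zones, each with its own application rate
# (kg/ha) for a soil ameliorant such as lime. All values are illustrative.

# Each zone: bounds are (x_min, x_max, y_min, y_max) in paddock metres.
PRESCRIPTION_MAP = [
    {"zone": "acidic_sand", "bounds": (0, 200, 0, 150), "rate_kg_ha": 2500},
    {"zone": "loam",        "bounds": (200, 400, 0, 150), "rate_kg_ha": 1000},
    {"zone": "gravel_rise", "bounds": (0, 400, 150, 300), "rate_kg_ha": 0},
]

DEFAULT_RATE = 0  # outside all mapped zones, apply nothing

def application_rate(x: float, y: float) -> float:
    """Return the prescribed rate (kg/ha) for the machine's position."""
    for zone in PRESCRIPTION_MAP:
        x_min, x_max, y_min, y_max = zone["bounds"]
        if x_min <= x < x_max and y_min <= y < y_max:
            return zone["rate_kg_ha"]
    return DEFAULT_RATE

# As the machine traverses the paddock, it re-queries the map:
print(application_rate(50, 75))    # a position inside the acidic-sand zone
print(application_rate(350, 100))  # a position inside the loam zone
```

The point of the sketch is the separation of concerns: the agronomic knowledge lives in the map, produced beforehand from soil mapping and coring, while the machine only needs a fast position-to-rate lookup.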

The ‘prescription map’ can be designed to address soil constraints such as acidity or sodicity. Here for example, we are able to target the application of lime or gypsum to different areas, depending on spatial distribution of different soil types. The data contained in the prescription map enables farmers also to apply products such as potassium, phosphorus or nitrogen in particular locations within a paddock where they are most needed and with minimum wastage.

Most modern grain harvesters come equipped with a yield mapping capability. Here PAA is able to ‘clean’ and process the collated data relative to a particular type of machine as well as calibrating the measured yields using the actual delivered tonnages. This is particularly important when multiple harvesters are used in combination on a farmer’s paddock or when less-experienced drivers are operating equipment. The yield mapping is important because it can help farmers avoid over-fertilising in one area or under-fertilising in another part of a paddock and replace the nutrients that have been removed by the crop. As well, we are able to identify the extremities of other constraints previously determined through the PAA data collection process. If we see an area that appears to be suffering (or otherwise constrained), we are able to investigate through our soil mapping to determine causes and work out the most cost-effective treatment. Sometimes this might involve soil renovation, such as deep ripping, clay spreading or spading, or in more serious cases a change of land-use may be the most appropriate action.
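The calibration step described above can be sketched in its simplest possible form. This assumes a uniform rescaling of the mapped yields so that their total matches the delivered tonnage; PAA's actual cleaning and per-machine calibration is more sophisticated, so treat this only as an illustration of the principle.

```python
# Minimal sketch of yield-map calibration (a simplifying assumption, not
# PAA's actual method). Harvester yield monitors drift, so mapped yields
# are uniformly rescaled until their total matches the tonnage actually
# delivered over the weighbridge.

def calibrate_yields(mapped_tonnes: list[float],
                     delivered_tonnes: float) -> list[float]:
    """Scale per-cell yield estimates so they sum to the delivered total."""
    mapped_total = sum(mapped_tonnes)
    if mapped_total == 0:
        raise ValueError("yield map contains no recorded yield")
    factor = delivered_tonnes / mapped_total
    return [y * factor for y in mapped_tonnes]

# The monitor recorded 105 t across four map cells; the silo weighed 100 t:
calibrated = calibrate_yields([30.0, 25.0, 20.0, 30.0], 100.0)
print(sum(calibrated))  # the total now matches the weighbridge
```

A uniform scale factor preserves the relative spatial pattern of the yield map, which is what matters for identifying high- and low-performing areas, while anchoring the absolute values to a trusted measurement.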

PAA’s soil mapping services also apply to irrigated agriculture and assist farmers to best-manage their water resources. For example, in the Harvey region, due to low dam levels this year, farmers are currently restricted to using only 40 per cent of their maximum water allocation. Therefore, it becomes vital for farmers to know and understand how to best use their water allocation by determining the optimum methods of application and irrigation, depending on soil type and other related factors.

FDI: How do these services benefit farmers and land managers?

FD: PAA works mainly in the WA Wheatbelt. Our services benefit farmers and land managers by enabling them to optimise inputs of fertiliser and soil ameliorants to where they are most needed. In the Wheatbelt, over the last 30 years, farms have tended to get bigger, shifting away from mixed cropping and livestock production in smaller paddocks that were fenced according to soil type, to larger-scale paddocks used solely for crop production that may encompass several different soil types that have varying production potential.

Most farmers these days have the building blocks for VRT where a farm machine, for instance, knows its own location, steers itself and can increase or decrease the rates of fertiliser or soil ameliorants. For growers, the technology improves crop yields and soil productivity by optimising the amounts of inputs applied from the machine according to the soil and crop requirements. Through soil-mapping and analysis of biomass and yield maps, we know which areas have a capability for higher or lower production and we can then apply land use management measures that target specific requirements.

In irrigated agriculture, a prescription map also helps farmers to determine watering regimes by enabling them to understand the variation in soil water holding capacity. Even in dryland agriculture, farmers can better govern rates of fertiliser application by reviewing their rainfall records, looking at the medium-range weather forecasts to determine the likelihood of more rain, and varying the amount of nitrogen to even out their risk exposure. These decisions can be supported by data from soil moisture sensors that provide hourly measurements of soil water content that farmers can access from anywhere on their mobile devices.

FDI: What shortfalls, if any, exist in the collection, analysis and processing of information relating to soil management?

FD: Data collection is becoming easier and cheaper. For instance, we can work out the variations in soil type and we are able to determine rates of fuel consumption in tractors in a paddock. We can also map topography and its effect on crops. The cost of soil analysis is a constraint as the geophysical data needs to be ‘ground-truthed’ against soil samples. This can represent up to 30 per cent of the costs associated with soil mapping.

We are working towards having the capability to conduct ‘in-situ’ soil analysis by taking an instrument into the field and determining exactly the physical and chemical properties of soil. There are instruments now that will determine the level of surface pH (a measurement of acidity) and there are other, vehicle-mounted prototype instruments for conducting soil analysis in the field, but these need to be calibrated to differing soil types. For these reasons, PAA is looking towards instruments that use near- and mid-infrared soil analysis techniques as those sorts of instruments can now be operated under field conditions.

Currently a significant constraint is the amount of manual ‘data wrangling’ involved in turning raw data into robust and usable information that in turn enables farmers and land managers to make decisions. We are also conscious that there are many ‘hidden gems’ in the data we collect, so data mining techniques are of particular interest to us. The interpretation of geophysical data at different scales and in different regions is an area where we are constantly learning.

Sometimes we have a situation where there is really solid data and good interpretation and the barrier is in being able to implement a solution. This is often the case in irrigated agriculture and horticulture where there is fixed irrigation infrastructure and the costs of renovating it to enable variable rate irrigation are prohibitive. In these cases there are often still gains to be made from using weather, soil moisture and crop growth data to optimise irrigation decisions. In many cases where water is cheap, growers are happy to overwater to ensure even crop growth and ripening, factors which far outweigh the costs of irrigation. But we are increasingly seeing growers of lower value irrigated crops (e.g. dairy pastures) wanting to optimise their irrigation due to the energy costs of pumping the water. So you can see that there is a whole raft of inter-related economic factors that influence the decision to adopt precision agriculture technology.

The physical and the biological aspects of soil management are areas that are possibly under-represented in terms of our understanding of how they vary throughout the landscape. Soil compaction, for example, is a sleeping issue, and only recently are people coming to realise the severity of subsoil compaction across the WA wheatbelt. ‘Decompaction’ allows crop roots to explore the full profile for moisture and nutrients, and in doing so, effectively recharge the subsoil with carbon and its associated biology. Effectively it gives earthworms, termites and other soil fauna a much better chance to actively mix and change soil structure. Once these foundations are right, you can start to add the chemistry on top to ensure nutrients, pH and associated aspects are correct. To a certain extent, as an industry we’ve been blind-sided by these physical and biological issues in our focus on soil nutrition and chemistry.

To a certain extent, the role that microbiology plays in soil management is not well understood by the broader industry, and is sometimes not given due consideration in pesticide application decisions. This may be due to soil biology being ‘out of sight, out of mind’, and the fact that soil biology cycles tend to be longer term while growers are, for good reason, focussed on the here-and-now of crop management decisions.

There has been some excellent local research into soil biology, particularly in relation to soil fungi and nitrogen fixing bacteria, as discussed in previous FDI feature interviews. Another area where research is needed relates to soil fertility in WA in terms of the best analytical methods to determine the optimum levels of potassium relative to soil type, and how the soil’s natural supply of potassium becomes available to the plant in different soil types.

FDI: What needs to be done to meet these shortfalls?

FD: Education and training are needed for consultants, farmers and land managers about the practical aspects of implementing precision agriculture and how they can benefit from its applications. Also, across WA, for example, there is an incredible diversity of climate, landform and geology, which means that the decision rules for VRT vary enormously depending on where you are and what you’re growing. Growers and consultants in WA should seek to understand the aspects of soil management most important to them, as well as the soil variations and the changes in inputs needed to manage those soils and their crops and pastures. This knowledge will build their understanding of how all of those factors work, thus enabling them to be more confident about their decisions to use VRT for fertilisers, soil ameliorants and even herbicides, pesticides and fungicides.

From a technical perspective, there is also a need for a degree of training that relates to computers, digitisation, electronics, interpreting data as well as data management software that is incorporated in delivering precision agriculture. Making it easier to access and interpret spatial data to help in decision making is certainly a priority, with industry pundits now calling on a movement from precision agriculture to ‘decision agriculture’. Good practitioners of precision agriculture have always known that it’s about confidence in making the right decision. I guess they’re just trying to make a point that having lots of data is useless unless you know how to make a decision from it.

FDI: What is the future for data collection services in the agricultural sector? What are the emerging technologies that might have an impact?

FD: New sensors are becoming available that will enable infrared analysis of soil which is exciting. Miniaturisation of analytical instruments through a technology known as microelectromechanical systems (or MEMS for short) is showing a lot of promise for environmental sensing. New satellites are collecting visual data of the Earth’s surface at greater time frequency and at high resolution, providing very high-quality images which aid in crop assessment and pasture management. The satellites are also capable of taking images across different spectral bands, such as near-infrared and short-wave infrared which can detect moisture, temperature and crop stress.

Unmanned Aerial Vehicles (UAVs) are platforms on which measuring instruments and cameras can be mounted to collect high-resolution images. These machines are in increasing use in farming and land management. UAVs can also be fitted with devices that measure surface temperatures, or the temperature in a crop canopy, as an indicator of whether a crop is stressed through a lack of water, disease or insect infestation. UAVs are affordable and can be easily operated by one person.

Figure 1: A UAV in agriculture.

The Internet of Things (IoT) is becoming increasingly important to farming practice. Applied to precision agriculture, the IoT can link devices located in the field that collect data in real time into decision-support systems available to farmers and contractors. Examples are small, cheap, WiFi-capable devices that use telemetry to transmit monitoring data, such as soil moisture, water usage and soil chemistry, to determine fertiliser requirements.
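A decision-support system of the kind described can be reduced to a very small sketch. This is a hypothetical illustration (the threshold value and function names are invented, and real systems use crop- and soil-specific refill points): hourly soil-moisture telemetry is checked against a threshold below which irrigation is recommended.

```python
# Hypothetical IoT decision-support sketch; the 22% refill point is an
# illustrative value, not an agronomic recommendation. Hourly volumetric
# soil-moisture readings (%) arrive by telemetry from in-field sensors.

REFILL_POINT_PCT = 22.0  # moisture level below which irrigation is advised

def needs_irrigation(hourly_readings_pct: list[float]) -> bool:
    """Recommend irrigation if the latest reading is below the refill point."""
    if not hourly_readings_pct:
        raise ValueError("no telemetry received")
    return hourly_readings_pct[-1] < REFILL_POINT_PCT

# A day of readings drying down towards the refill point:
readings = [26.0, 25.5, 24.8, 23.9, 23.1, 22.4, 21.7]
print(needs_irrigation(readings))  # the latest reading is below 22%
```

In practice the alert would be pushed to the farmer's mobile device, matching the interview's point that the value of the sensors lies in the decisions they support rather than in the raw data stream.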

However, it is the ability to process and analyse all this data that really lies at the heart of making it useful to farmers. A lot of the work being done needs to ‘get out of the lab’, so to speak, so that we can really understand how it can be applied in real-world situations. This is what Precision Agronomics specialises in: taking the latest research and technology and applying it to real-world agricultural situations in ways that help farmers be more profitable, reduce their risk and reduce their impact on the environment.

*****

About the Interviewee:

Frank D’Emden works at Precision Agronomics Australia. Frank’s career in agriculture began at an early age working as a farmhand on a broadacre farm in the Esperance district. After graduating with a Science degree with honours in Natural Resource Management in 2000 and a Masters of Agriculture Science in 2005 (both from the University of Western Australia), Frank worked for the Department of Agriculture and Food WA from 2006 to 2009, joining Precision Agronomics Australia in October 2009.

Frank’s role as Technology Development Manager involves undertaking research and development in all aspects of precision agriculture, with a focus on using electromagnetics and gamma radiation data to create high-resolution soil and prescription maps for variable rate applications. Frank has also led a number of R&D projects in conjunction with government agencies and grower-funded organisations.

Corruption, Sand Mafias and Water Security in India

Benjamin Walsh, Research Analyst, Global Food and Water Crises Research Programme

Background

Indian Prime Minister Narendra Modi has called for a new Chicago to be built every year. Cities require lots of buildings, which is why sand mining, mostly illegal, has exploded across the country as people seek to supply sand to construction firms. The construction boom, however, has revealed the absence of any effective institutional safeguards governing sand mining. Sand is vital to securing India’s groundwater supply, but state government corruption and poorly enforced regulations have allowed “sand mafias”, illegal miners who mine and transport sand, to thrive and threaten that supply.

Comment

In India, politics and current environmental laws feed the corruption and illegal sand mining operations contributing to the country’s water woes. Running a reputable business in India is extremely hard to do. According to the New York Times, the existing legal system pushes anybody looking to construct something to pay bribes to the necessary officials. The World Bank ranks India 130 out of 190 countries for ease of doing business and an awful 185 out of 190 for one’s ability to acquire a construction permit. A possible reason for these figures is that the acquisition of a mining commission does not put legal miners ahead of illegal miners. Permit applicants must wait for long periods for applications to be processed and, even when a permit is acquired, the business sees no immediate economic benefit. Commissioned miners must pay for land reconstruction and are not able to expand the size and depth of their mandated operation. It is easier, and more cost effective, to pay a number of bribes up front to politicians, engineers and managers and carry on as an illegal miner.

Such a system is a natural breeding ground for corrupt officials. Politicians have been reluctant to take any concrete measures that address India’s worsening sand and, by extension, groundwater situation. Sand mining is a policy area critical to mitigating the depletion of India’s groundwater but politicians, uninterested in implementing laws and building institutions to back them up, prefer to ignore the policy problem and make money from it instead. The fortunes to be made through bribery, graft and the facilitation of illegal sand mafias are more lucrative than any policy that conserves sand. Without officials to rein in miners, who apparently do not care about the health of the rivers they mine, the quality and quantity of India’s ground and surface water, as well as its institutional security, is being seriously threatened.

Sand is vital in securing India’s most critical water resource: groundwater. India is the world’s largest user of groundwater, which supplies more than 60 per cent of its irrigated agriculture and 85 per cent of its drinking water. Many argue that falling groundwater supplies are due to cheap drilling technologies and the Green Revolution, but sand mining is equally to blame. In the Indian state of Kerala, the village of Manimala has suffered the effects of sand mining: the river running through the village has all but dried up. Sand is a vital component of aquifers, as it acts as a material capable of storing water for later extraction. Without sand, the water table (the uppermost layer of the aquifer) drops. Furthermore, rainfall is not absorbed and rivers do not fill up. Wells run dry and eventually the deeper wells sunk in search of remaining water fare much the same. Without sand, water cannot be captured; some have even likened sand mining to water theft. Sand is a necessary ingredient in recharging groundwater levels: it acts as a link between the flowing river and the water table and provides structure to the flow of the river and the riverbanks.

Overall, the mining of sand by criminal gangs shows how corruption and ineffective institutions are obstacles to water security that cannot be ignored. India is a helpful reminder that to be water secure, one must be politically secure first.

Carbon Sequestration – Why and How?

Christopher Johns, Research Manager, Northern Australia and Land Care Research Programme

Key Points

  • To achieve the global warming targets set by the Paris climate change conference it may be necessary to actively remove and store greenhouse gas currently in the atmosphere.
  • The capture and storage of carbon will be key to reducing future greenhouse gas emissions.
  • The storage of greenhouse gas underground is a promising solution but there are still capability gaps to be filled before its large-scale implementation can be achieved.
  • Large quantities of greenhouse gas can be stored in the oceans but the cost may be prohibitive and environmental consequences are unknown.
  • Restoring and improving our agricultural soils will permanently sequester carbon and improve soil health and productivity.

Summary

There is a strong and growing body of scientific research supporting the view that, to achieve the targets set by the Paris climate change conference, greenhouse gas must be actively removed from the atmosphere and stored. Carbon capture and storage technology will also have a key role in reducing future greenhouse gas emissions. There is a range of storage options. Storage underground is technologically and financially feasible, but gaps in capability still exist and implementation time may be significant. It may be possible to store large volumes of carbon in the ocean; however, this would require very large capital investment in infrastructure and may have unforeseen, adverse environmental consequences. Carbon sequestration from revegetation and plantation programs can provide a significant but shorter-term contribution to atmospheric greenhouse gas reduction. Actively increasing soil carbon can make a significant contribution to the reduction of greenhouse gases currently in the atmosphere while improving the quality and productivity of our agricultural soils.

Analysis

In a recent address to the Royal Society, Lord Nicholas Stern, the former Chief Economist of the World Bank and current chair of the Grantham Research Institute on Climate Change and the Environment at the London School of Economics, stressed the importance of ‘negative greenhouse gas (GHG) emissions’ if global temperatures are to be stabilised. Negative emissions, put very simply, are the active removal of GHG already placed in the atmosphere by the burning of fossil fuels and other human activities. Many scientists believe that the reduction of future GHG emissions alone will not achieve the target set by the Paris climate change conference of restricting global warming to two degrees above pre-industrial levels; the quantities of GHG already in the atmosphere are beyond that point and the amount must, therefore, be lowered. Negative emissions are also an important mitigation, as some industrial sectors are unlikely to achieve the appropriate emission reductions in the short or medium term, and compensatory reductions will be required in other areas.

Lord Stern also assesses that carbon capture and storage or sequestration technology will be key to stabilising global temperature. This is the process of capturing waste carbon dioxide (CO2) from large sources, such as fossil fuel power plants, transporting it to a storage site, and depositing it where it will not enter the atmosphere. Ten countries, collectively responsible for approximately one third of global annual emissions, explicitly refer to carbon capture and storage in their ‘intended nationally determined contributions’ submitted prior to the Paris Climate Change Conference.

Fig 1. The Canadian Boundary Dam Power Station using carbon capture and storage technology. Source: Saskpower.com.

To be successful, both negative emissions and carbon capture and storage rely on the long-term or permanent storage of GHG. The options for long-term storage on the scale necessary to meet global warming targets, however, are limited. This paper provides an outline of some of the options and some analysis of their feasibility.

Carbon sequestration is the general term used for the capture and long-term storage of carbon dioxide. Capture can occur at the point of emission (e.g. from power plants) or through natural processes (such as photosynthesis), which remove carbon dioxide from the earth’s atmosphere and which can be enhanced by appropriate management practices. Sequestration methods include:

  • storing carbon in underground geological formations (geosequestration);
  • subjecting carbon to chemical reactions to form inorganic carbonates (mineral carbonation);
  • storing carbon in the ocean (ocean sequestration);
  • enhancing the storage of carbon in forests and other vegetation (plant sequestration); and
  • restoring and enhancing the storage of carbon in soil (soil sequestration).

Geosequestration

Geosequestration is the injection and storage of greenhouse gases underground. The most suitable sites are deep geological formations, such as depleted oil and natural gas fields, or deep natural reservoirs filled with salt water referred to as saline aquifers. Geosequestration is part of the three-component scheme of carbon capture and storage (CCS), which involves:

  • capture of CO2 either before or after combustion of the fuel
  • transport of the captured CO2 to the site of storage, and
  • injection and storage of the CO2.

This scheme seeks to reduce to near-zero the greenhouse gas emissions of fossil fuel burning in power generation and CO2 production from other industrial processes such as cement manufacturing and purification of natural gas. It is predominantly aimed at mitigating emissions of CO2, but geosequestration may also prove to be applicable to other greenhouse gases, particularly methane. The concept of CCS may also be applied to other long-term storage options (see ocean sequestration and mineral sequestration below). Of the storage options, however, geosequestration is thought to be the most promising due to higher confidence in the longevity of storage, the large capacity of potential storage sites and a generally greater understanding of the mechanisms of storage.

Mineral sequestration

Mineral sequestration (sometimes referred to as mineral carbonation) involves reaction of CO2 with metal oxides that are present in common, naturally occurring silicate rocks. This process mimics natural weathering phenomena, and results in natural carbonate products that are stable on a geological time scale. There are sufficient reserves of magnesium and calcium silicate deposits to fix the CO2 that could be produced from all fossil fuel resources. Though the weathering of CO2 into carbonates does not require energy, the natural reaction is slow; hence as a storage option the process must be greatly accelerated through energy-intensive preparation of the reactants. The technology is still in the development stage and is not yet ready for implementation; however, studies indicate that a power plant that captures CO2 and employs mineral carbonation would need 60 to 180 per cent more energy than an equivalent power plant without the capture and conversion process.

Ocean Sequestration

A carbon sink is anything that absorbs more carbon than it releases as CO2. Forests, wetlands, soils and the ocean are the most important natural carbon sinks. The ocean represents the largest carbon store on earth. Prior to the industrial revolution, it contained 60 times as much carbon as the atmosphere and 20 times as much carbon as the land vegetation and soil. The ocean has been a significant sink for manmade CO2 emissions, of similar magnitude to the land sink but, as with the land sink, the ocean sink will decrease in strength. Increasing CO2 concentration in the upper layer of the oceans is also causing ocean acidification, with potentially severe consequences for marine organisms and ecosystems. CO2 dissolves in seawater by combining with carbonate ions, but the number of these ions is limited and, as their concentration decreases, this will limit the rate at which CO2 is taken up by the ocean. A possible slow-down in ocean circulation may also reduce the ocean sink capacity. In addition to the dissolution process, phytoplankton in the surface layers perform photosynthesis and incorporate CO2 into biological material but, as with terrestrial photosynthesis, there comes a saturation point where other factors restrict further photosynthesis.

It has been proposed to bypass the natural ocean CO2 uptake mechanism and inject CO2 directly into the deep ocean to utilise its enormous storage capacity. Models suggest that CO2 injected into the deep ocean would remain isolated from the atmosphere for several centuries, but on the millennial time scale it would recycle into the atmosphere. Considerable uncertainties exist in our understanding of deep ocean chemistry and biology and the potential adverse impacts on ocean ecosystems. In addition, despite many years of theoretical work and small-scale experiments, the feasibility of ocean storage has not been demonstrated and the technologies for deep ocean CO2 transport and dispersal are yet to be developed.

Another proposed way to enhance the ocean carbon sink involves large-scale ocean fertilisation with iron to stimulate the growth and photosynthesis of phytoplankton (microscopic marine plants). This is one of several ambitious geo-engineering schemes that involve high uncertainty and risk but may provide a quick and effective means of halting or significantly slowing the rate of climate change.

Plant Sequestration

Plants use the energy of sunlight to convert CO2 from the atmosphere into carbohydrates for their growth and maintenance, via the process of photosynthesis. Natural terrestrial biological sinks for CO2 already sequester about one third of CO2 emissions from fossil fuel combustion. These natural sinks are a transient response to higher atmospheric CO2 concentration, which enhances the rate of photosynthesis. The uptake of CO2 by vegetation will decrease with time as plants grow to their full capacity and become limited by other resources such as nutrients, and as the regrowth potential of previously cleared or sparsely vegetated areas is exhausted. Biological storage could be enhanced through agricultural and forestry practices and revegetation, but the capacity is limited and the longevity of storage depends on the final fate of the timber or plant material. Carbon sequestration from revegetation and plantation programs, however, could provide a significant shorter-term contribution to climate change mitigation.

Soil Sequestration

It is estimated that soils contain between 700 gigatonnes (Gt, 10⁹ tonnes) and 3,000 Gt of carbon, which at the upper end is more than three times the amount of carbon stored in the atmosphere as CO2. Most agricultural soils, however, have lost 50 to 70 per cent of the original soil organic carbon pool that was present in the natural ecosystem prior to clearing and cultivation. When forests are converted to agricultural land, the soil carbon content decreases. This happens because organic matter in the soil decomposes following the disturbance while, at the same time, less carbon enters the soil because the clearance has reduced the biomass above ground, and practices such as stubble burning reduce it even more. Agricultural usages such as grazing, harvesting and tillage also tend to reduce soil carbon, as does the increased erosion that often results.
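As a rough order-of-magnitude check, the soil-to-atmosphere carbon ratio can be computed directly from the range quoted above. The atmospheric stock of roughly 800 GtC used here is an assumed, commonly cited round figure, not a value from this article:

```python
# Rough check of the soil vs. atmospheric carbon stocks.
# ATMOSPHERE_GTC (~800 GtC) is an assumed, commonly cited round figure;
# the soil range (700-3,000 GtC) is the estimate quoted in the text.
ATMOSPHERE_GTC = 800.0
SOIL_LOW_GTC, SOIL_HIGH_GTC = 700.0, 3000.0

ratio_low = SOIL_LOW_GTC / ATMOSPHERE_GTC
ratio_high = SOIL_HIGH_GTC / ATMOSPHERE_GTC

print(f"Soil/atmosphere carbon ratio: {ratio_low:.2f} to {ratio_high:.2f}")
```

Under these assumed figures, only the upper end of the soil range exceeds three times the atmospheric stock, which is why the comparison is best read as applying to the higher estimates.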

Given the enormous carbon storage capacity of soils, it has been suggested that with appropriate changes in management practices, soil could represent a significant sink for atmospheric CO2. Managing agricultural soils to increase their organic carbon content can also improve soil health and productivity by adding essential nutrients and increasing water-holding capacity.

Management practices that can retain or increase the carbon content of soils include low-tillage or no-tillage farming, use of manures and compost, conversion of monoculture systems to diverse systems, crop rotations and winter cover crops, and establishing perennial vegetation on steep slopes. These practices primarily affect the amount of labile carbon in the soil, that is, carbon with a relatively short turnover time (less than 5 years). Labile carbon is released to the atmosphere as carbon dioxide through decomposition and microbial activity. The potential increase in storage through such methods is limited by soil type, which determines the carbon-holding capacity, and climate, which determines the rate of decomposition. Soil microbial activity increases with soil moisture and temperature, and increasing average temperatures due to climate change may be expected to increase the turnover rate of labile carbon in soils.

An alternative and promising approach, which is the subject of much current research, is the use of ‘biochar’ to increase the soil carbon sink. Biochar is a type of charcoal that results from heating organic materials such as crop residue, wood chips, municipal waste or manure in an oxygen-limited environment (a process known as ‘pyrolysis’). This can occur in a dedicated facility that harnesses the resultant ‘bioenergy’ to produce electricity, and the biochar residue can be returned to the soil. As a more generally applicable process, biochar can be produced through replacement of conventional slash and burn practices with ‘slash and char’, where complete burning is inhibited, for example by dampening the fire with earth. Biochar is chemically stable and the carbon can remain in the soil for hundreds to thousands of years.

The properties of biochar will differ depending on the source of material used in its production and the conditions of pyrolysis. For example, different feedstocks (manure, wood waste, etc) will result in different nutrient levels and chemical stability of the resulting biochar. Different pyrolysis temperatures will affect the capacity of the biochar to adsorb or mop up toxic substances and help to rehabilitate contaminated sites, or to increase the water holding capacity of the soil.

The net agronomic benefits of biochar are still being investigated. Biochar production removes agricultural waste that may otherwise be returned to the soil as labile carbon and returns it instead as biochar. The relative impacts of this process are not yet well understood, but it is thought that biochar has the potential to significantly increase crop yields and improve soil health.

Conclusion

It is unlikely, and indeed possibly inadvisable, that only one carbon sequestration solution be employed. A varied and tailored approach is likely to produce the most productive and most resilient overall solution. Highly technological solutions may be the most appropriate in some circumstances; these solutions, however, are still evolving and can be expensive, impractical, and prone to unforeseen consequences. Under-researched solutions, or those with unknown environmental impacts, should obviously be avoided when effective and safer options exist. The restoration and improvement of soil carbon in global agricultural regions presents a doubly positive solution: it has the potential both to reduce and store greenhouse gases and to improve our ability to feed the growing world population.

In Memoriam: Helmut Schmidt (1918 – 2015), German Chancellor 1974-82

Yesterday, on November 10th 2015, Germany mourned the loss of former Chancellor and – in later years – national icon Helmut Schmidt.

A lieutenant in World War II and an economist by trade, Helmut Schmidt first gained national fame as a local politician managing the 1962 flood in his native Hamburg, and rose to head the Ministries of Defence, Economics and Finance consecutively throughout the chancellorship of fellow Social Democrat Willy Brandt. Schmidt succeeded Brandt as Chancellor of West Germany in 1974 and led Germany, at the head of a coalition of Social Democrats (SPD) and Liberals (FDP), through the oil crises of the 1970s as well as the high time of left-wing terrorism in Germany. When his coalition with the liberal party finally broke in 1982, he briefly led a minority government, during which he also headed the Ministry of Foreign Affairs, making him the politician who has led the most federal ministries in the country's history.

Having coined the phrase “People with visions need to go see a doctor”, Schmidt was fiercely set in Realpolitik, making the best of what was possible at any given point in time. This led him to refuse to negotiate with terrorists in the 1970s and to support the deployment of intermediate-range nuclear weapons in Europe should the Soviet Union not disarm in the 1980s. Together with French President Valéry Giscard d’Estaing, he is counted among the fathers of the world economic summits; he was a signatory of the Helsinki Accords that created the CSCE, precursor to today’s OSCE, and he took the first steps towards creating a shared European currency.

Following his defeat in a vote of no confidence in 1982, Schmidt was succeeded by Christian Democratic Chancellor Helmut Kohl, who would go on to lead Germany, reunified in 1990, until 1998.

After his tenure as Chancellor, Mr. Schmidt became co-publisher of the weekly DIE ZEIT and a prolific political author, with more than 30 books and numerous other publications to his name, the last being “Was ich noch sagen wollte” (transl.: What I still wanted to say), published earlier this year.

Germany has lost a great political figure, the last and most visible of a generation of politicians who were born before the first German republic, who fought as soldiers in World War II, and who were so very important in rebuilding the country and democratizing it in the decades after the war.

Helmut Schmidt is often credited with ‘keeping the ship afloat through treacherous waters’, which is certainly true. Yet his stances and his fights for what he believed to be the right thing to do, be it a hard line against terrorism (leading to the death of Hanns Martin Schleyer, president of the employers’ association, who had been taken hostage in 1977) or the need to keep up the pressure in the arms race between the USA/NATO and the USSR in the 1980s, are equally impressive, and part of that legacy.

Helmut Schmidt at a party convention in Munich, 1982

To many, especially us younger folk, Helmut Schmidt has been and will continue to be a legendary man, a character from a time beyond our imagination. Ten years older than most of our grandparents, all we had to make a judgement from were indeed epic tales of a young republic: there was the legend of the man who, as part of the government of the city state of Hamburg, told off his superior, the mayor of the city, claiming ‘I got this’ while crisis-managing the flood of 1962, and who took command of local army forces without having even the slightest bit of authority to do so; there was the legend of the brilliant rhetorician, whose parliamentary battles with the opposition’s Franz Josef Strauß were oratory events; and then there was the legend of the man who could do no wrong: having led three different ministries before taking office, and having held numerous other political positions before even that, Schmidt seemed to be a master of all he attempted.

Steeped in knowledge and a classical education in morality and philosophy, an able pianist and chess player, loyal husband to his wife Hannelore for 68 years, and a fierce defender of peace, even if that meant arming oneself, Helmut Schmidt seemed to us the closest thing we had had in a long time to the all-capable, all-wise philosopher king of Plato’s lore.

We will never know how the world and the Germans would have remembered him if he only had lived for five years after leaving office. Having played a significant part in creating the Green movement in the late 70s/early 80s, leading his governments as a former officer would, shy of emotion and with no lack of directness, the grievances after a political career like his were many.

Alas, he didn’t die young. To us, he became the preeminent example of a public voice and wise elder statesman. Fluent in English and with long-standing political friendships across nations and continents (France’s Valéry Giscard d’Estaing was one of his closest allies, and the American Henry Kissinger went on record saying that he wished to pre-decease Helmut Schmidt because he would not want to live in a world without him), Schmidt was a European and an arbiter of international cooperation.

While keeping out of day-to-day politics, he said what no one else would dare to say, would explain the world to us, his children, and tell us what and whom to be aware of.

And we? We adored him, we admired him, we forgave him his failings faster than they could come to light, and we gave him every bit of attention he could bear, he who would always have one more last bit of wisdom he needed to impart to us.

For most of us, Helmut Schmidt was a foil, a foil to be inspired or at times annoyed by. Especially for us leftist people, he was a living statue that somehow hadn’t been made into a statue yet. We would always feel safest when he was in sight, but we didn’t have much urge to look and listen too closely. When we did though, he would fulfil every expectation of grandness and wisdom because he actually was that good.

Germany and the world have lost one of the wisest and most capable politicians of the last century, in office and beyond, and both are lesser places for it.

Helmut Schmidt 23.12.1918 – 10.11.2015

Thank you.

Helmut Schmidt and Jean-Claude Trichet discuss the European Crisis in 2013

Deutsche Welle showcases Schmidt’s life on the occasion of his 90th birthday