Genetically Modified Organisms

“The full climate change mitigation potential of industrial biotechnology ranges between 1 billion and 2.5 billion tCO2e per year by 2030, compared with a scenario in which no industrial biotechnology [is applied].”

(Source: WWF, Copenhagen (2009), Industrial Biotechnology: More than Green Fuel in a Dirty Economy?, p. 3. The report was written in collaboration with the enzyme producer Novozymes.)

Forty years ago, when genetic modification techniques were just starting to be used, the scientists involved realized that they were dealing with a potentially dangerous practice. At that time, Paul Berg, an American biochemist at Stanford, used tumor-inducing viruses to infect E. coli bacteria. There was fear that the bacteria would escape from the laboratory and be able to induce cancer in humans. The first conference at Asilomar, California, in 1973, addressed these potential safety issues. The conference led to a voluntary moratorium on genetic modification experiments with potential health hazards.[1] It also led to containment rules that still apply today, such as keeping laboratories at negative air pressure so that microorganisms cannot escape with the airflow. The moratorium attracted considerable media attention and led to the first public debates on GMOs. In these debates it was argued that the risk-analysis approach paid too little attention to the societal and ethical consequences of the new techniques.

In 1975, the second Asilomar conference was held. This time government officials, lawyers and the press were invited to debate alongside the scientists, in response to the criticism of the first meeting. Researchers from the Molecular Biology Department later indicated that they had been afraid to share their concerns with the public, fearing that their experiments might be forbidden by people who knew too little about their research. The Asilomar recommendations led to National Institutes of Health (NIH) guidelines, developed through open hearings and thereby involving a broader part of society than before.[2] These guidelines lifted the voluntary moratorium and allowed the researchers to continue their activities. Large field tests were prohibited, but the U.S. Congress imposed no further legislative restrictions.

After 1976, the picture changed significantly. Universities started to work together with commercial institutions, and economic values entered a debate that had previously been mainly scientific. Commercial interest in biotechnology kept growing as more and more industrial applications were found for the techniques. Together with these applications came new patenting regulations. The first living organism, an oil-eating bacterium, was patented in 1980. This was possible because the court ruled (in Diamond v. Chakrabarty) that a genetically modified organism is not a product of nature.[3] In that same year, the Bayh-Dole Act was adopted by the U.S. Congress, allowing universities to patent publicly funded research and sell these patents to industry.[4] The safety issue was gradually sidelined by the commercial possibilities, and the experimental limitations were assessed only in terms of direct physical risks. New debates arose questioning scientific integrity, the patentability of life and the role of the government.[5] Academic and hospital research ethics committees, comprising professionals in the life sciences, religion, law and philosophy, were centralized. National bioethics committees were also established to consider the ethical implications of biomedical research in general. Alongside these committees, regulations were put in place to manage the moral and ethical aspects of GM research.

Genetically modified crops started to enter the public sphere at the end of the 1980s. Following the deliberate release of agricultural GMOs, the debate began to focus on environmental concerns. Starting in the U.S., but quickly spreading to Europe and beyond, the restrictions placed upon GMOs were loosened; the crops began to be grown in agriculture and entered the supermarkets. In the U.S., society slowly started accepting the organisms: in 2010, 85% of the corn and more than 90% of the soya grown in the U.S. was GM.[6] An important difference between the U.S. and Europe is the scale of the farms. In the U.S., industrialised farms use vast amounts of land while wild nature is preserved in other parts of the country. In Europe, on the other hand, the relatively small farms are understood to be part of nature. This is one of the reasons why Europeans are more protective of their farmland.[7] Negative media attention led to several European bans on the growing and importation of GM plants, and consumers started to boycott products that contained genetically modified material.[8] Supermarkets refused to stock products with GM ingredients, and the food industry started to apply negative labelling, e.g. “This product contains no GMOs.” These strong anti-GMO sentiments evoked disputes among the regulators and the experts who were assessing the risks.[9] NGOs such as Greenpeace started to link the environmental risks of GMOs to the sustainability discourse.

From the 1990s, the societal controversy also started to focus on the welfare of animals, particularly in response to gene therapy experiments and the creation of transgenic animals (e.g. Herman the Bull, in Leiden, the Netherlands), which brought the question of the integrity of animals into view. Genetically modified animals were seen as unnatural, and the technique of genetic modification was claimed by some to be morally wrong.[10] The bioethics debate was further amplified by the birth of the first cloned mammal, Dolly the sheep, in 1996, and the corresponding questions about the possibility of human cloning. More and more societal and ethical concerns entered the debate, which became less a matter of objective risk analysis than it had been previously. This also led to codes of conduct for companies, studies of the ethical aspects of gene technology, and room for ethics in policy making.

While the debate around GMOs continued over time, it gradually evolved into a discussion between two polarized camps. One reason for this is that the mass media more easily pick up the most passionate arguments than a more nuanced but complex description.[11] With extensive media coverage, it became clear that strong metaphors (e.g. “Frankenstein food”) and powerful rhetoric were starting to dominate the public debate, leaving the less exciting rational arguments behind. As the science develops, new experiments and new data further complicate the debate. In the field of biotechnology, the complexities occur at different levels. It has become clear that not all information is stored in the genes, and epigenetics is being credited with a larger evolutionary contribution. There are also scientific and technological difficulties: for the layperson it is almost impossible to understand how all the techniques work and what the important issues are, and even at the highest scientific level there are still disputes about models.

The difficulty of modelling anything ‘bio-based’ makes it very hard to produce reliable predictions about future scenarios. There are so many variables that it is impossible to put them all into one model, and different models predict different scenarios. A multidisciplinary approach is necessary to take into account all of the factors – societal, economic and technical – that play a role in the implementation of the bio-based society (BBS). When the models are simplified, essential parts are lost. Although the general public and policy makers demand that the future be predictable, this is not in fact possible.[12] Scientific disputes provide the media with a means of popularizing a scientific story, but at the same time they add to the polarization of the societal debate.[13]

Another factor contributing to the polarization of the debate is the changing relationship between science and society. Society wants ever more in return for the money that is invested in scientific developments. Unconditional funding for basic science is disappearing, and governments demand socially relevant knowledge. However, relevance is not only very hard to define, it also changes over time. Scientific or academic knowledge is now often quantified by the number of publications attributable to a project, institution or individual.[14] The number of publications over time (also called “the flux”) is increasing and strongly influences the choices that are made in research. It can, for example, be a reason to publish at an early developmental stage, or to split articles that would otherwise have been published together into smaller pieces.


[1] Devos 2008, 30

[2] Devos 2008, 35

[3] Described in Mildred et al. 1999

[4] For more information, see Stevens 2004

[5] Devos 2008, 38

[6] See USDA 2011, Appendix 2

[7] Described by Lino and Birrer 2006

[8] See Hobbs and Plunkett 1999

[9] Devos 2008, 53

[10] De Vries 2006, 476

[11] Described in Mildred et al. 1999

[12] Described in Landeweerd, Surette and van Driel 2011

[13] Described in Maeseele 2008

[14] See Hessels, Van Lente and Smits 2008