Taming the Monster: The Cultural Domestication of New Technologies
|Article code||Publication year||English article||Persian translation||Word count|
|18033||2006||16-page PDF||Order||7455 words|
Publisher : Elsevier - Science Direct (الزویر - ساینس دایرکت)
Journal : Technology in Society, Volume 28, Issue 4, November 2006, Pages 489–504
Central to public discomfort about new technologies is the notion that they are unnatural. Experts often suppose that better knowledge of a technology and its risks would help overcome public aversion. This assumption turns out to be fairly fruitless, and often even increases social polarization. The pattern of diverging risk assessments about technology might be improved by a better understanding of the moral gut feelings at stake. However, current technology ethics does not seem equipped to elaborate theories that explain public discomfort. Either public fear is not taken seriously, or ethical-theoretical rationalizations of moral intuitions lead to unsatisfactory, naturalist constructions, such as the intrinsic value of nature. For a better understanding of current risk controversies, a detour is made to Mary Douglas's cultural anthropology of pre-modern ideas about danger. This offers some clarifying insights into modern perceptions of technological risks. Building on these anthropological observations, a so-called monster theory is sketched, which explains both the fascination with and the aversion towards new technology, while setting aside the 'naturalist' and 'nature-skeptic' explanations of technology ethics. Monster theory offers a point of departure for a new, pragmatic approach to controversies about new technology, an approach here named a pragmatist monster ethics. It tells us that, in order to domesticate our technological 'monsters', we have to reflect on and shift cultural categories as well as adapt the technologies themselves.
During the Christmas holidays of 2002, an American company called Clonaid claimed the birth of the first cloned human baby. Before any scientific proof of this media-provoking news was offered (in the end the baby never appeared), opinions exploded from newspapers, chat sites and broadcasting stations all over the world. Apart from skepticism about the news itself, the media coverage inflamed the ongoing controversy concerning cloning and 'designer babies'. While disapproval and disgust at the manufacture of babies dominated, together with calls for a ban, others continued to tell us that cloning holds great promise, particularly therapeutic cloning in the interests of medical science. The commotion about the alleged cloned baby is not a solitary case. Public and expert reactions to new, evocative technologies in fact show a steady and persistent historical pattern. Whether it is nuclear energy, plastics, steam engines, GM food, xenotransplantation or nanotechnology, time after time public discussion remains stuck in a groove. More exactly, public discussion is stuck in two worn-out grooves, one of salvation and fascination, the other of doom and abhorrence. Indeed, it seems that this 'utopia-dystopia syndrome' shapes initial public judgment. However, the syndrome does not appear in all cases of new technology. Useful innovations such as fiberglass cables, a new type of wheelchair or a technique for storing heat slipped into use without being exposed to suspicions of special, unknown risks or to wide-ranging forecasts about human welfare. But as soon as the cloned sheep Dolly was presented in 1997, opponents hastened to declare that fundamental, natural boundaries had been crossed, while proponents sketched the limitless frontiers that this kind of experiment on animals could open up. In this recurring pattern, two aspects catch the eye.
First, it seems that different risk perceptions in the technology debate are linked to different ways of appreciating the unnaturalness of technologies. The fact that technology oversteps natural boundaries is regarded as having either very positive or very negative value. Secondly, the controversy is often portrayed as a conflict between emotion and reason. In 2000, for example, Greenpeace provoked the Dutch public with large roadside billboards, suggesting that an American genetics company had posted them with the message (translated from Dutch): "Your lettuce stays fresh because we put rat genes in it. Enjoy your meal!" (Fig. 1). In a recent lecture to an audience of Shell managers, Rudy Kousbroek, a well-known Dutch writer, cynically criticized Greenpeace's campaign: "They even gave up trying to assert something sensible. They do nothing more than speculate on the public's ignorance, their only target being to frighten people. The tragic thing is that this emotional language without argument doesn't make the public and the media suspicious at all. It is alarming that the public does not automatically choose the side of those who appeal to verifiable facts and data". Thus, it seems, we should welcome the increase in official efforts to grapple with public polarization. At present, in fact, governmental bodies are bombarding us with attempts at steering, meant to raise the quality of public debate on technology. In the last few years these attempts have resulted in large-scale information campaigns and carefully orchestrated public discussions. But so far they have not been very successful in avoiding intense public disquiet about new technology. We have seen various examples of this in the Netherlands. In 2001 the national government launched a broad discussion on GM food, the 'Public Debate on Food and Genes', which included voices from many social groups.
When the Dutch Minister of Agriculture and Fisheries, Mr. Brinkhorst, announced the debate, he declared: "We should avoid thinking in fixed patterns and predictable positions." Instead, we needed "… an objectifying social debate, aiming at the development of knowledge, by using adequate debating methods". The organizing committee was aware of social polarization. With the disappointing experience of the Dutch 'Broad Social Debate' on nuclear energy in the mid-1980s still fresh in mind, the committee tried to avoid escalating emotions inside and outside the discussion rooms. Unhappily, despite these sincere intentions, these efforts also failed. Shortly after its start, most environmental NGOs withdrew from the round table, declaring that the steering committee's information about genetic food was not objective at all, that it had ruled out the most crucial questions, and that the whole enterprise had the character of a governmental information campaign rather than an open debate. Avoiding public polarization, then, might be much more complicated than the organizing committee had expected. The grooves and their persistence are the pitfall. It would seem that a more successful approach first of all requires a better understanding of this persistence, its causes and its underlying emotions. What is at stake in the public discomfort and the euphoria about new technology? What mechanisms are at work in giving sense to new technology? What can experts tell us about this problem? To start with the last question, I turn to two philosophers involved in the debate on biotechnology, both of whom have tried to explain public discomfort. Since their answers are quite unsatisfactory, in the rest of the article I sketch another approach to these questions, one that I have called the 'monster theory'.
Conclusion
As a result, this review of hybrids and cyborgs does not seem to add anything substantial to the monster concept or to the concept of pragmatist assimilation of monsters. Nevertheless, let me draw some conclusions on monster assimilation and its challenges for technology policy. First of all, monster theory offers an analytical instrument for studying and explaining risk controversies and their moral dilemmas, since it enables us to articulate the cultural dimension that accompanies strong intuitions. This analysis should be directed at making ambiguities explicit at the cultural level. In this way, monster theory contributes to descriptive ethics. Further, making cultural assumptions explicit might render the different risk repertoires of opposed views more accessible to one another. Monster ethics might even facilitate the anticipation of future monsters. Nowadays, ethicists of technology often begin their analysis and judgment only when moral dilemmas and social deadlocks have already presented themselves. In contrast, a more pragmatist approach enables us to take a more proactive stance towards world-shaping technology. The analysis of and reflection on cultural assumptions will help to anticipate moral controversies and to assimilate future monsters at an earlier stage. This kind of cultural analysis and anticipation is a necessary step towards a second, more vital opportunity for elaborating technology policy. Analysis of cultural categories will uncover opportunities for enlarging the margins for action. This may encourage activities of pragmatic mediation, aimed at developing interventions in deadlocked debates, so that we prevent the two historical grooves from attaining their full, fruitless depth. Intervention is possible at the level of cultural categories, by way of shifts in those categories, or by way of shaping new concepts for interpreting anew the phenomena experienced as monsters.
A pragmatist monster ethics means that we have to develop, renew, and differentiate our cultural categories as well as our technologies, so as to have them fit into a new order.