Review of Anthony Pym, Exploring Translation Theories, Routledge 2010.

by Brian Mossop, York University School of Translation

 

This is a coursebook on Western[1] translation theory, mostly since 1950. It is not a review of the entire field of Translation Studies and it does not set out to provide a who-said-what chronology (for which Pym refers readers to other publications). Instead, the book tells a sort of story, based on what might be called Pym’s own theory about translation theory, which he summarizes as follows in the Preface: “All the theories respond to the one central problem: translation can be defined by equivalence, but there are many reasons why equivalence is not a stable concept. So how can we think about translation beyond equivalence?”

 

Pym explicitly leaves out many areas of TS because they do not fit into this story; excluded, for example, are empirical studies (using corpora, think-aloud, eye-tracking and keystroke recording), gender studies and postcolonial studies. Of these areas he writes: “There is certainly a lot of theorizing, but most of the concepts come from other disciplines and are applied to translation” (p 86). This restriction is rather provocative, as is the idea that Western translation theory is a series of reactions to equivalence. But the provocation gives the book originality and provides real food for thought. Perhaps such thought will elicit alternative explorations by those who would prefer to tell a different story.

 

The author describes his book as being “for academic work at advanced levels” and as “accompanying some of the best introductory works in the field”. This is accurate: Exploring would be too difficult for the average undergraduate because it does not present each individual theory in sufficient detail. Indeed the book is a rather dense—sometimes too dense—thicket of ideas and moves rapidly from one topic to another.

 

Pym groups the various theories under six ‘paradigms’ as he calls them. The equivalence paradigm is covered in chapters 2 and 3, and the other five paradigms in chapters 4 to 8. Pym defines a paradigm as “a set of principles that underlie several theories (in the general sense outlined by [philosopher of science Thomas] Kuhn)” (p 3). Now on some interpretations of Kuhn, different paradigms are incommensurable—the concepts of one are not ‘translatable’ into the concepts of another. However, what Pym suggests is something milder: theorists from the different paradigms simply lack interest in each other’s work, which he deplores.

 

Each chapter contains a point-form summary at the beginning, another summary at the end, further readings, suggested projects and activities, and an amusingly titled section "Frequently Had Arguments", which lists objections to the paradigm discussed in that chapter, along with objections to the objections. The book also has a Facebook site for discussion, and there are further materials at the author’s Web site, including a list of errata and some notes for translators of the book (which clarify some passages that might have been more carefully written).

 

The following overview/commentary does not cover all the topics which Pym considers, and the space devoted to each chapter does not reflect its length or importance. Sometimes I summarize a large chunk of a chapter quite briefly, and sometimes I stop to insert my own commentary.

 

Chapter 2 begins the discussion of the equivalence paradigm. Someone who holds an equivalence theory believes that SL and TL expressions can have the same value in some respect such as form, function or reference. Pym suggests that this paradigm arose in the 1950s to explain the possibility of translation in reaction to the view that appeared to flow from structural linguistics, namely that pairs of words in different languages, such as French ‘mouton’ and English ‘sheep’, are simply not of equal value (the former means both the living animal and the meat taken from it, whereas the latter does not mean the meat), and therefore, translation is impossible.[2] The counterargument of equivalence theories is that the sameness of value does not apply to out-of-context items of the language system but to the way a word is being used at a given point in a particular text.

 

Pym makes the very interesting move of dividing equivalence theories into two kinds: theories of natural equivalence and theories of directional equivalence. In the first of these ‘sub-paradigms’, equivalents are seen as existing prior to the act of translation; they are discovered, not created, by the translator. So to translate the road sign ‘slow’ into French, one asks (according to Vinay & Darbelnet) what word is used in France to make drivers slow down, and one translates with that word (not the adjective ‘lent’ but rather the verb ‘ralentir’, slow down). Thus the source determines the translation.

 

Pym discusses at some length the various techniques for achieving equivalence described by Vinay & Darbelnet (transposition, modulation and so on). This is followed by brief discussions of Catford, Koller, Reiss and Seleskovitch. In the early days, discussion in this paradigm was limited to single words and short phrases, but natural equivalence thinking can also be seen in later discussions of equivalences between pragmatic discourse conventions or between modes of text organization. What ties all these writers together, despite their differences, is that they all see things in terms of equivalence to something or other in the source text; “There is no real consideration of translators having different aims (from the writer of the source)” (p 19).

 

In the section on criticisms of natural equivalence, Pym mentions that new information (that is, new to the TL-speaking society) cannot be natural; there will not be any already existing way of talking about the concepts in the source text if, for example, missionaries are introducing a new religion through translation. Another criticism, raised by Venuti, is that writing naturally in the TL promotes parochialism; readers in the TL-speaking society get the impression that the SL society is just like them.

 

The chapter concludes with Pym’s argument that the notion of pre-existing equivalence could only arise in the historical conditions of print culture and standard vernacular languages. He points out that before the Renaissance, different languages were not seen as having equal value. There was a hierarchy with several levels, ‘divine’ languages like Hebrew and Arabic at the top and local patois at the bottom. Translation was seen as a way of enriching a ‘lower’ language, which had no already available equivalents. Also, before printing, there were no stable texts to which the translation could be equivalent.

 

One thing I found strangely missing from this chapter was any mention of machine translation (though it comes up in suggested activity 4 at the end of the chapter). If one is talking about the source determining the translation, then surely the rule-based MT that was taking shape in the 1950s at the same time as equivalence theories appeared was a technological incarnation of this idea (just as translation memory can be seen as incarnating the Localization paradigm of Chapter 7). Nor is there any mention of that early translating machine, the bilingual dictionary, which in its shorter versions tempts translation students to think that if word x appears in the SL text, then they should write word y in the translation.

 

Chapter 3 concerns the sub-paradigm of directional equivalence. Natural equivalents are seen as existing prior to translation, so there is no directionality: slow can be translated as ralentir, and ralentir as slow. However, suppose I decide to translate ‘Eton’ into German as ‘englische Eliteschule’ (English elite school). It is highly unlikely that anyone translating in the opposite direction would render ‘englische Eliteschule’ as ‘Eton’. The equivalence goes one way only; it does not exist prior to translation.

 

Pym suggests that natural equivalence is actually a bit of an illusion. The archetypal natural equivalents—SL/TL pairs of technical terms—are often the result of fiats by terminology standardization committees. One could, he claims, probably find a social history behind any SL/TL ‘natural’ pair: behind the pair English ‘Friday’/Spanish ‘viernes’ lies the spread of the 7-day week, so there was a directionality from languages of the Middle East (where the notion of the week originated) to others. This claim of Pym’s does seem a bit exaggerated; it’s not obvious what historical process would lie behind pairs like water/agua or blood/sangre.

 

The idea underlying directional equivalence theories is that translators actively create equivalence (rather than finding it ready-made) by choosing an approach that is usually expressed in some version of the literal versus free dichotomy. So both a literal and a free translation of a passage can be seen as equivalent to it; the source does not determine the translation. Pym goes over several versions of the dichotomy, by Cicero, Schleiermacher, Nida, Levy, House, Nord and Newmark. He does however note that most of these writers state a definite preference. So Newmark often says that what he calls authoritative texts such as classic works of literature should be translated semantically (a version of literal) rather than communicatively (a version of free). And Nida of course famously advocates translating the Bible dynamically (a version of free) rather than formally (a version of literal).

 

Pym highlights the frequently recurring notion of a dichotomy: a translation must be either source-oriented (literal) or target-oriented (free). He points out that in practice, translators often decide to combine the two, some aspects of a text being handled more literally, others more freely.

 

Pym devotes a separate section to Gutt's relevance theory. He contrasts what Gutt calls direct and indirect translation, seeing them as yet another version of the literal versus free dichotomy. Suppose the source text reads "The back door is open" and suppose readers of the source text have access to the context "thieves can get in" but readers of the translation do not. A direct translation would be "The back door is open" (leaving readers to figure out the relevant context), whereas an indirect translation might be "We should close the back door" (which states a suggested action implied by the door being open when thieves are present). Pym says that there is a fundamental shift here from other versions of equivalence theory because Gutt is comparing the thought processes of readers of the source with the thought processes of readers of the translation.

 

Pym follows up on Gutt by giving his own view that equivalence is really a belief held by people who read translations. People who are reading something that is labelled a translation will presume that it is somehow or other equivalent to some text in another language. No linguistic comparison of individual passages from source and translation is needed to establish equivalence. Rather, as Pym has written elsewhere, professional translators simply need to avoid writing anything that might call the reader's presumption of equivalence into question, on pain of having their translations rejected.

 

This, I think, is why equivalence is such an important idea: it’s the theory of translation implicitly held by the average person, and thus the average paying client and the average reader of translations. Since readers and clients are key participants in translation, their beliefs should presumably be a part of translation theory, and thus a part of the mental framework of translators as they select their TL wordings.

 

Chapter 4 discusses the Purposes paradigm. These theories say that translations are determined by the role they will play on the target side. The purpose of the translation is seen as independent from the purpose of the source text. Translations may have the same purpose as the source, but this is seen as a special case. Where the purpose is not the same, there is no equivalence. If purpose determines the wording of the translation, then the translation can contain outright additions (things not even implicit in ST) and subtractions (so that meanings are not even left implicit in the translation). Pym reviews the theories of Nord, Vermeer, Holz-Mänttäri, and Hönig & Kussmaul.

 

This paradigm is often called functionalism but, Pym notes, that label is confusing since it is sometimes used for theories in the equivalence paradigm that start from the function of the source text rather than the function of the translation.

 

Once purposes come into view, theory expands beyond text to include the people who have the purposes: the commissioner, the translation company, subject experts, editors, final readers, and translators themselves. The translator is now seen as having responsibilities to people, and may have to deal with conflicts between the wishes of the various parties. Theories differ as to who decides the purpose: the commissioner or the translator. In general, the translator is seen as the ultimate decider, though he or she may to varying degrees feel bound by specifications provided by the commissioner.

 

It is often not clear in this paradigm whether theorists are talking about how translations ought to be produced or how they actually are produced. There is a heavy prescriptive and pedagogical element in these writings since they arose in connection with the huge expansion of translator training institutes in the German-speaking world in the 1980s. Pym interestingly suggests that the paradigm provided a theoretical basis for separating such institutes from traditional language departments: “As long as you are analyzing modes of equivalence to the source, you are doing linguistics of one kind or another. But if you have to choose between one purpose and another, linguistics will not be of much help to you. You are engaged in applied sociology, marketing, the ethics of communication…. The more radical versions of target-side functionalism justified the creation of a new academic discipline. They could remove translator training from the clutches of the more traditional language departments. Translation theory thus surreptitiously became a debate about academic power.” (p 49)

 

 

Chapter 5 concerns the Descriptions paradigm that appeared around 1980, at the same time as the Purposes paradigm. These theories start by observing the translation product in its target-culture setting and then attempt to explain why it is the way it is. The differences between a translation and its source are described in terms of various shifts, and the explanation for the shifts is then attempted in terms of such theoretical concepts as system, norm, universal and law. The paradigm thus goes beyond equivalence in that it does not care whether an item in the translation has equal value to some item in the source (Pym does not put it this way: he says that in this paradigm, all translations are automatically equivalent simply because they are thought to be translations in the target culture. This did not strike me as a very felicitous formulation, since ‘equivalent’ loses its meaning of ‘having equal value’ and simply becomes a synonym of ‘translation’.) Notable theorists discussed are Toury, Holmes, Even-Zohar, Lefevere, Shlesinger and Chesterman.

 

Some theories in this paradigm look at the position of translations within the literary system of the target culture, explaining the way a text is translated by whether translations are peripheral or central in that culture. Other theories look at norms, which are common language practices—the ways people in the target culture at a particular time are likely to write because that is what is expected. A further explanation of why translations are the way they are is that there are universal tendencies operating whenever someone translates, such as the tendency to use the more common words of the target language or the tendency to explicitate. Finally, there are ‘laws’ such as Toury’s law of interference, which says that interferences from the source text tend to be macrostructural (they concern things like text organization and paragraphing more than word and phrase level features) and that they are more tolerated when the source language/culture is highly prestigious and the target culture is not.

 

The Descriptions paradigm, as Pym admits, does not really fit his view of translation theory as arising ultimately from differences of opinion about how to translate (p 2). The descriptive theories seem to have arisen out of standard scientific curiosity: what happens when people translate and why? Pym seems to have an aversion (which many readers, like me, will not share) to a science-based concept of translation theory where the scholar is an outside observer and aims for objective knowledge and explanation, whether cognitive or sociological. An interesting fact about the Descriptions paradigm is that it originated in the study of literary texts and thus does not fit the stereotype, prevalent in the English-speaking world, which associates the study of commercial/technical/administrative translation with a scientific approach but the study of literary translation with a ‘humanities’ approach.

 

Chapter 6 concerns the Uncertainty paradigm, and it begins with the announcement that it “deals with some theories that can be difficult to understand”. Indeed it does! Insofar as one can generalize, Uncertainty theories seem to challenge equivalence by asserting that there is no way of knowing which of several incompatible translations of a source passage is correct (corresponds to what the source writer meant). All that is possible is for each translator to interpret the source from within his/her own mental frame. Another common view in this paradigm is that form and content are inextricably linked, so that content transfer (to another linguistic form) is not really possible.

 

The chapter differs from the others in that it is mostly concerned with the writings of philosophers. Some of the writers considered are post-1950 (Derrida, Quine, Heidegger) but most are earlier (Peirce) or much earlier (Locke, Augustine, Plato). Also considered are some literary figures such as Walter Benjamin and Umberto Eco, but only three TS writers (Berman, Arrojo and Kiraly). In short, a potpourri of thinkers who do not form any recognizable school of thought inside or outside TS. Some of them have a little to say about translation; most do not. Pym discusses them because they say things related to the uncertainty of meaning in all communication using language, but some readers may find themselves wondering why Uncertainty counts as a paradigm of the theory of translation.

 

Ideas such as inherent uncertainty of meaning and inseparability of form and content are unlikely to have much resonance among translators. For literary translators, inseparability of form and content typically leads to a recommendation for translations that are very literal and therefore hard to read and therefore not very attractive to publishers. Meanwhile non-literary translators, i.e. the overwhelming majority of the world’s translators, will find that the examples used in this paradigm are invariably literary (though in the broad sense that includes humanities texts generally), and that several of the theorists look down their noses at mere ‘business’ texts (Heidegger, Schleiermacher).

 

Now, non-literary translators are thoroughly familiar with uncertainty in the sense that, because they are often working in fields in which they are not experts, or because the source texts are often very badly written, meaning is frequently unclear. On the other hand, the solutions are well known: research or consultation with experts, including the author. In short, unlike in the Uncertainty paradigm, uncertainty about source-text meaning is seen pragmatically: either you resolve it or you don’t, in which case you fudge, guess or write a note to the client. In addition, translators who have had their work revised will hardly be surprised at the idea of discovering interpretations they had not thought of, or disagreeing about interpretation with the reviser. I imagine most practicing translators would simply say: yes, of course there’s some unresolvable uncertainty, but so what? Who ever said communication was perfect? Communication, whether using one or two languages, is a practical activity. There’s no need for perfect transference of meanings from one mind to another; something less than that will do (even though we might have to claim otherwise when advertising our services).

 

In this connection, Pym rightly says that “the many institutions where translators are trained have tended to take their lead from Europe and Canada, where translation is necessary to the workings of societies and indeterminacy is not especially what those societies want to know about” (p 133). He points out that uncertainty, especially in the form of Derrida-inspired deconstructive literary theory, has radiated out from the United States, a country which has never been much concerned with translator training and never been a major centre of TS.

 

The Uncertainty paradigm is valuable as a corrective to the false code theory of language, which allows meaning to move from mind to mind without any inferencing by the receiver. But how many people actually believe the code theory in practice? After all, everyone has had the experience of being misunderstood and of misunderstanding others. In short, the challenge to equivalence by the Uncertainty paradigm is surely a bit of a red herring: one doubts that any of the equivalence theorists believed in perfect transfer of meaning; at most they perhaps believed in the transfer of relevant meaning, e.g. the legal content of a law text, the scientific content of a science text, and so on. Pym himself points out that uncertainty is confined to certain passages of a text; there is no question of having to worry about whether any transfer at all is taking place.

 

Some of the writers reviewed revel in uncertainty; others despair at it; and yet others seek a way out of it. Pym seems to be torn between, on the one hand, an insistence that translation theorists should pay more attention than they do to uncertainty and, on the other hand, the So What? response which (tellingly!) he keeps coming back to throughout the chapter.

 

Chapter 7 is about the Localization paradigm. The only TS theorist in this paradigm appears to be Pym himself (see his book The Moving Text: Localization, translation and distribution, Benjamins 2004). He has drawn out the issues implicit in the practices of those who translate software and Web sites, showing that there is much more involved than simple purpose-driven adaptation to the various countries where software will be used or Web sites read. Specifically, there are two important aspects to the localization process: internationalization and non-linear text production.

 

Internationalization is the removal of culture-specific features from the source text. The result is an artificial text which has no readership other than the ‘localizers’ who then create versions in various languages starting from the international version. Pym points out that there are precedents for translations that are produced in two steps: relay translation from language X to language Y (often English) to language Z; Bible translation from the Greek and Hebrew to English glosses and thence to various other languages; and pre-translation editing to remove ambiguities. So the Localization paradigm provides a theoretical account of all these activities, not just of software/Web site translation.

 

As for non-linear text production, this arises from the fact that translating software, Help files and Web sites consists in translating additions to and modifications of older versions. For example, a sentence from an old Help file may be reused completely, or with a few words changed. As a result, translators are no longer working on a linear text but rather on isolated chunks of text where changes have been made. Translation memory software can produce a sort of pre-translation by bringing to the screen the TL versions of all the completely re-used sentences of the source text plus so-called fuzzy matches: TL sentences that were previously used to translate SL sentences that are similar to the SL sentence now being translated. The translator selects a fuzzy match and adjusts it. Even with completely new material, where no fuzzy match can be found, the translation memory tool will have a terminology function that imports TL terms into the pre-translation. The pre-translation thus consists of a mixture of SL and TL chunks, the latter coming from a variety of previous translation jobs done by a variety of translators. One or several translators then process the fuzzy matches and additions, and none of them may ever deal with the text as a whole; any editing of the final outcome may well be done by a non-translator.

 

Pym likens the pre-translations to the internationalized versions of texts: once again, translation is in two steps, this time from source to pre-translation to translation. He then asks what kind of equivalence is involved. The answer is, first, that the automated recycling of old translations takes us back to the small unit equivalences of the 1950s rather than text level equivalences, but more importantly he says that equivalence here is neither natural nor directional but artificial—words and phrases are replaced with bits of language that have been recycled from prior texts, prior situations. In The Moving Text, he points out that equivalence in software/Web page translation differs from (what he now calls) directional equivalence in that the background to the SL and TL texts is not separate SL and TL cultures but rather a single supranational ‘Microsoft’ techno-culture. Something similar has been noted by translators who work in large bureaucracies, where the relevant culture for internally circulating documents is a single ‘UN culture’ or ‘EU culture’.

 

Chapter 8 is about the Cultural Translation paradigm. Just as the Uncertainty paradigm originated outside TS in philosophy, so did this paradigm originate elsewhere, in ethnography (the problem of how to describe—‘translate’—a colonial people for readers in the home country). More recently, this paradigm comes from English/Cultural Studies, where theorists consider what happens when people (rather than texts) migrate. Will they retain their source language/culture or assimilate to the new one (echoes of source- versus target-orientation)? In these theories, the answer is that there is often resistance to assimilation and a hybrid state is achieved. The theories move beyond equivalence because they concern a world that is no longer seen as consisting of separate watertight linguocultures between which equivalences might be established (people in country x with culture x speaking language x as opposed to people in country y with culture y speaking language y).

 

The main theoreticians discussed are, as in Chapter 6, not TS scholars: Bhabha, Spivak and Latour. It may seem that writers in this paradigm are simply using the word ‘translation’ metaphorically, since actual texts seem to have been left behind. However Pym sees the possibility of applying some of the ideas to translation, such as looking at things from the point of view of the translator (focus on people rather than texts) and seeing translation as speaking on behalf of others. Also, cultural translation theories encourage us to think of translation as a special case of cross-cultural communication, and Pym relates this to a number of earlier writers who have also suggested seeing translation as a case of something broader, notably Jakobson and Even-Zohar. 

 

To conclude, what can we make of Pym’s story about translation theory? I think Pym makes a very strong case for taking another look at the concept of equivalence; it is by no means a mere historical curiosity. On the other hand, two of the five alternative paradigms (Uncertainty and Cultural Translation) cannot be said to have arisen as responses to equivalence, though after the fact, one can certainly see them in relation to it. Also, is there not some contradiction in saying, at the outset of the book, that the theorizing seen in gender studies or cognitive studies comes from outside translation (and that therefore these studies are excluded from translation theory) but then having two paradigms that arose within philosophy and English/Cultural Studies and have not really seen independent theoretical development within TS?  

 

I would also have liked to see a few more observations or even speculations on the non-intellectual sources of particular translation theories. Theory is not just a response to previous theory but also to changes in translation practice (technological developments; the organization of the translation profession and industry and of translator training), changes in the role of translation in the theorist’s society, and changes in the world more generally. These external sources do come up, very briefly in Chapters 4 and 6, and extensively in Chapter 7, but what, for example, were the non-intellectual sources of the Descriptions paradigm, and of the Equivalence paradigm itself[3]?

 

At the outset of this review, I mentioned the possibility of alternative stories about translation theory. What might these be? One possibility would be a story which is not restricted to written translation but includes spoken, signed and mixed-mode translating. Another might see the past half century of thinking about translation in terms of a shift from focusing on text to focusing on translators (following up on Pym’s point that the Purposes paradigm expanded theory to include the people who have the purposes). A third might see theoretical developments as a history of reactions to the 1950s idea that there could be a science of translating: originally this meant that the actual practice of translating, by humans and by machines, could somehow be science-based, i.e. linguistics-based; then with the advent of the Descriptions paradigm, and of observational studies of simultaneous interpreting, the idea appeared of hypothesis-testing against translational data; meanwhile, other theorists preferred to study translating within non-scientific or even anti-scientific frameworks inherited from philosophy, history and literature.



[1] As usual, Western does not include Russian, presumably because like most Western theorists, Pym does not read Russian, and the works in question have not been translated into a West European language. Pym refers the reader to Fawcett’s Translation and Language (St Jerome 1997), which looks at Shveitser, Retsker and Komissarov, though it does not cover the remarkable 1953 introduction to the theory of translation by Fyodorov.

[2] Linguistics textbooks tend not to discuss translation, but when they do, the discussion is invariably in connection with the notion of semantic fields—how different languages divide up a given field of meaning differently so that, for example, there will be no 1-1 correspondence between the colour words of two languages.

[3] The author does provide more on non-intellectual sources in a 2010 paper “Translation Theory as Historical Problem-Solving”, which can be found on his Web site usuaris.tinet.cat/apym/.