Should China go Roman?

Scripts and Literacy

March 15, 1996

This volume of essays is the outgrowth of a conference held in Toronto in 1988; its stated purpose was to examine the relation (if any) between scripts and literacy - specifically, the effects of different orthographic systems on learning to read, word recognition, literacy levels in a given culture, and cognitive process. Of course, before any meaningful research can be done on these problems, there must be a workable typology of the world's scripts, both East and West, and here lies the crux of the problem.

The editors, and many of the contributors, have accepted the conventional distinction between logographic systems, in which a sign represents primarily the meaning of a word or morpheme, and only secondarily its pronunciation; and phonetic systems, in which a sign represents primarily a sound unit. According to this orthodox view, derived in part from the late Ignace Gelb's 1952 Study of Writing, the ancient scripts of Egypt, Mesopotamia, China, and Mesoamerica were logographic (and the Chinese writing system remains so), while those systems that evolved from the Phoenician script of the 16th century bc are phonetic (such as the Roman alphabet).

In her essay, Albertine Gaur assures us that Chinese "is still basically a word, or better, a concept script with all the disadvantages and advantages such a system entails . . . The advantages are that as a concept script Chinese does not depend on the spoken word; it can be read without regard to, or even knowledge of, the spoken language." Such a dichotomy of script type formed the basis for various experiments on the learning process among Chinese, Japanese and Korean children on the one hand, and American children on the other. In the course of this research, cited by a number of the essayists, it was even proposed that information derived from "logographs" was processed in the brain's right hemisphere, while phonetic signs (whether alphabetic or syllabic) were processed in the left.

A somewhat more sophisticated typology was proposed by the linguist Geoffrey Sampson in 1986, namely, that all known scripts are either semasiographic, consisting of signs with no known linguistic referents, or glottographic, in which signs are related to specific languages; in the latter category he placed logographic systems (which include Chinese) and phonetic ones; Japanese, with its Chinese-derived kanji characters and phonetic kana syllabary, combines logographic and phonetic features.

Now enter J. Marshall Unger and John DeFrancis in this volume, with a devastating essay on Sampson's typology, which not only throws into doubt the status of semasiography as writing, but shows that the much-vaunted contrast between the supposedly "logographic" Chinese and the phonetic scripts of the West does not really exist, or is trivial.

In their view (expanded by DeFrancis in 1989 in his Visible Speech), the Chinese system is basically a very large syllabary in which the vast majority of "characters" consist of a phonetic-syllabic sign compounded with a semantic indicator; as they point out, if this were a truly "logographic" script, it would be as impossible for a Chinese child to learn as the numbers in a bulky telephone book. In other words, all known scripts, whether ancient or modern, are to a large extent phonetic. If the position taken by Unger and DeFrancis is correct, then one would not expect any profound differences to show up - other than cultural ones unconnected with orthography - in the ease or difficulty of learning to read and write.

My interpretation of these essays (which also include some not very relevant studies of missionary-devised syllabaries among the native North Americans) is that this is the case. Some children in China, Japan and the United States fall behind others in acquiring a reading and writing knowledge of their script, while others are far more proficient. Even with children who are learning to read and write in two scripts (English, with its left-right direction, and Hebrew, read from right to left), those who are good at one are also good at the other - a clear indication to Esther Geva that factors other than script type are operating here. And what about level of literacy? It is generally admitted that Japanese is one of the most complex and "difficult" scripts in existence, yet Japan has one of the world's highest rates of literacy; while Iraq, with its phonetic Arabic script, has one of the lowest. Script type has little or nothing to do with who can or cannot read and write in a society, pace Jack Goody and the anthropologists.

Finally, we find out in Rumjahn Hoosain's contribution that it is now realised that so-called "logographs" and alphabetic signs are both processed in the same hemisphere (the left) - as one would expect if both had a phonetic basis.

What does all this mean for those idealists who wish to reform supposedly backward systems of writing in the name of progress and education? In my opinion, it is all a waste of time, except in those rare cases where a script has been imposed on a language for which it is badly suited. A good example here would be Turkish, with its all-important vowel harmony; the Arabic script of the Ottomans had no way of taking this into account, and Atatürk's alphabet reform of 1928 was completely justified. Yet this is the exception to the rule. Those seeking a solution to low literacy rates by "improving" a script (such as the claimed "simplification" of Chinese characters by the PRC) or even substituting a new one (ie romanising the Chinese system in toto) would be better advised to look elsewhere in the culture, where the real problems lie.

Michael D. Coe is the author of Breaking the Maya Code.

Scripts and Literacy: Reading and Learning to Read Alphabets, Syllabaries and Characters

Editors - Insup Taylor and David R. Olson
ISBN - 0 7923 2912 0
Publisher - Kluwer Academic
Price - £94.00
Pages - 375
