The saga of the multilingual web


By Marie Lebert, 23 July 2019.

THE BOOK IN THE COMMUNITY TEXTS OF THE INTERNET ARCHIVE

After the invention of the web in 1990, internet users who were non-native English speakers reached 5 percent in 1994, 20 percent in 1998, 50 percent in 2000 and 75 percent in 2015. Many people helped promote their own language and culture and other languages and cultures — often in their free time and often using English as a lingua franca — for the web to become truly multilingual. This book, based on many interviews, is a tribute to their hard work and their dedication.


[Web version]
[French version] [+ web version]
[Spanish version] [+ web version]


* Overview
* About the internet
* About encoding
* About internationalization
* About multilingualism
* About website localization
* About remote collaboration
* About e-texts
* About e-books
* About the press
* About a shift in jobs
* About copyright
* About creative copyright
* About bookstores
* About e-bookstores
* About authors
* About best-sellers
* About libraries
* About librarians
* About digital libraries
* About treasures of the past
* About union catalogues
* About linguistic resources
* About dictionaries
* About encyclopedias
* About journals
* About resources for teaching
* About resources for translators
* About terminology databases
* About machine translation
* About computer-assisted translation
* About free machine translation services
* About a catalogue of all living languages
* About minority languages
* About endangered languages
* Some questions
* Timeline


Overview

After the invention of the web in 1990, internet users who were non-native English speakers reached 5 percent in 1994, 20 percent in 1998, 50 percent in 2000 and 75 percent in 2015.

Brian King, director of the Worldwide Language Institute (WWLI) in Europe, brought up the concept of “linguistic democracy” in September 1998 in an email interview: “Whereas ‘mother-tongue education’ was deemed a human right for every child in the world by a UNESCO report in the early 1950s, ‘mother-tongue surfing’ may very well be the Information Age equivalent. If the internet is to truly become the Global Network that it is promoted as being, then all users, regardless of language background, should have access to it. To keep the internet as the preserve of those who, by historical accident, practical necessity, or political privilege, happen to know English, is unfair to those who don’t.”

Maria Victoria Marinetti, a Spanish-language teacher and translator, wrote in August 1999: “It is very important to be able to communicate in various languages. I would even say this is mandatory, because the information given on the internet is meant for the whole world, so why wouldn’t we get this information in our language or in the language we wish? Worldwide information, but no broad choice for languages, this would be quite a contradiction, wouldn’t it?”

Internet users living outside the United States reached 50 percent in summer 1999. Jean-Pierre Cloutier, editor of Chroniques de Cybérie, a weekly French-language online report of internet news, wrote in August 1999: “We passed a milestone this summer. Now more than half the users of the internet live outside the United States. Next year, more than half of all users will be non-English-speaking, compared with only 5 percent five years ago. Isn’t that great?”

The number of non-native English speakers did reach 50 percent in summer 2000, 52.5 percent in summer 2001, 57 percent in December 2001, 59.8 percent in April 2002 and 64.4 percent in September 2003, according to the marketing consultancy Global Reach.

Fifteen years after the invention of the web, the monthly Wired stated in August 2005 that “less than half of the web is commercial, and the other half is run by passion.” According to the French daily Le Monde on 19 August 2005, “the three powers of the internet — ubiquity, variety and interactivity — make its potential use quasi infinite.”

Many people helped promote their own language and culture and other languages and cultures — often in their free time and often using English as a lingua franca — for the web to become truly multilingual. This book, based on many interviews, is a tribute to their hard work and their dedication.


About the internet

Eduard Hovy, head of the Natural Language Group at USC/ISI (University of Southern California / Information Sciences Institute), wrote in August 1998 in an email interview: “The internet is, as I see it, a fantastic gift to humanity. It is, as one of my graduate students recently said, the next step in the evolution of information access. A long time ago, information was transmitted orally only; you had to be face-to-face with the speaker. With the invention of writing, the time barrier broke down — you can still read Seneca and Moses. With the invention of the printing press, the access barrier was overcome — now anyone with money to buy a book can read Seneca and Moses. And today, information access becomes almost instantaneous, globally; you can read Seneca and Moses from your computer, without even knowing who they are or how to find out what they wrote; simply open AltaVista and search for ‘Seneca’. This is a phenomenal leap in the development of connections between people and cultures.”

Henri “Henk” Slettenhaar had a long career as a communication systems specialist in Geneva, Switzerland, and in Silicon Valley, California. He was fluent in three languages. He spent his childhood in Holland, taught his courses in English, and learned French while living in neighbouring France. He joined CERN (the European Organization for Nuclear Research) in Geneva in 1958 to work on its first digital computer and its first digital networks. He joined SLAC (Stanford Linear Accelerator Center) to build a film digitizer in 1966 and a monitoring system in 1983. He settled back in Geneva to teach communication technology at Webster University for 25 years, and ran its Telecom Management Program from 2000 onwards.

He wrote in December 1998: “I can’t imagine my professional life without the internet. Most of my communication is now via email. I have been using email for the last twenty years, most of that time to keep in touch with colleagues in a very narrow field. Since the explosion of the internet, and especially the invention of the web, I communicate mainly by email. Most of my presentations are now on the web and the courses I teach are all web-extended. All the details of my Silicon Valley tours are on the web. Without the internet we wouldn’t be able to function. And I use the internet as a giant database. I can find information today with the click of a mouse.”

“I also see multilingualism as a very important issue. Local communities that are on the web should principally use the local language for their information. If they want to present it to the world community as well, it should be in English too. I see a real need for bilingual websites. I am delighted there are so many offerings in the original language now. I much prefer to read the original with difficulty than getting a bad translation.”

He added in August 1999: “There are two main categories of websites in my opinion. The first one is the global outreach for business and information. Here the language is definitely English first, with local versions where appropriate. The second one is local information of all kinds in the most remote places. If the information is meant for people of an ethnic and/or language group, it should be in that language first, with perhaps a summary in English.”

According to him, 2000 was marked by “the explosion of mobile technology. The mobile phone has become for many people, including me, the personal communicator which allows you to be anywhere anytime and still be reachable. But the mobile internet is still a dream. The new services on mobile (GSM) phones are extremely primitive and expensive (WAP = Wait and Pay). Multilingualism has expanded greatly. Many e-commerce websites are multilingual now, and there are companies that sell products which make localization possible.”

He wrote in July 2001: “I am experiencing a tremendous change with having a ‘broadband’ connection at home. To be connected at all times is so completely different from dial-up. I now receive email as soon as it arrives, I can listen to my favorite radio stations wherever they are. I can listen to the news when I want to. Get the music I like all the time. The only thing which is missing is good quality real time video. The bandwidth is too low for that. I now have a wired and a wireless LAN [Local Area Network] in my home. I can use my laptop anywhere in the house and outside, even at my neighbours’ house and still being connected. With the same technology I am now able to use my wireless LAN card in my computer when I travel. For instance, during my recent visit to Stockholm, there was connectivity in the hotel, the conference center, the airport and even in the Irish pub!”

Pierre Schweitzer, designer of the @folio prototype, a mobile device for texts, wrote in December 2006: “The luck we all have is to live here and now through this fantastic change. When I was born in 1963, computers didn’t have much memory. Today, my music player could hold billions of pages, a true local library. Tomorrow, by the combined effect of Moore’s Law and the ubiquity of networks, we will have instant access to works and knowledge. We won’t be much interested any more in which device to store information. We will be interested in handy functions and beautiful objects.”

According to Jean-Paul, a hypermedia author interviewed in January 2007: “I feel that we are experiencing a ‘floating’ period between the heroic ages, when we were moving forward while waiting for the technology to catch up, and the future, when high-speed broadband will unleash forces that are just beginning to move, for now only in games.”

The internet of the future could be the “pervasive” network described in 2007 by Rafi Haladjian, founder of the internet provider Ozone, on its website: “The new wave would affect the physical world, our real environment, our daily life in every moment. We will not access the network any more, we will live in it. The future components of this network (wired parts, non-wired parts, operators) will be transparent to the final user. The network will always be open, providing a permanent connection anywhere. It will also be agnostic in terms of applications as a network based on the internet protocols themselves.”


About encoding

Published in 1963 by the American Standards Association (which later became ANSI, the American National Standards Institute), ASCII (American Standard Code for Information Interchange) is a 7-bit coded character set for information interchange in English (and Latin). Also named Plain Vanilla ASCII, it is a set of 128 characters, 95 of which are printable unaccented characters (A-Z, a-z, numbers, punctuation and basic symbols).

With computer technology spreading outside North America, the accented characters of a few other languages were included in 8-bit variants of ASCII — also called extended ASCII, ISO-8859 or ISO-Latin — that provided sets of 256 characters, for example ISO 8859-1 (ISO-Latin-1) for French, German and Spanish. But these variants quickly became difficult to handle because of data corruption and conversion issues.

Brian King, director of the Worldwide Language Institute (WWLI), wrote in September 1998 in an email interview: “Computer technology has traditionally been the sole domain of a ‘techie’ elite, fluent in both complex programming languages and in English — the universal language of science and technology. Computers were never designed to handle writing systems that could not be translated into ASCII. There was not much room for anything other than the 26 letters of the English alphabet in a coding system that originally couldn’t even recognize acute accents and umlauts — not to mention non-alphabetic systems like Chinese. But tradition has been turned upside down. Technology has been popularized. GUIs like Windows and Macintosh have hastened the process (and indeed it is no secret that it was Microsoft’s marketing strategy to use their operating system to make computers easy to use for the average person). These days this ease of use has spread beyond the PC to the virtual, networked space of the internet, so that now non-programmers can even insert Java applets into their web pages without understanding a single line of code.”

“An extension of (local) popularization is the export of information technology around the world. Popularization has now occurred on a global scale and English is no longer necessarily the lingua franca of the user. Perhaps there is no true lingua franca, but only the individual languages of the users. One thing is certain — it is no longer necessary to understand English to use a computer, nor is it necessary to have a degree in computer science. A pull from non-English-speaking computer users and a push from technology companies competing for global markets has made localization a fast growing area in software and hardware development. This development has not been as fast as it could have been. The first step was for ASCII to become extended ASCII. This meant that computers could begin to start recognizing the accents and symbols used in variants of the English alphabet — mostly used by European languages. But only one language could be displayed on a page at a time.”

“The most recent development [in 1998] is Unicode. Although still evolving and only just being incorporated into the latest software, this new coding system translates each character into 16 bits. Whereas 8-bit extended ASCII could only handle a maximum of 256 characters, Unicode can handle over 65,000 unique characters and therefore potentially accommodate all of the world’s writing systems on the computer. So now the tools are more or less in place. They are still not perfect, but at last we can surf the web in Chinese, Japanese, Korean, and numerous other languages that don’t use the Western alphabet. As the internet spreads to parts of the world where English is rarely used — such as China, for example, it is natural that Chinese, and not English, will be the preferred choice for interacting with it. For the majority of the users in China, their mother tongue will be the only choice.”

First published in 1991, Unicode “provides a unique number for every character, no matter what the platform, no matter what the program, no matter what the language.” This platform-independent encoding allows the processing, storage and interchange of text data in any language. Unicode is maintained by the Unicode Consortium, and is a component of the specifications of the World Wide Web Consortium (W3C). Its encoding forms are UTF-8, UTF-16 and UTF-32 (UTF: Unicode Transformation Format). UTF-8 overtook ASCII (created in 1963) in December 2007 as the most used character encoding on the web.
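
The progression from 7-bit ASCII to 8-bit extended ASCII to Unicode can be made concrete with a short sketch. The Python snippet below is an illustration added here, not part of the original standards or of this book’s sources; the sample string is arbitrary.

```python
# Encoding one accented string under the three systems described above.

text = "déjà vu"  # arbitrary sample with accented Latin characters

# Plain Vanilla ASCII (7 bits, 128 characters) cannot represent the accents.
try:
    text.encode("ascii")
except UnicodeEncodeError as error:
    print("ASCII fails:", error)

# ISO 8859-1 (ISO-Latin-1, 8 bits, 256 characters) fits each accented
# character into a single byte, but only covers a Western European repertoire.
latin1_bytes = text.encode("iso-8859-1")
print("ISO 8859-1:", latin1_bytes)    # b'd\xe9j\xe0 vu'

# UTF-8, one of Unicode's encoding forms, can represent any writing system
# while remaining compatible with plain ASCII for unaccented characters.
utf8_bytes = text.encode("utf-8")
print("UTF-8:", utf8_bytes)           # b'd\xc3\xa9j\xc3\xa0 vu'

# Reading UTF-8 bytes with the wrong (Latin-1) table shows the kind of
# data corruption the 8-bit variants were prone to: "dÃ©jÃ  vu".
print("Corrupted:", utf8_bytes.decode("iso-8859-1"))
```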


About internationalization

Babel was a joint project created in 1997 by the Internet Society (ISOC) and the language software company Alis Technologies to contribute to the internationalization of the internet. “Towards communicating on internet in any language…” was the home page’s subtitle. Available in seven languages (English, French, German, Italian, Portuguese, Spanish, Swedish), Babel’s website offered a typographical and linguistic glossary for each language. A dedicated web page named “The Internet and Multilingualism” explained how to develop a multilingual website, and how to encode the world’s writing systems.

The website noted, for example, that only 15 percent of Europe’s half a billion inhabitants spoke English as a first language, 28 percent didn’t speak English at all, and 32 percent could use the web in English.

Babel ran the first major study on the distribution of languages on the web. The results were published in June 1997 on a web page named “The Web Languages Hit Parade”, available in seven languages too. English (82.3 percent) was followed by German (4 percent), Japanese (1.6 percent), French (1.5 percent), Spanish (1.1 percent), Swedish (1.1 percent), and Italian (1 percent).

According to Global Reach, the fastest growing language groups creating new websites in 1998 were Spanish (22.4 percent), Japanese (12.3 percent), German (14 percent), and French (10 percent).

Randy Hobler, a marketing consultant for translation software and services, wrote in September 1998: “Because the internet has no national boundaries, the organization of users is bounded by other criteria driven by the medium itself. In terms of multilingualism, you have virtual communities, for example, of what I call ‘language nations’ — all those people on the internet wherever they may be, for whom a given language is their native language. Thus, the Spanish Language nation includes not only Spanish and Latin American users, but millions of Hispanic users in the U.S., as well as odd places like Spanish-speaking Morocco. (…) 85 percent of the content of the web is in English and going down. This trend is driven not only by more websites and users in non-English-speaking countries, but by increasing localization of company and organization sites, and increasing use of machine translation to/from various languages to translate websites.”

Yoshi Mikami, a Japanese computer scientist, wrote “The Multilingual Web Guide” (with his colleagues Kenji Sekine and Nobutoshi Kohara) on viewing, understanding and creating multilingual web pages. The book was published in Japanese in August 1997 by O’Reilly Japan, before being translated into English, French and German the following year.

Yoshi Mikami explained in December 1998 in an email interview: “My native tongue is Japanese. Because I had my graduate education in the U.S. and worked in the computer business, I became bilingual in Japanese and American English. I was always interested in languages and different cultures, so I learned some Russian, French and Chinese along the way. In late 1995, I created on the web ‘The Languages of the World by Computers and the Internet’ and tried to summarize there the brief history, linguistic and phonetic features, writing system and computer processing aspects for each of the six major languages of the world, in English and Japanese. As I gained more experience, I invited my two associates to help me write a book on viewing, understanding and creating multilingual web pages, which was published in August 1997 as ‘The Multilingual Web Guide’, in a Japanese edition, the world’s first book on such a subject.”

Yoshi Mikami added: “Thousands of years ago, in Egypt, China and elsewhere, people were more concerned about communicating their laws and thoughts not in just one language, but in several. In our modern world, most nation states have each adopted one language for their own use. I predict greater use of different languages and multilingual pages on the internet, not a simple gravitation to American English, and also more creative use of multilingual computer translation.”

The World Wide Web Consortium (W3C) created the Internationalization / Localization section of its website in 1998, with the protocols needed to create a bilingual or plurilingual website (HTML, base character set, new tags and attributes, HTTP, language negotiation, URLs, and other identifiers including non-ASCII characters).
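
As an illustration of the language negotiation mentioned above, here is a minimal Python sketch, added here and not taken from the W3C material, of how a server might pick the best available translation of a page from the browser’s Accept-Language header. The list of available languages, the function name and the header values are all hypothetical.

```python
# Minimal sketch of HTTP language negotiation: the browser states its
# language preferences, the server serves the closest available translation.

AVAILABLE_LANGUAGES = ["en", "fr", "ja"]   # translations this site offers

def negotiate_language(accept_language: str, default: str = "en") -> str:
    """Return the best available language for an Accept-Language header."""
    preferences = []
    for part in accept_language.split(","):
        piece = part.strip()
        if not piece:
            continue
        lang, _, q = piece.partition(";q=")
        try:
            quality = float(q) if q else 1.0
        except ValueError:
            quality = 0.0
        # Keep only the primary subtag ("fr-CH" -> "fr").
        preferences.append((quality, lang.split("-")[0].strip().lower()))
    # Highest quality value wins; fall back to the default language.
    for _, lang in sorted(preferences, reverse=True):
        if lang in AVAILABLE_LANGUAGES:
            return lang
    return default

print(negotiate_language("fr-CH, fr;q=0.9, en;q=0.8"))  # -> "fr"
print(negotiate_language("sv, no;q=0.8"))               # -> "en" (fallback)
```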


About multilingualism

Geoffrey Kingscott, director of the language consultancy Praetorius in London, wrote in September 1998 in an email interview: “Because the salient characteristics of the web are the multiplicity of site generators and the cheapness of message generation, as the web matures it will in fact promote multilingualism. The fact that the web originated in the USA means that it is still predominantly in English but this is only a temporary phenomenon. If I may explain this further, when we relied on the print and audiovisual (film, television, radio, video, cassettes) media, we had to depend on the information or entertainment we wanted to receive being brought to us by agents (publishers, television and radio stations, cassette and video producers) who have to subsist in a commercial world or — as in the case of public service broadcasting — under severe budgetary restraints. That means that the size of the customer base is all important, and determines the degree to which languages other than the ubiquitous English can be accommodated. These constraints disappear with the web.”

Alain Bron, an information systems specialist and writer, explained in January 1999: “Different languages will still be used for a long time to come and this is healthy for the right to be different. The risk is of course an invasion of one language to the detriment of others, and with it the risk of cultural standardization. I think online services will gradually emerge to get around this problem. First, translators will be able to translate and comment on texts by request. Then, sites with a large audience will provide different language versions, just as the audiovisual industry does now.”

Marcel Grangier, head of the French Section of the Swiss Federal Government’s Central Linguistic Services, wrote in January 1999: “We can see multilingualism on the internet as a happy and irreversible inevitability. So we have to laugh at the doomsayers who only complain about the supremacy of English. Such supremacy is not wrong in itself, because it is mainly based on statistics (more PCs per inhabitant, more people speaking English, etc.). The answer is not to ‘fight’ English, much less whine about it, but to build more sites in other languages. As a translation service, we also recommend that websites be multilingual. The increasing number of languages on the internet is inevitable and can only boost multicultural exchanges. For this to happen in the best possible circumstances, we still need to develop tools to improve compatibility. Fully coping with accents and other characters is only one example of what can be done.”

According to Bruno Didier, webmaster of the Pasteur Institute Library, interviewed in August 1999: “The internet doesn’t belong to any one nation or language. It is a vehicle for culture, and the first vector of culture is language. The more languages there are on the internet, the more cultures will be represented there. I don’t think we should give in to the knee-jerk temptation to translate web pages into a largely universal language. Cultural exchanges will only be real if we are prepared to meet with the other culture in a genuine way. And this effort involves understanding the other culture’s language. This is very idealistic of course. In practice, when I am monitoring, I curse Norwegian or Brazilian websites where there isn’t any English.”

Steven Krauwer, coordinator of ELSNET (European Network of Excellence in Human Language Technologies), wrote in September 1998: “As a European citizen I think that multilingualism on the web is absolutely essential, as in the long run I don’t think that it is a healthy situation when only those who have a reasonable command of English can fully exploit the benefits of the web; as a researcher (specialized in machine translation) I see multilingualism as a major challenge: how can we ensure that all information on the web is accessible to everybody, irrespective of language differences.”

He added in August 1999: “I have become more and more convinced we should be careful not to address the multilingualism issue in isolation. I have just returned from a wonderful summer vacation in France, and even if my knowledge of French is modest (to put it mildly), it is surprising to see that I still manage to communicate successfully by combining my poor French with gestures, facial expressions, visual clues and diagrams. I think the web (as opposed to old-fashioned text-only email) offers excellent opportunities to exploit the fact that transmission of information via different channels (or modalities) can still work, even if the process is only partially successful for each of the channels in isolation.”

“Multilingualism could be promoted at the author end, at the server end, and at the browser end. At the author end: better education of web authors to use combinations of modalities to make communication more effective across language barriers (and not just for cosmetic reasons). At the server end: more translation facilities such as AltaVista (quality not impressive, but always better than nothing). At the browser end: more integrated translation facilities (especially for the smaller languages), and more quick integrated dictionary lookup facilities.”


About website localization

Yahoo! was the first website to offer its home page in seven languages (English, French, German, Japanese, Korean, Norwegian, Swedish) in late 1997, to serve a growing number of non-English-speaking users.

According to Brian King, director of the Worldwide Language Institute (WWLI), interviewed in September 1998: “Although a multilingual web may be desirable on moral and ethical grounds, such high ideals are not enough to make it other than a reality on a small scale. As well as the appropriate technology being available so that the non-English speaker can go, there is the impact of electronic commerce as a major force that may make multilingualism the most natural path for cyberspace. Sellers of products and services in the virtual global marketplace into which the internet is developing must be prepared to deal with a virtual world that is just as multilingual as the physical world. If they want to be successful, they had better make sure they are speaking the languages of their customers!”

Bill Dunlap has made a life of bringing high-tech products and services to international markets. After graduating from the Massachusetts Institute of Technology (MIT), he created a company to export Apple and PC software to European markets in the early 1980s. He worked as AST Research’s first European sales manager before becoming Compaq’s first sales manager in France. He worked at Compaq’s European headquarters in Munich, Germany, to manage Scandinavian sales. In the mid-1980s, he developed an international marketing consultancy named Euro-Marketing Associates, with two offices in Paris and San Francisco. In 1995, Euro-Marketing Associates was restructured into a virtual consultancy named Global Reach, to help U.S. companies expand their internet presence abroad. This included translating a website into other languages, actively promoting it, and using local online banner advertising to increase local website traffic.

Bill Dunlap wrote in December 1998: “There are so few people in the U.S. interested in communicating in many languages — most Americans are still under the delusion that the rest of the world speaks English. However, in Europe, the countries are small enough so that an international perspective has been necessary for centuries. Since 1981, when my professional life started, I’ve been involved with bringing American companies in Europe. This is very much an issue of language, since the products and their marketing have to be in the languages of Europe in order for them to be visible here. Since the web became popular in 1995 or so, I have turned these activities to their online dimension, and have come to champion European e-commerce among my fellow American compatriots. Most lately at Internet World in New York, I spoke about European e-commerce and how to use a website to address the various markets in Europe.”

He added in July 1999: “Promoting your website is at least as important as creating it, if not more important. You should be prepared to spend at least as much time and money in promoting your website as you did in creating it in the first place. After a website’s home page is available in several languages, the next step is the development of content in each language. A webmaster will notice which languages draw more visitors (and sales) than others, and these are the places to start in a multilingual web promotion campaign. At the same time, it is always good to increase the number of languages available on a website: just a home page translated into other languages would do for a start, before it becomes obvious that more should be done to develop a certain language branch on a website.”

Peter Raggett, head of the library of the Organisation for Economic Cooperation and Development (OECD), wrote in August 1999: “I think it is incumbent on European organizations and businesses to try and offer websites in three or four languages if resources permit. In this age of globalization and electronic commerce, businesses are finding that they are doing business across many countries. Allowing French, German, Japanese speakers to easily read one’s website as well as English speakers will give a business a competitive edge in the domain of electronic trading.”


About remote collaboration

Pierre Ruetschi, a journalist for the Swiss daily Tribune de Genève, interviewed Tim Berners-Lee, inventor of the web, for its 20 December 1997 issue. One of his questions was: “Seven years later, are you satisfied with the way the web has evolved?” Tim Berners-Lee answered he was “pleased with the rich and diverse information” available online, but he would like “the web to be more interactive, and people to be able to create information together”. The web was intended as “a medium for collaboration, a world of knowledge that we share”.

In July 1998, Christiane Jadelot, a researcher at the National Institute of the French Language, wrote about her first steps on the internet, and about the collaborative spirit she found: “I began to really use the internet in 1994, with a browser called Mosaic. I found it a very useful way of improving my knowledge of computers, linguistics, literature… everything. I was finding the best and the worst, but as a discerning user, I had to sort it all out and make choices. I particularly liked the software for email, for file transfers and for dial-up connections. At that time I had problems with a software called Paradox and character sets that I couldn’t use. I tried my luck and threw out a question in a specialist news group. I got answers from all over the world. Everyone seemed to want to solve my problem!”

Robert Ware, a computer scientist, created OneLook Dictionaries in April 1996 as a “fast finder” for 2 million words from 425 dictionaries (as of August 1998) in many fields (business, computer / internet, medical, miscellaneous, religion, science, sports, technology, general, slang). He wrote in September 1998: “I was almost entirely in contact with people who spoke one language and did not have much incentive to expand language abilities. Being in contact with the entire world has a way of changing that. And changing it for the better! I have been slow to start including non-English dictionaries (partly because I am monolingual). But you will now find a few included.”

He also wrote about a personal experience relating to languages: “In 1994, I was working for a college and trying to install a software package on a particular type of computer. I located a person who was working on the same problem and we began exchanging email. Suddenly, it hit me… the software was written only 30 miles away [from Englewood, Colorado] but I was getting help from a person half way around the world. Distance and geography no longer mattered! OK, this is great! But what is it leading to? I am only able to communicate in English but, fortunately, the other person could use English as well as German which was his mother tongue. The internet has removed one barrier (distance) but with that comes the barrier of language.”

“It seems that the internet is moving people in two quite different directions at the same time. The internet (initially based on English) is connecting people all around the world. This is further promoting a common language for people to use for communication. But it is also creating contact between people of different languages and creates a greater interest in multilingualism. A common language is great but in no way replaces this need. So the internet promotes both a common language *and* multilingualism. The good news is that it helps provide solutions. The increased interest and need is creating incentives for people around the world to create improved language courses and other assistance, and the internet is providing fast and inexpensive opportunities to make them available.”

Information on the internet needs to be accessible to all — and can be accessible to all. The French association HandicapZéro launched a website in September 2000 for visually impaired and blind users (10 percent of the population), which quickly became popular, with 10,000 visits per month. HandicapZéro wanted to demonstrate that, “with the respect of some basic rules, the internet can finally become a space of freedom for all.”

The website was revamped in February 2003 with free access to national and international news in real time (in partnership with AFP-Agence France-Presse), sports news (with the daily L’Équipe), TV shows (with the weekly Télérama), weather (with the national service Météo France), as well as a range of services for health, employment, leisure and telephony. Two million visitors used the website in 2006.

Blind users can access the website using a Braille device or speech software. Visually impaired users can set up their own parameters (size and type of fonts, colour of background) in their own “visual profile”, which can be used to read any text document after copying it from the web and pasting it into the interface. Sighted users can correspond in Braille with blind users through the website, with HandicapZéro providing a free transcription, as well as a free Braille printout sent by postal mail (free of charge within Europe).


About e-texts

The first e-texts available on the internet were e-zines. As explained by John Labovitz, founder of the E-Zine-List, on its website: “For those of you not acquainted with the zine world, ‘zine’ is short for either ‘fanzine’ or ‘magazine’, depending on your point of view. Zines are generally produced by one person or a small group of people, done often for fun or personal reasons, and tend to be irreverent, bizarre, and/or esoteric. Zines are not ‘mainstream’ publications — they generally do not contain advertisements (except, sometimes, advertisements for other zines), are not targeted towards a mass audience, and are generally not produced to make a profit. An ‘e-zine’ is a zine that is distributed partially or solely on electronic networks like the internet.”

One of the first homes for e-zines was the Etext Archives, founded in 1992 by Paul Southworth, and hosted on the website of the University of Michigan. The Etext Archives were “home to electronic texts of all kinds, from the sacred to the profane, and from the political to the personal”, with six sections: (1) “E-zines”: electronic periodicals from the professional to the personal; (2) “Politics”: political zines, essays, and home pages of political groups; (3) “Fiction”: publications of amateur authors; (4) “Religion”: mainstream and off-beat religious texts; (5) “Poetry”: an eclectic mix of mostly amateur poetry; and (6) “Quartz”: the archive formerly hosted at quartz.rutgers.edu.

As recalled on the website in 1998: “The web was just a glimmer, gopher was the new hot technology, and FTP was still the standard information retrieval protocol for the vast majority of users. The origin of the project has caused numerous people to associate it with the University of Michigan, although in fact there has never been an official relationship and the project is supported entirely by volunteer labour and contributions. The equipment is wholly owned by the project maintainers. The project was started in response to the lack of organized archiving of political documents, periodicals and discussions disseminated via Usenet on newsgroups. Not long thereafter, electronic zines (e-zines) began their rapid proliferation on the internet, and it was clear that these materials suffered from the same lack of coordinated collection and preservation, not to mention the fact that the lines between e-zines (which at the time were mostly related to hacking, phreaking, and internet anarchism) and political materials on the internet were fuzzy enough that most e-zines fit the original mission of The Etext Archives. One thing led to another, and e-zines of all kinds — many on various cultural topics unrelated to politics — invaded the archives in significant volume.”

A directory of e-zines around the world was created by John Labovitz in 1993. Updated monthly, the E-Zine-List was a list of e-zines accessible via FTP, gopher, email, the web, and other services. John Labovitz explained on its website: “I started this list in the summer of 1993. I was trying to find some place to publicize Crash, a print zine I’d recently made electronic versions of. All I could find was the alt.zines newsgroup and the archives at The Well and Etext. I felt there was a need for something less ephemeral and more organized, a directory that kept track of where e-zines could be found. So I summarized the relevant info from a couple dozen e-zines and created the first version of this list. Initially, I maintained the list by hand in a text editor; eventually, I wrote my own database program (in the Perl language) that automatically generates all the text, links, and files.”

“In the four years I’ve been publishing the list, the net has changed dramatically, in style as well as scale. When I started the list, e-zines were usually a few kilobytes of plain text stored in the depths of an FTP server; high style was having a gopher menu, and the web was just a rumour of a myth. The number of living e-zines numbered in the low dozens, and nearly all of them were produced using the classic self-publishing method: scam resources from work when no one’s looking. Now the e-zine world is different. The number of e-zines has increased a hundredfold, crawling out of the FTP and gopher woodworks to declaring themselves worthy of their own domain name, even to asking for financial support through advertising. Even the term ‘e-zine’ has been co-opted by the commercial world, and has come to mean nearly any type of publication distributed electronically. Yet there is still the original, independent fringe, who continue to publish from their heart, or push the boundaries of what we call a ‘zine’.” The E-Zine-List included 3,045 e-zines on 29 November 1998.

John Mark Ockerbloom created The Online Books Page in 1993 as “a website that facilitates access to books that are freely readable over the internet. It also aims to encourage the development of such online books, for the benefit and edification of all.” John Mark Ockerbloom was a graduate student at the School of Computer Science (CS) of Carnegie Mellon University (CMU), Pittsburgh, Pennsylvania, and the web was still in its infancy, with Mosaic as its first browser.

John Mark Ockerbloom wrote in September 1998 in an email interview: “I was the original webmaster here at CMU CS, and started our local web in 1993. The local web included pages pointing to various locally developed resources, and originally The Online Books Page was just one of these pages, containing pointers to some books put online by some of the people in our department. (Robert Stockton had made web versions of some of Project Gutenberg’s texts.) After a while, people started asking about books at other sites, and I noticed that a number of sites (not just Gutenberg, but also Wiretap and some other places) had books online, and that it would be useful to have some listing of all of them, so that you could go to one place to download or view books from all over the net. So that’s how my index got started.”

“I eventually gave up the webmaster job in 1996, but kept The Online Books Page, since by then I’d gotten very interested in the great potential the net had for making literature available to a wide audience. At this point there are so many books going online that I have a hard time keeping up. But I hope to keep up my online books works in some form or another. I am very excited about the potential of the internet as a mass communication medium in the coming years. I’d also like to stay involved, one way or another, in making books available to a wide audience for free via the net, whether I make this explicitly part of my professional career, or whether I just do it as a spare-time volunteer.”

The Online Books Page included 7,000 books in 1998, and began listing serials too. As explained on its website: “Along with books, The Online Books Page is also now listing major archives of serials (such as magazines, published journals, and newspapers), as of June 1998. Serials can be at least as important as books in library research. Serials are often the first places that new research and scholarship appear. They are sources for firsthand accounts of contemporary events and commentary. They are also often the first (and sometimes the only) place that quality literature appears. (For those who might still quibble about it, back issues of serials are often bound and reissued as hardbound ‘books’.)”

After graduating from Carnegie Mellon with a Ph.D. in computer science, John Mark Ockerbloom was hired as a digital library planner and researcher at the University of Pennsylvania Library in 1999. He moved The Online Books Page there and continued to expand it, with links to 12,000 books in 1999, 20,000 books in 2003 (including 4,000 books published by women), 25,000 books in 2006, 30,000 books in 2008 (including 7,000 books from Project Gutenberg), 35,000 books in 2010, and 2 million books in 2015. The books “have been authored, placed online, and hosted by a wide variety of individuals and groups throughout the world.” The website also provides copyright information for many countries, with links to further reading.


About e-books

Most people who started collections of public domain books were inspired by Project Gutenberg, a collection of free e-books created by Michael Hart, a student at the University of Illinois.

As recalled by Michael Hart on the website of Project Gutenberg: “When we started [in 1971], the files had to be very small. So doing the U.S. Declaration of Independence (only 5K) seemed the best place to start. This was followed by the Bill of Rights, then the whole U.S. Constitution, as space was getting large (at least by the standards of 1973). Then came the Bible, as individual books of the Bible were not that large, then Shakespeare (a play at a time), and then into general work in the areas of light and heavy literature and references. By the time Project Gutenberg got famous, the standard was 360K disks, so we did books such as ‘Alice in Wonderland’ or ‘Peter Pan’ because they could fit on one disk. Now 1.44 is the standard disk and ZIP is the standard compression; the practical file size is about three million characters, more than long enough for the average book.”

Michael Hart wrote in August 1998 in an email interview: “We consider e-text to be a new medium, with no real relationship to paper, other than presenting the same material, but I don’t see how paper can possibly compete once people each find their own comfortable way to e-texts, especially in schools.” An e-text was a continuous text file instead of a set of pages, using Plain Vanilla ASCII (the low set of ASCII), with capital letters standing in for the italic, bold or underlined terms of the print version, so that it could be read on any hardware and with any software. As a text file, a book could be easily copied, indexed, searched, analyzed and compared with other books.
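
To illustrate what “easily copied, indexed, searched, analyzed and compared” means in practice, here is a short Python sketch, an example added here rather than a Project Gutenberg tool, that searches and counts words in such a Plain Vanilla ASCII file; the file name and the search word are hypothetical.

```python
from collections import Counter
import re

# Read the whole e-text as one continuous string (the file name is made up).
with open("alice_in_wonderland.txt", encoding="ascii", errors="replace") as f:
    text = f.read()

# Search: list every paragraph mentioning a given word.
hits = [p for p in text.split("\n\n") if "rabbit" in p.lower()]
print(f"{len(hits)} paragraphs mention 'rabbit'")

# Analyze: the most frequent words in the book.
words = re.findall(r"[a-z']+", text.lower())
print(Counter(words).most_common(10))
```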

Project Gutenberg offered books in 25 languages in early 2004, in 42 languages (including Sanskrit and the Mayan languages) in July 2005, and in 59 languages in October 2010. The 10 main languages were English (with 28,441 e-books on 7 October 2010), French (1,659 e-books), German (709 e-books), Finnish (536 e-books), Dutch (496 e-books), Portuguese (473 e-books), Chinese (405 e-books), Spanish (295 e-books), Italian (250 e-books), and Greek (101 e-books).

Distributed Proofreaders was created by Charles Franks in 2000 to support the digitization of public domain books. The books were scanned from a print edition, and converted into text before being proofread by volunteers. Distributed Proofreaders became the main source of Project Gutenberg by providing a total of 10,000 e-books in December 2006, a total of 20,000 e-books in April 2011, and a total of 30,000 e-books in July 2015, as “unique titles [sent] to the bookshelves of Project Gutenberg, free to enjoy for everybody. Distributed Proofreaders is a truly international community. People from [all] over the world contribute.”

Michael Hart often stated in his writings that, while Johannes Gutenberg, the inventor of the printing press, had allowed some people to own printed books, Project Gutenberg would allow everyone to own a free library stored in a pocket device or on a USB flash drive. Project Gutenberg reached the milestone of 50,000 e-books in September 2015.

Projekt Runeberg was the first free Swedish collection, and a partner of Project Gutenberg. It was created in 1992 by the computer club Lysator, in cooperation with Linköping University, as a student volunteer project to create and collect free electronic editions of classic Nordic literature. Around 200 e-books were available in 1998. The website also provided a list of 6,000 Nordic authors as a tool for further collection development.

ABU-La Bibliothèque Universelle (ABU-The Universal Library) was the first free French collection, inspired by Project Gutenberg. It was created in 1993 by ABU (Association of Universal Bibliophiles) to create free electronic editions of classic French literature. 223 e-books from 76 authors were available in 1998.

Projekt Gutenberg-DE was the first free German collection, created in 1994 as a partner of Project Gutenberg. Texts were available for online reading, with one web page for short texts, and several web pages (one per chapter) for longer works. The website also provided an alphabetic list of authors and titles, and a short biography and bibliography for each author.

Athena was the first free Swiss collection, created in 1994 by Pierre Perroud, a teacher at Collège Voltaire in Geneva, and hosted by the University of Geneva. The bilingual French-English website offered a plurilingual collection in philosophy, science, literature, history and economics. Athena either digitized books (200 titles in December 1997) or provided links to e-books digitized elsewhere (3,500 titles in December 1997, and 8,000 titles one year later). The web page Helvetia offered e-books about Switzerland. The web page Athena Literature Resources provided links to other digital collections worldwide.

Pierre Perroud wrote in the February 1996 issue of the Swiss computer magazine Informatique-Informations: “Electronic texts are an encouragement to reading and a convivial participation in the dissemination of culture. They are a good complement to the printed book, which remains irreplaceable for ‘true’ reading. The printed book remains a mysteriously holy companion with a deep symbolism for us: we grip it in our hands, we hold it against us, we look at it with admiration; its small size comforts us and its content impresses us; its fragility holds a density we are fascinated by; like man it fears water and fire, but it has the power to shelter man’s thoughts from time.”

Progetto Manuzio was the first free Italian collection, created in 1995 by Liber Liber, an association “promoting artistic and intellectual creative expression, and using computer technology to link the humanities with the sciences”. Progetto Manuzio was named after the famous 16th-century Venetian publisher who improved the printing techniques invented by Gutenberg. As explained on its website, Progetto Manuzio wanted “to make a noble idea real: the idea of making culture available to everybody. How? By making books, theses, articles, fiction — or any other document which could be digitized — available worldwide, at any time, and free of charge. Via a modem or floppy disks (in this case, by adding the cost of a blank disk and postal fees), it is already possible to get hundreds of books. And Progetto Manuzio needs only a few people to make a masterpiece like ‘La Divina Commedia’ by Dante Alighieri available to millions of people.”

Some collections were organized around an author or several authors, for example The Marx/Engels Internet Archive (MEIA). MEIA was created in 1996 to offer a timeline of the collected works of Karl Marx and Frederick Engels, with links to the digital editions of these works. As explained on its website: “There’s no way to monetarily profit from this project. ‘Tis a labour of love undertaken in the purest communitarian sense. The real ‘profit’ will hopefully manifest in the form of individual enlightenment through easy access to these classic works. Besides, transcribing them is an education in itself. Let us also add that this is not a sectarian / One-Great-Truth effort. Help from any individual or any group is welcome. We have but one slogan: ‘Piping Marx & Engels into cyberspace!'”

The web page Biographical Archive offered extensive biographies for Marx and Engels, and short biographies for family members and friends. The web page Photo Gallery offered photos of the Marx and Engels clan from 1839 to 1894, and photos of their dwellings from 1818 to 1895. The web page Others listed works by all main Marxist writers, for example James Connolly, Daniel DeLeon and Hal Draper, and a short biography for each. The web page Non-English Archive listed works by Marx and Engels that were freely available online in other languages (Danish, French, German, Greek, Italian, Japanese, Polish, Portuguese, Spanish, Swedish). MEIA was later renamed the Marxists Internet Archive.


About the press

The first electronic versions of printed newspapers were available in the early 1990s through commercial services such as America OnLine (AOL) and CompuServe. Newspapers began offering websites in the mid-1990s, with a partial or full version of their latest issue, available for free or with a paid subscription, as well as online archives.

In the United States, the website of The New York Times could be accessed free of charge in 1996, with the content of the printed newspaper, updates with breaking news every ten minutes, and “original reporting available only online”. The website of The Washington Post provided free daily news and a database of previous articles, with images, sound and video. The website of The Wall Street Journal was available with a paid subscription (100,000 subscribers in 1998).

In the United Kingdom, the daily The Times and the weekly The Sunday Times set up the website Times Online, with a way to create a personalized edition. The weekly The Economist went online too. In other European countries, the daily newspaper El País (Spain), the daily newspapers Le Monde, Libération and L’Humanité (France) and the weekly magazines Der Spiegel and Focus (Germany) created their own websites.

Readers could access newspapers that were difficult to find in their own country, for example the Algerian daily El Watan, which created its website in October 1997. In an article published on 23 March 1998 in the French daily Le Monde, Redha Belkhat, chief editor of El Watan, explained: “For the Algerian diaspora, to find in London, New York or Ottawa a paper issue of El Watan that is less than a week old can be a challenge. Now the newspaper is available in print in Algiers at 6 AM, and is available on the internet at noon.”

“More than 3,600 newspapers now publish on the internet, but there are signs that the tide of growth may ebb”, journalist Eric K. Meyer wrote in an essay published in December 1997 on the website of AJR/NewsLink. “A full 43% of all online newspapers now are based outside the United States. A year ago, only 29% of online newspapers were located abroad. Rapid growth, primarily in Canada, the United Kingdom, Norway, Brazil and Germany, has pushed the total number of non-U.S. online newspapers to 1,563. The number of U.S. newspapers online also has grown markedly, from 745 a year ago to 1,290 six months ago to 2,059 today. Outside the United States, the United Kingdom, with 294 online newspapers, and Canada, with 230, lead the way. In Canada, every province or territory now has at least one online newspaper. Ontario leads the way with 91, Alberta has 44, and British Columbia has 43. Elsewhere in America, Mexico has 51 online newspapers, 23 newspapers are online in Central America and 36 are online in the Caribbean. Europe is the next most wired continent for newspapers, with 728 online newspaper sites. After the United Kingdom, Norway has the next most — 53 — and Germany has 43. Asia (led by India) has 223 online newspapers. South America (led by Bolivia) has 161 and Africa (led by South Africa) has 53. Australia and other islands have 64 online newspapers.”

Kushal Dave, then a freshman at Yale University, wrote in his article “The Future of Publishing” in 1998: “It is now possible to read Associated Press Reports as they are released, not in the next morning’s paper, and you don’t even have to pay the 25 cents. Cost, speed, and availability are just some of the compelling arguments for electronic publishing instead of paper. There are also many companies attempting to capitalize on the multimedia possibilities of electronic publishing. Sound and pictures are being incorporated in low-cost Internet World Wide Web ‘publications’, and companies like Medio and Nautilus are producing CD-ROMs that represent the new generation of periodicals — now music reviews include sound clips, movie reviews include trailers, book reviews include excerpts, and how-to articles include demonstrative videos. All this is put together with low costs, high speed, and many advantages.”

“The internet is certainly a more accessible and convenient medium, and thus it would be better in the long run if the strengths of the print media could be brought online without the extensive costs and copyright concerns that are concomitant. As the transition is made, the neat thing is a growing accountability for previously relatively irreproachable edifices. For example, we already see email addresses after articles in publications, allowing readers to pester authors directly. Discussion forums on virtually all major electronic publications show that the future is providing not just one person’s opinion but interaction with those of others as well. Their primary job is the provision of background information. Also, the detailed statistics that can be gleaned about interest in an advertisement or in content itself will force greater adaptability and a questioning of previous beliefs gained from focus groups. This means more finely honed content for the individual, as quantity and customizability grow.”

Online newsletters were created without a print version. They were sent by email or available on a website. As recalled in July 1999 by Jacques Gauchey, a journalist and writer in information technology: “In 1996 I published a few issues of a free English newsletter on the internet. It had about ten readers per issue until the day when the electronic version of Wired created a link to it. In one week I got about 100 emails, some from French readers of my book ‘La Vallée du Risque: Silicon Valley’ [The Valley of Risk: Silicon Valley, published by Plon in 1990], who were happy to find me again.”


About a shift in jobs

In the press industry, journalists were now typing in their articles online. Articles went directly from text to layout, without being keyed in anymore by the production staff. In the book industry, digitization speeded up the editorial process, which used to be sequential, by allowing the copy editor, the image editor and the layout staff to work at the same time on the same book.

Organized by the International Labour Office (ILO), the first Symposium on Multimedia Convergence was held in January 1997 in Geneva, Switzerland, “to stimulate reflection on the policies and approaches most apt to prepare our societies and especially our workforces for the turbulent transition towards an information economy.” Discussions between employers, unionists and government representatives from all over the world focused on three main subjects: (a) the information society: what it meant for governments, employers and workers; (b) the convergence process: its impact on employment and work; and (c) labour relations in the information age.

Wilfred Kiboro, managing director of The Nation Printers and Publishers in Kenya, explained that, in his country, each newspaper copy was read by at least twenty people. Distribution costs could drop with the use of a printing system by satellite, avoiding the need to deliver newspapers by truck throughout the country. “Information technology needs to be brought to affordable levels. I have a dream that perhaps in our lifetime in Africa, we will see villagers being able to access the internet from their rural villages where today there is no water and no electricity.”

There were also cultural issues. African newspapers, radio programs and TV channels belonged to a few western media groups that gave their own vision of Africa, without fully understanding its economic and social issues. And there were uncertainties about the work itself: “In content creation in the multimedia environment, it is very difficult to know who the journalist is, who the editor is, and who the technologist is that will bring it all together. At what point will telecom workers become involved as well as the people in television and other entities that come to create new products? Traditionally in the print media, for instance, we had printers, journalists, sales and marketing staff and so on, but now all of them are working on one floor from one desk.”

According to Bernie Lunzer, secretary-treasurer of the Newspaper Guild in the United States: “Our reporters have seen new deadline pressures build as the material is used throughout the day, not just at the end of the day. There is also a huge safety problem in the newsrooms themselves due to repetitive strain injuries. Some people are losing their careers at the age of 35 or 40, a problem that was unheard of in the age of the typewriter. But as people work 8- to 10-hour shifts without ever leaving their terminals, this has become an increasing problem.”

Carlos Alberto de Almeida, president of the National Federation of Journalists (FENAJ) in Brazil, shared similar concerns: “Technology offers the opportunity to rationalize work, to reduce working time and to encourage intellectual pursuits and even entertainment. But so far none of this has happened. On the contrary, media professionals — whether executives, journalists or others — are working longer and longer hours. If one were to rigorously observe the labour legislation and the rights of professionals, then the extraordinary positive aspects of these new technologies would emerge. This has not been the case in Brazil. Journalists can be easily phoned on weekends to do extra work without extra pay.”

Some people participating in the symposium, mostly employers, explained that the information society was generating or would generate jobs, while other participants, mostly unionists, explained that unemployment was rising worldwide.

Etienne Reichel, acting director of VisCom (Visual Communication) in Switzerland, explained: “The work of 20 typesetters is now carried out by six qualified workers. There has also been a concentration of centers of production, thus placing enormous pressure on the small and medium-sized enterprises which are traditional sources of employment. Computer science makes it possible for experts to become independent producers. Approximately 30 per cent of employees have set up independently and have been able to carve out part of the market.”

Peter Leisink, associate professor of labour studies at Utrecht University, Netherlands, explained: “A survey of the United Kingdom book publishing industry showed that proofreaders and editors have been externalized and now work as home-based teleworkers. The vast majority of them had entered self-employment, not as a first-choice option, but as a result of industry mergers, relocations and redundancies. These people should actually be regarded as casualized workers, rather than as self-employed, since they have little autonomy and tend to depend on only one publishing house for their work.”

Michel Muller, secretary-general of the French Federation of the Book, Paper and Communication Industries (FILPAC), stated that jobs in these industries fell from 110,000 to 90,000 in ten years (1987-96), with expensive social plans to retrain and reemploy the 20,000 people who had lost their jobs. “If the technological developments really created new jobs, as had been suggested, then it might have been better to invest the money in reliable studies about what jobs were being created and which ones were being lost, rather than in social plans which often created artificial jobs. These studies should highlight the new skills and qualifications in demand as the technological convergence process broke down the barriers between the printing industry, journalism and other vehicles of information. Another problem caused by convergence was the trend towards ownership concentration. A few big groups controlled not only the bulk of the print media, but a wide range of other media, and thus posed a threat to pluralism in expression. Various tax advantages enjoyed by the press today should be re-examined and adapted to the new realities facing the press and multimedia enterprises. Managing all the social and societal issues raised by new technologies required widespread agreement and consensus. Collective agreements were vital, since neither individual negotiations nor the market alone could sufficiently settle these matters.”

Walter Durling, director of AT&T Global Information Solutions in the United States, made a rather theoretical statement on the matter: “Technology would not change the core of human relations. More sophisticated means of communicating, new mechanisms for negotiating, and new types of conflicts would all arise, but the relationships between workers and employers themselves would continue to be the same. When film was invented, people had been afraid that it could bring theater to an end. That has not happened. When television was developed, people had feared that it would do away with cinemas, but it had not. One should not be afraid of the future. Fear of the future should not lead us to stifle creativity with regulations. Creativity was needed to generate new employment. The spirit of enterprise had to be reinforced with the new technology in order to create jobs for those who had been displaced. Problems should not be anticipated, but tackled when they arose.”

In fact, people were less afraid of technology than they were of losing their jobs. In 1997, unemployment was already significant everywhere, which was not the case when film and television were invented. Unions were struggling worldwide to promote the creation of jobs through investment, innovation, vocational training, computer literacy, retraining in digital technology, fair conditions for labour contracts and collective agreements, fair conditions for the reuse of articles from the print media on the web, protection of workers in the art field, and the defense of teleworkers as workers with full rights.

Despite the unions’ efforts, would the situation become as tragic as what was suggested in a note of the symposium’s proceedings? “Some fear a future in which individuals will be forced to struggle for survival in an electronic jungle. And the survival mechanisms which have been developed in recent decades, such as relatively stable employment relations, collective agreements, employee representation, employer-provided job training, and jointly funded social security schemes, may be sorely tested in a world where work crosses borders at the speed of light.”


About copyright

A major issue was the handling of copyright on the internet. Bernie Lunzer, secretary-treasurer of the Newspaper Guild in the United States, explained during the same symposium: “There is a huge battle over intellectual property rights, especially with freelancers, but also with our members who work under collective bargaining agreements. The freelance agreements that writers are asked to sign are shocking. Bear in mind that freelance writers are paid very little. They turn over all their future rights — reuse rights — to the publisher for very little in exchange. Publishers are fighting for control and ownership of product, because they are seeing the future.”

Heinz-Uwe Rübenach, president of the German Newspaper Publishers Association, explained: “Copyright is one of the keys to the future information society. If a publishing house which offers the journalist work, even on an online service, is not able to manage and control the use of the resulting product, then it will not be possible to finance further investments in the necessary technology. Without that financing, the future becomes less positive and jobs can suffer. If, however, publishers see that they are able to make multiple use of their investment, then obviously this is beneficial for all. Otherwise the costs associated with online services would increase considerably. As far as the European market is concerned, this would only increase competitive pressures, since United States publishers do not have to pay for multiple uses.”

In “Intellectual Property Rights and the World Wide Web”, an article published in 1997 on the website of AJR/NewsLink, Penny Pagano, a freelance author, wrote: “Today, those who create information and those who publish, distribute and repackage it are finding themselves at odds with each other over the control of electronic rights. (…) ‘The electronic explosion has changed the entire nature of the business,’ Carlinsky [vice president of the American Society of Journalists and Authors] says. In the past, articles sold to a periodical essentially ‘turned into a pumpkin with no value’ once they were published. ‘But the electronic revolution has extended the shelf life of content of periodicals. You can now take individual articles and put them into a virtual bookstore or put them on a virtual newsstand.’ The second major change in recent years, he says, is ‘an increasing trend to more and more publications being owned by fewer larger and larger companies that tend to be international media conglomerates. They are connected corporately with an enormous array of enterprises that might be interested in secondary use of materials.’”

For the authors’ secondary rights, “the National Writers Union has created a new agency called the Publication Rights Clearinghouse (PRC). Based on the music industry’s ASCAP [American Society of Composers, Authors, and Publishers], PRC will track individual transactions and pay out royalties to writers for secondary rights for previously used articles. For $20, freelance writers who have secondary rights to previously published articles can enroll in PRC. These articles become part of a PRC file that is licensed to database companies.”

A major blow for digital libraries based in the United States came in October 1998 with two new laws: the Digital Millennium Copyright Act (DMCA), signed in the wake of the WIPO Copyright Treaty (WCT) adopted in December 1996 by the member states of the World Intellectual Property Organization (WIPO) to “deal with the protection of works and the rights of their authors in the digital environment”, and the Copyright Term Extension Act, which added 20 years to existing copyright terms. A number of books that were about to enter the public domain stayed copyrighted. As explained by John Mark Ockerbloom in October 1998 in The Online Books Page’s News section: “The copyright extension bill will prevent books published in 1923 and later that are not already in the public domain from entering the public domain in the United States for at least 20 years. I have started a page to provide access to copyright renewal records, which eventually should make it easier to find books published after 1922 that have entered the public domain due to nonrenewal.”

John Mark Ockerbloom explained in August 1999 in an email interview: “I think it is important for people on the web to understand that copyright is a social contract that is designed for the public good — where the public includes both authors and readers. This means that authors should have the right to exclusive use of their creative works for limited times, as is expressed in current copyright law. But it also means that their readers have the right to copy and reuse the work at will once copyright expires. In the U.S. now, there are various efforts to take rights away from readers, by restricting fair use, lengthening copyright terms (even with some proposals to make them perpetual) and extending intellectual property to cover facts separate from creative works (such as found in the ‘database copyright’ proposals). There are even proposals to effectively replace copyright law altogether with potentially much more onerous contract law. Stakeholders in this debate have to face reality, and recognize that both producers and consumers of works have legitimate interests in their use. If intellectual property is then negotiated by a balance of principles, rather than as the power play it too often ends up being (‘big money vs. rogue pirates’), we may be able to come up with some reasonable accommodations.”

Project Gutenberg created the page “Copyright HowTo” for the volunteers digitizing books for its collection. It could be summarized as follows: (a) works published before 1923 entered the public domain no later than 75 years from the copyright date: all these works belong to the public domain; (b) works published between 1923 and 1977 retain copyright for 95 years: no such works will enter the public domain until 2019; (c) works created from 1978 on enter the public domain 70 years after the death of the author if the author is a natural person: nothing will enter the public domain until 2049; (d) works created from 1978 on enter the public domain 95 years after publication or 120 years after creation if the author is a corporate one: nothing will enter the public domain until 2074.
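For illustration only, these simplified rules can be written as a short calculation. The sketch below, in Python, assumes the rules exactly as summarized above and the usual convention that United States copyright terms run to the end of the calendar year; the function name and parameters are invented for this example and are not part of Project Gutenberg’s page.

# A sketch of the simplified rules (a) to (d) above. Names are illustrative.
from typing import Optional

def us_public_domain_year(publication_year: int,
                          author_death_year: Optional[int] = None,
                          creation_year: Optional[int] = None,
                          corporate_author: bool = False) -> Optional[int]:
    """Return the year a work enters the United States public domain
    under the simplified rules above, or None if a needed date is unknown."""
    if publication_year < 1923:
        # (a) Published before 1923: public domain at the latest 75 years
        # after the copyright date, so already in the public domain.
        return publication_year + 75
    if publication_year <= 1977:
        # (b) Published 1923-1977: copyright runs 95 years from publication,
        # through the end of the calendar year.
        return publication_year + 95 + 1
    if corporate_author:
        # (d) Corporate works from 1978 on: 95 years after publication or
        # 120 years after creation, whichever expires first.
        if creation_year is None:
            return None
        return min(publication_year + 95, creation_year + 120) + 1
    # (c) Works by a natural person from 1978 on: life of the author plus 70 years.
    if author_death_year is None:
        return None
    return author_death_year + 70 + 1

print(us_public_domain_year(1930))  # a 1930 book enters the public domain in 2026

Under rule (b), for instance, a book published in 1930 stays copyrighted through 2025 and enters the public domain on 1 January 2026.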

Copyright legislation became more restrictive in Europe too. The European Union Copyright Directive (Directive 2001/29/EC on the harmonization of certain aspects of copyright and related rights in the information society) was adopted by the European Parliament and the Council in May 2001. A copyright term of “author’s life plus 70 years” replaced the “author’s life plus 50 years” previously in use in most European countries, following pressure from major content owners who successfully lobbied for the “harmonization” of national copyright laws as a response to the globalization of the market. All member countries were required to adapt their own copyright legislation accordingly within a given time frame. In its recital 43, the directive also asked all member states to adopt measures so that handicapped users could access books, and to promote accessible formats.


About creative copyright

The authors I was interviewing in the late 1990s shared their thoughts on the matter. According to Jacques Gauchey, a journalist and writer living in San Francisco: “Copyright in its traditional context doesn’t exist anymore. Authors have to get used to a new situation: the total freedom of the flow of information. The original content is like a fingerprint: it can’t be copied. So it will survive and flourish.” (July 1999)

According to Guy Antoine, who created Windows on Haiti, a website about Haitian culture, from New Jersey: “The debate will continue forever, as information becomes more conspicuous than the air that we breathe and more fluid than water. Authors will have to become a lot more creative in terms of how to control the dissemination of their work and profit from it. The best that we can do right now is to promote basic standards of professionalism, and insist at the very least that the source and authorship of any work be duly acknowledged. Technology will have to evolve to support the authorization process.” (November 1999)

According to Alain Bron, an information systems consultant and writer living in Paris: “I consider the web today as a public domain. That means in practice that the notion of copyright on it disappears: everyone can copy everyone else. Anything original risks being copied at once if copyrights are not formally registered or if works are available without payment. A solution is to make people pay for information, but this is no watertight guarantee against it being copied.” (November 1999)

Some people worked on creative solutions to adapt copyright to the web, with copyleft, the GPL (GNU General Public License) and the Creative Commons licenses.

The term “copyleft” was invented in 1984 by Richard Stallman, a software developer at MIT (Massachusetts Institute of Technology), who created the Free Software Foundation (FSF) and the GNU Project, a collaborative project for the development of free software. As explained on its website: “Copyleft is a general method for making a program or other work free, and requiring all modified and extended versions of the program to be free as well. Copyleft says that anyone who redistributes the software, with or without changes, must pass along the freedom to further copy and change it. Copyleft guarantees that every user has freedom. Copyleft is a way of using the copyright on the program. It doesn’t mean abandoning the copyright; in fact, doing so would make copyleft impossible. The word ‘left’ in ‘copyleft’ is not a reference to the verb ‘to leave’ — only to the direction which is the inverse of ‘right’.”

“The GNU General Public License (GPL) is a free, copyleft license for software and other kinds of works. The licenses for most software and other practical works are designed to take away your freedom to share and change the works. By contrast, the GNU General Public License is intended to guarantee your freedom to share and change all versions of a program — to make sure it remains free software for all its users. We, the Free Software Foundation, use the GNU General Public License for most of our software; it applies also to any other work released this way by its authors. You can apply it to your programs, too. (…) The GNU Free Documentation License (GFDL) is a form of copyleft intended for use on a manual, textbook or other document to assure everyone the effective freedom to copy and redistribute it, with or without modifications, either commercially or non-commercially.”

Creative Commons (CC) was founded in 2001 by Lawrence Lessig, a professor of law at the Stanford Law School in California. As explained on its website: “Creative Commons is a nonprofit corporation dedicated to making it easier for people to share and build upon the work of others, consistent with the rules of copyright. We provide free licenses and other legal tools to mark creative work with the freedom the creator wants it to carry, so others can share, remix, use commercially, or any combination thereof.”

Who has used Creative Commons? O’Reilly Media for its books, Wikipedia for its articles, and the Public Library of Science (PLOS) for its journals, for example. There were one million Creative Commons licensed works in 2003, 4.7 million licensed works in 2004, 20 million licensed works in 2005, 50 million licensed works in 2006, 90 million licensed works in 2007, 130 million licensed works in 2008, 400 million licensed works in 2010, and 882 million licensed works in 2014.


About bookstores

Based in the United Kingdom, the Internet Bookshop (IBS) experimented with innovative ideas that would later inspire the American online bookstore Amazon and others. The Internet Bookshop was Europe’s largest online bookstore with 1.4 million titles in 1997, long before Amazon launched Amazon UK and Amazon DE (Germany) in October 1998.

It first developed a system of affiliates in January 1997. Any website owner could recommend books and sell them, with the Internet Bookshop handling secure online ordering, customer service, shipping and weekly sales reports by email. The affiliates earned 10 percent of the sales, which raised the need for new legislation covering earnings made through the internet. Affiliates ranged from readers and authors to publishers, businesses, nonprofits, and more. Amazon created its own Associates Program in June 1997.

The Internet Bookshop began selling books published in the United States to its own clients in September 1997, followed by the British retailer Waterstones in 1998. The Publishers Association tried to stop them, and to ban online bookstores based in the U.S. from selling books to customers based in the U.K. The Internet Bookshop started offering significant discounts in October 1997, with discounts of up to 45 percent on some best-sellers, which drew lawsuits from physical bookstores and publishers. On its website, the section IBS News gave daily news on the frictions, debates, complaints and tense negotiations between online bookstores and physical bookstores, between online bookstores and associations of publishers, and between the online bookstores themselves, which claimed priority over the customers living in the country where they were based.

The Internet Bookshop also led the fight to remove national borders for selling books, including for taxation. Customers began buying books across borders, and the legislation followed. An outline agreement was concluded in December 1997 between the European Union and the United States, and was followed by an international convention. The internet became a free trade area for books, movies and software bought online. Online goods (books, CDs, DVDs) and services were subject to existing regulations, with the collection of VAT, but with no additional customs duties.

Amazon’s first two subsidiaries outside the United States were created in October 1998 in the United Kingdom and in Germany. Amazon had 1.8 million clients in the United Kingdom, 1.2 million clients in Germany, and less than 1 million clients in France in August 2000, when it opened Amazon France, with books, music, DVDs and videos (software and video games were added in June 2001) and 48-hour delivery. Online sales were only 0.5 percent of the book market in France (5.4 percent in the U.S.).

The opening of Amazon France was announced in August 2000, at the last minute, after months of secrecy. Unlike their counterparts in the United States and in the United Kingdom, where book prices were unregulated, French online bookstores couldn’t offer significant bargains. Prices were regulated by the Lang Law, championed by the French minister of culture Jack Lang to protect independent bookstores. The 5 percent discount allowed to any bookstore (physical or online) offered little latitude to online bookstores, which were nevertheless optimistic about the prospects of e-commerce.

Amazon’s economic model was admired by many in Europe, but could hardly be considered a model for staff management, with short-term labour contracts, low wages and poor working conditions. Despite the secrecy surrounding the working conditions of Amazon’s employees, problems began to filter through. In November 2000, after meeting with 50 employees from Amazon’s distribution center in Boigny-sur-Bionne, France, the U.K.-based Prewitt Organizing Fund and the French union SUD-PTT Loire-Atlantique launched an awareness campaign among Amazon France’s employees. In a statement following the meeting, SUD-PTT Loire-Atlantique reported “degraded working conditions, flexible schedules, short-term labour contracts in periods of flux, low wages, and minimal social guarantees”. Similar actions took place in Germany and in the United Kingdom. Patrick Moran, head of the Prewitt Organizing Fund, founded the Alliance of New Economy Workers, an employee organization. In response, Amazon sent internal memos to its employees, stating the pointlessness of unions within the company.

After a loss in the fourth quarter of 2000, Amazon reduced its workforce by 15 percent in January 2001, and 270 employees lost their jobs in Europe (1,300 employees lost their jobs in the U.S.). Amazon closed its customer service center in The Hague, Netherlands, and its 240 employees were offered relocation to one of the remaining customer service centers in Slough, United Kingdom, or Regensburg, Germany. Amazon diversified the products sold online. Cultural products represented only 58 percent of all sales in November 2001. For its 10th anniversary in July 2005, Amazon had 9,000 employees and 41 million customers worldwide for a whole range of products they could get within 48 hours in one of the seven countries (United States, United Kingdom, Germany, France, Japan, China, Canada) with an Amazon platform and distribution center.

What about small bookstores? Local brick-and-mortar bookstores closed one after the other, but some of them fought back, like Librairie Ulysse (Ulysses Bookstore) in Paris, a travel bookstore created in 1971 by Catherine Domain, which may be the oldest travel bookstore in the world. Its 20,000 out-of-print or new books, periodicals and maps — in a number of languages and on any country — were packed into a tiny space on the Île Saint-Louis, surrounded by the Seine river. The bookstore held treasures that were impossible to find anywhere else.

Catherine Domain recounted her first steps as a bookseller on her website: “After traveling for ten years on every continent, I stopped and told myself: ‘What am I going to do for a living?’ I was aware I needed to be part of society in one way or another. I made a choice by deduction, because I didn’t want to have an employer or an employee. I remembered my grandfathers; one was a navigator, and the other one was a bookseller in Perigord [a region in southern France]. I also realized that I needed to visit more than a dozen bookstores before finding any documentation on a country as close as Greece. So a travel bookstore came to my mind during a world tour, while I was sailing between Colombo and Surabaya.”

“Back in Paris, I looked for a store, inquired about being a bookseller, worked as an intern in other bookstores, wrote index cards, and thought about a name for my new business. One morning, when I was on my way to buy my daily newspaper, I looked up and saw the store ‘Ulysse’ — a reference to Joyce — at number 35 in the street Saint-Louis-en-l’Île. ‘Here is a good name!’, I told myself. I climbed two steps and entered a very small 16m2 [172 square feet] store with a beamed ceiling. Four guys were playing poker. ‘What a cute store!’, I said. ‘It is for sale’, one player said without even looking up. 48 hours later, I was a bookseller. This was in September 1971. The first travel bookstore in the world was born.”

“Twenty years later, my store didn’t survive the real estate development in the heart of Paris and its rocketing prices, and I had to move out, like many people in my neighbourhood. Luckily, my stubborn side — I am a Taurus ascendant Taurus — gave me enough energy to move my bookstore a few meters away to a larger place, at number 26 in the street Saint-Louis-en-l’Île, in a quite uncommon building, for two reasons. First, this was the building where I first lived in Île Saint-Louis a long time ago. Second, this building formerly hosted the branch of a bank that was famously burglarized by Spaggiari [before his even more famous bank burglary in Nice, in southern France].”

Catherine Domain sailed on all the oceans of the globe and visited 140 countries, and some trips were quite challenging. But her most difficult challenge was to set up a website on her own, from scratch, without knowing anything about computers. She wrote in December 1999 in an email interview: “My first year with a computer and the internet was one long technical agony! My site is still pretty basic and under construction. Like my bookstore, it is a place to meet people before being a place to do business. The internet is a pain in the neck, takes a lot of my time and I earn hardly any money from it, but that doesn’t worry me. I am very pessimistic though, because the internet is killing off specialist bookstores.”

However, she created a second travel bookstore in 2005, this time facing the ocean, in Hendaye, near the Spanish border. At high tide, the bookstore is like a boat of books, and its floor is sometimes flooded by the sea. It is open from 20 June to 20 September, while her partner, a specialist in old maps, runs her bookstore in Paris.

Catherine Domain wrote in April 2010: “The internet has taken more and more space in my life! I have become a publisher after some painful training to learn how to use Photoshop and InDesign. It is also a great joy to see that the political will to keep people in front of their computers — for them not to start a revolution — can be defeated by giant spontaneous happy hours [organized through Facebook] with thousands of people who want to see each other and speak with one another in person. In the end, there will always be unexpected developments to new inventions. When I started using the internet, I didn’t expect at all to become a publisher.”


About e-bookstores

Bookstores began selling e-books. These e-bookstores were either digital bookstores (Palm Digital Media, Yahoo! eBookStore, Mobipocket, Numilog) or part of an online bookstore that also sold printed books (Amazon, Barnes & Noble.com).

Denis Zwirn, founder and head of Numilog, wrote in August 2007 in an email interview: “E-books are not any more a topic for symposiums, conceptual definitions, or divination by some ‘experts’. They are a commercial product and a tool for reading. We need to offer books that can be easily read on any electronic device used by customers, sooner or later with an electronic ink display. And to offer them as an industry. The digital book is not — and will never be — a niche product (dictionaries, travel guides, books for blind users). It is becoming a mass market product, with multiple forms, like the traditional book.”

The first e-books were PDF files. A veteran format created in June 1993, PDF was perfected over the years as a global standard for information distribution and viewing. Acrobat Reader and Adobe Acrobat gave the tools to view and create PDF files in several languages and for several platforms (Windows, Mac, Linux, Palm OS, Pocket PC, Symbian OS, and others). In late 2003, Adobe opened its Digital Media Store, with titles in PDF from major publishers (HarperCollins, Random House, Simon & Schuster) as well as newspapers and magazines (The New York Times, Popular Science, etc.). Adobe also created Adobe eBooks Central as a service to read, publish, sell and lend e-books, and the Adobe eBook Library as a prototype digital library. After being a proprietary format, PDF was officially released as an open standard in July 2008, and published by the International Organization for Standardization (ISO) as ISO 32000-1:2008.

With so many proprietary formats showing up in the late 1990s, the digital publishing industry worked on a standard format, and released the Open eBook (OeB) format in September 1999, based on XML and defined by the Open eBook Publication Structure (OeBPS). The Open eBook Forum was created in January 2000 to develop the OeB format and the OeBPS specifications. Since then, most e-book formats have derived from — or are compatible with — the OeB format, for example the PRC format (Mobipocket) or the LIT format (Microsoft). The Open eBook Forum became the International Digital Publishing Forum (IDPF) in April 2005. The OeB format was replaced in 2007 by the EPUB (Electronic Publication) format as a global standard for e-books, designed for reflowable content on any device.
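To give a concrete idea of what such a package looks like, here is a minimal sketch, in Python, of the ZIP-based container that EPUB standardized. It assumes EPUB 2-era conventions; the file names (content.opf, chapter1.xhtml) are arbitrary choices for this example, and a production e-book would also need a table of contents (an NCX file or navigation document) and richer metadata, which are omitted here, so the result would not pass a strict validator.

# A minimal sketch of an EPUB-style container, using only the standard library.
import zipfile

CONTAINER_XML = """<?xml version="1.0" encoding="UTF-8"?>
<container version="1.0" xmlns="urn:oasis:names:tc:opendocument:xmlns:container">
  <rootfiles>
    <rootfile full-path="OEBPS/content.opf" media-type="application/oebps-package+xml"/>
  </rootfiles>
</container>"""

CONTENT_OPF = """<?xml version="1.0" encoding="UTF-8"?>
<package xmlns="http://www.idpf.org/2007/opf" version="2.0" unique-identifier="bookid">
  <metadata xmlns:dc="http://purl.org/dc/elements/1.1/">
    <dc:identifier id="bookid">urn:uuid:00000000-0000-0000-0000-000000000000</dc:identifier>
    <dc:title>A Minimal E-book</dc:title>
    <dc:language>en</dc:language>
  </metadata>
  <manifest>
    <item id="ch1" href="chapter1.xhtml" media-type="application/xhtml+xml"/>
  </manifest>
  <spine>
    <itemref idref="ch1"/>
  </spine>
</package>"""

CHAPTER_XHTML = """<?xml version="1.0" encoding="UTF-8"?>
<html xmlns="http://www.w3.org/1999/xhtml">
  <head><title>Chapter 1</title></head>
  <body><h1>Chapter 1</h1><p>Reflowable text goes here.</p></body>
</html>"""

with zipfile.ZipFile("minimal.epub", "w") as epub:
    # The mimetype entry must come first and must be stored uncompressed.
    epub.writestr("mimetype", "application/epub+zip", compress_type=zipfile.ZIP_STORED)
    # The remaining files may be compressed.
    epub.writestr("META-INF/container.xml", CONTAINER_XML, compress_type=zipfile.ZIP_DEFLATED)
    epub.writestr("OEBPS/content.opf", CONTENT_OPF, compress_type=zipfile.ZIP_DEFLATED)
    epub.writestr("OEBPS/chapter1.xhtml", CHAPTER_XHTML, compress_type=zipfile.ZIP_DEFLATED)

The one firm rule the sketch does respect is that the mimetype entry must be the first file in the archive and must be stored uncompressed, so that reading software can identify the package; everything else in the example is kept to the bare minimum.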

Mobipocket was founded in March 2000 in Paris by Thierry Brethes and Nathalie Ting as a company specializing in e-books for PDAs. The Mobipocket format (PRC) and the Mobipocket Reader could be used on any PDA in 2000, on any computer from 2002 and on any smartphone later on. The Mobipocket Reader was available in five languages (French, English, German, Spanish, Italian) in 2003. 6,000 titles in several languages were sold either on Mobipocket’s eBookStore or on partners’ online bookstores. Mobipocket (format, software and e-books) was bought by Amazon in 2005 before it launched the Kindle in November 2007.

There were 17 million PDAs and 100,000 e-readers worldwide in April 2001, according to a Seybold Report. 36.8 percent of all PDAs were Palm Pilots in 2002. Competitors were Microsoft’s Pocket PC and the PDAs sold by Sony, Hewlett-Packard, Handspring, Toshiba and Casio. The main platforms were Palm OS (for 55 percent of PDAs) and Pocket PC (for 25.7 percent of PDAs). Sales began to drop in 2004, Sony stopped selling PDAs in 2005, and people began buying smartphones. 9 percent of all cell phones were smartphones in 2006 (3.7 percent in 2004). Apple’s iPhone was launched in January 2007 in the United States, in late 2007 in Europe and in 2008 in Asia.

How about a book-sized electronic device that could store many books at once? The first e-readers were developed in Silicon Valley. The Rocket eBook was launched in 1998 by NuvoMedia, with Barnes & Noble and Bertelsmann as its investors. The SoftBook Reader was launched a few months later by SoftBook Press, with Random House and Simon & Schuster as its investors. Both had a black and white LCD screen, could hold ten e-books, ran on batteries, and connected to the internet through a computer (for the Rocket eBook) or directly with a built-in modem (for the SoftBook Reader) to download e-books from the companies’ bookstores. Other models were the EveryBook Reader launched by EveryBook in 1999, the Millennium eBook launched by Librius the same year, the Gemstar eBook launched by Gemstar in 2000, and the Cybook launched by Cytale in 2001.

LCD screens were replaced by E Ink screens. Sony launched the LIBRIe in 2004 in Japan as the first e-reader with a 6-inch E Ink screen, a 10 MB memory and a 500-e-book capacity. E-books were downloaded from a computer through a USB port. Sony launched the Sony Reader in 2006 in the United States, with an E Ink screen that gave “an excellent reading experience very close to that of real paper, making it very easy going on the eyes” (Michael Cook, editor of epubBooks.com). The Sony Reader was the first e-reader to use Adobe Digital Editions to adapt content to the screen, and was available in Canada, the United Kingdom, Germany and France a few months later. Bookeen launched the Cybook Gen3 in 2007 in Europe. Amazon launched the Kindle the same year, with books bought and downloaded via the device’s 3G wireless connection. Barnes & Noble launched the Nook in 2009, and Apple launched the iPad in 2010.

PDF or EPUB? Marc Autret, a developer and graphic designer, wrote in April 2011 in an email interview: “I do regret that the emergence of EPUB has led to the obliteration of PDF as a format for e-books. The fact that interactivity within a PDF cannot be displayed by the current mobile platforms has removed any possibility of experimenting new things, that had seemed very promising to me. While print publishing offers many different objects, ranging from the carefully designed art book to the basic book for everyday reading, the e-book market has grown from the start on a totalitarian and segregationist mode, comparable to a war between operating systems, instead of favouring a technical and cultural emulation. There are now few PDF e-books exploring the opportunities given by this format.”

“In the unconscious collective mind, PDF has stayed a kind of static duplicate of the printed book, and nobody wants to see anything else. The EPUB format, which is nothing but a combination of XHTML/CSS (admittedly with JavaScript prospects), consists in putting e-books ‘in phase with’ the web. This technology has favoured structured content, but hasn’t favoured typographic craft at all. It has given a narrow vision of digital work, reducing it to a flow of information. We don’t measure it yet, but the worst cultural disaster in recent decades has been the advent of XML as a language that pre-calibrates and contaminates the way we think our hierarchies. XML and its avatars continue to lock us in the cultural invariants of the western world.”


About authors

Murray Suid is a writer of educational books living in Silicon Valley, California. He also writes children’s books, multimedia products and screenplays. He wrote in September 1998 in an email interview: “The internet serves other print media. My recently published book would not have been done prior to the invention of email because it would have cost too much in money/time to locate the experts. So the internet is a powerful research tool for writers of books, articles, etc. The internet has become my major research tool, largely — but not entirely — replacing the traditional library and even replacing person-to-person research. Now, instead of phoning people or interviewing them face to face, I do it via email. Because of speed, it has also enabled me to collaborate with people at a distance, particularly on screenplays. (I’ve worked with two producers in Germany.) Also, digital correspondence is so easy to store and organize, I find that I have easy access to information exchanged this way. Thus, emailing facilitates keeping track of ideas and materials. The internet has increased my correspondence dramatically. Like most people, I find that email works better than snail mail. My geographic range of correspondents has also increased — extending mainly to Europe. In the old days, I hardly ever did transatlantic penpalling. I also find that emailing is so easy, I am able to find more time to assist other writers with their work — a kind of a virtual writing group. This isn’t merely altruistic. I gain a lot when I give feedback. But before the internet, doing so was more of an effort.”

Murray Suid was among the first authors to offer a web extension to his books: “In a time of great change, many ‘facts’ don’t stay factual for long. In other words, many books go quickly out of date. But if a book can be web-extended (living partly in cyberspace), then an author can easily update and correct it, whereas otherwise the author would have to wait a long time for the next edition, if indeed a next edition ever came out. I do not know if I will publish books on the web — as opposed to publishing paper books. Probably that will happen when books become multimedia. (I am currently helping develop multimedia learning materials, and it is a form of teaching that I like a lot — blending text, movies, audio, graphics, and when possible interactivity).”

“Also, in terms of marketing, the web seems crucial, especially for small publishers that can’t afford to place ads in major magazines and on the radio. Although large companies continue to have an advantage, in cyberspace small publishers can put up very competitive marketing efforts. We think that paper books will be around for a while, because using them is habitual. Many readers like the feel of paper, and the ‘heft’ of a book held in the hands or carried in a purse or backpack. I haven’t yet used a digital book, and I think I might prefer one — because of ease of search, because of colour, because of sound, etc. Obviously, multimedia ‘books’ can be easily downloaded from the web, and such books probably will dominate publishing in the future. Not yet though. I would also like to have direct access to text — digitally read books in the Library of Congress, for example, just as now I can read back issues of many newspapers. Currently, while I can find out about books online, I need to get the books into my hands to use them. I would rather access them online and copy sections that I need for my work, whereas today I either have to photocopy relevant pages, or scan them in, etc. I expect that soon I will use the internet for video telephoning, and that will be a happy development.”

Murray Suid added in August 1999: “In addition to ‘web-extending’ books, we are now web-extending our multimedia (CD-ROM) products — to update and enrich them.” He added in October 2000: “Our company — EDVantage Software — has become an internet company instead of a multimedia (CD-ROM) company. We deliver educational material online to students and teachers.”

The internet is a character in itself in Alain Bron’s second novel “Sanguine sur Toile” (Sanguine on the Net), available in print from Éditions du Choucas in 1999, and as an e-book (PDF) from Éditions 00h00 in 2000.

Alain Bron, an information systems consultant and writer, wrote in November 1999 in an email interview: “In French, ‘toile’ means the web as well as the canvas of a painting, and ‘sanguine’ is the red chalk of a drawing as well as an adjective derived from blood (‘sang’ in French). But would a love of colours justify a murder? ‘Sanguine sur Toile’ is the strange story of an internet user caught up in an upheaval inside his own computer, which is being remotely operated by a very mysterious person whose only aim is revenge. I wanted to take the reader into the worlds of painting and business, which intermingle, escaping and meeting up again in the dazzle of software. The reader is invited to try to untangle for himself the threads twisted by passion alone. To penetrate the mystery, he will have to answer many questions. Even with the world at his fingertips, isn’t the internet user the loneliest person in the world? As for competition, what is the greatest degree of violence possible in a company these days? Does painting tend to reflect the world or does it create another one? I also wanted to show that images are not that peaceful. You can use them to take action, even to kill.”

“In my novel, the internet is a character in itself. Instead of being described in its technical complexity, it is depicted as a character that can be either threatening, kind or amusing. Remember the computer screen has a dual role — displaying as well as concealing. This ambivalence is the theme throughout. In such a game, the big winner is of course the one who knows how to free himself from the machine’s grip and put humanism and intelligence before everything else.”

Alain Bron explained in the same email interview: “I spent about 20 years at Bull. There I was involved in all the adventures of computer and telecommunications development. I represented the computer industry at ISO and chaired the network group of the X/Open consortium. I also took part in the very beginning of the internet with my colleagues of Honeywell in the U.S. in late 1978. I am now an information systems consultant, where I keep the main computer projects of firms and their foreign subsidiaries running smoothly. And I write. I have been writing since I was a teenager. Short stories (about 100), psycho-sociological essays, articles and novels. It is an inner need as well as a very great pleasure.”

“The important thing in the internet is the human value that is added to it. The internet can never be shrewd about a situation, take a risk or replace the intelligence of the heart. The internet simply speeds up the decision-making process and reduces uncertainty by providing information. We still have to let time take its course, let ideas mature and bring an essential touch of humanity to a relationship. For me, the aim of the internet is meeting people, not increasing the number of electronic exchanges.”

Jean-Paul, a writer and musician living in Paris, switched from being a print author to being a hypermedia author, and explored how hyperlinks could take his writing in new directions. He created the website Cotres.net in October 1998 to offer various works he called “cotres” (cutters). As recalled by Jean-Paul in June 2000: “Cutters were small sturdy naval vessels with a single mast, that seemed to cut through the water, hence their name. They were an important part of naval fleets because they were quick and easy to operate. They were the favourite boats of pirates, smugglers… and maritime postal workers. How light-headed we felt when we received our first message… coming from Canada. 10,000 (?) years after the Inuits, our cutters had just discovered America!”

Jean-Paul wrote in the same email interview: “The internet allows me to do without intermediaries, such as record companies, publishers and distributors. Most of all, it allows me to crystallize what I have in my head: the print medium (desktop publishing, in fact) only allows me to partly do that. Surfing the web is like radiating in all directions (I am interested in something and I click on all the links on a home page) or like jumping around (from one click to another, as the links appear). You can do this in the written media, of course. But the difference is striking. So the internet didn’t change my life, but it did change how I write. You don’t write the same way for a website as you do for a script or a play.”

“In fact, it wasn’t exactly the internet that changed my writing, it was the first model of the Mac. I discovered it when I was teaching myself HyperCard. I still remember how astonished I was during my month of learning about buttons and links and about surfing by association, objects and images. Being able, by just clicking on part of the screen, to open piles of cards, with each card offering new buttons and each button opening onto a new series of them. In short, learning everything about the web that today seems really routine was a revelation for me. I have heard that Steve Jobs and his team had the same kind of shock when they discovered the forerunner of the Mac in the laboratories of Rank Xerox.”

“Since then I have been writing directly on the screen. I use a paper print-out only occasionally, to help me fix up an article, or to give somebody who doesn’t like screens a rough idea, something immediate. It is only an approximation, because print forces us into a linear relationship: the words scroll out page by page most of the time. But when you have links, you have got a different relationship to time and space in your imagination. And for me, it is a great opportunity to use this reading/writing interplay, whereas leafing through a book gives only a suggestion of it — a vague one because a book is not meant for that.”

Jean-Paul insisted on the growing interaction between digital literature and technology: “The future of cyber-literature, techno-literature or whatever you want to call it, is set by the technology itself. It is now impossible for an author to handle all by himself the words and their movement and sound. A decade ago, you could know well each of Director, Photoshop or Cubase (to cite just the better-known software), using the first version of each. That is not possible any more. Now we have to know how to delegate, find more solid financial partners than Gallimard, and look in the direction of Hachette-Matra, Warner, the Pentagon and Hollywood. At best, the status of the, what… multimedia director? will be that of the video director, the film director, the manager of the product. He is the one who receives the golden palms at Cannes, but who would never have been able to earn them just on his own. As the twin sister (not a clone) of the cinematograph, cyber-literature (video + the link) will be an industry, with a few isolated craftsmen on the outer edge (and therefore with below-zero copyright).”


About best-sellers

Stephen King, known as the American master of thrillers, was the first author to experiment with digital editions in 2000. The best-selling authors Frederick Forsyth and Arturo Pérez-Reverte made similar experiments in the United Kingdom and in Spain.

Stephen King first distributed his unpublished 66-page short story “Riding the Bullet” as an electronic file in March 2000. During the first day it was downloaded 400,000 times from the e-bookstores that sold it for US$2.50. Stephen King created a dedicated website in July 2000 to self-publish his epistolary novel “The Plant” in episodes. The chapters were available at regular intervals and could be downloaded in several formats (PDF, OeB, HTML, TXT). Readers were charged $1 for each of the first three chapters (5,000 characters each), and $2 for chapter four and chapter five (10,000 characters each). Stephen King stopped the experiment in December 2000 after offering chapter six for free, because more and more readers were downloading the files without paying for them. “The Plant” was published in print later on.

Stephen King went on with digital experiments, but in partnership with his publisher. In March 2001, “Dreamcatcher” was the first novel to be published both in print by Simon & Schuster and as an e-book in Palm’s digital bookstore. In March 2002, his collection of short stories “Everything’s Eventual” was published both in print by Scribner and as an e-book in Palm’s digital bookstore, with an excerpt that could be freely downloaded.

Frederick Forsyth, known as the British master of thrillers, decided to publish his new short stories in partnership with Online Originals, an electronic publisher based in London. “The Veteran” was available online in three formats (PDF, LIT, Glassbook) in November 2000, as the first part of “Quintet”, a collection of five short stories. It was sold for 3.99 pounds by Online Originals and in e-bookstores in the United Kingdom (Alphabetstreet, BOL.com, WHSmith) and in the United States (Barnes & Noble, Contentville, Glassbook). This experiment didn’t last, because sales of the first short story were far below expectations. “Quintet” was published in print.

Arturo Pérez-Reverte was the best-selling author of Alatriste, a series of novels about the adventures of Capitán Alatriste in the 17th century. Three titles were already published in print in 1997, 1998 and 1999. The new title to be released in late 2000 was “El Oro del Rey” (“The King’s Gold”). Arturo Pérez-Reverte partnered with his publisher Alfaguara in November 2000 to publish the novel in digital form for one month, as a PDF that could be downloaded from the web portal Inicia, before the release of the printed book in physical bookstores in December 2000. The PDF could be downloaded for 2.90 euros, instead of 15.10 euros for the printed book. One month later, there were 332,000 downloads, but only 12,000 readers had paid for them. Most readers shared their password on chat forums. While this digital experiment was not a financial success, it was a great marketing tool to launch the printed book.


About libraries

The Helsinki City Library in Finland was the first public library to launch a website in February 1994. According to “Internet and the Library Sphere: Further Progress for European Libraries”, a report published online by the European Commission, 1,000 public libraries from 26 European countries had their own websites in December 1998. The websites ranged from one web page giving the library’s postal address and its opening hours to full websites with access to OPACs (online public access catalogues) and other services. The leading countries were Finland (247 libraries), Sweden (132 libraries), United Kingdom (112 libraries), Denmark (107 libraries), Germany (102 libraries), Netherlands (72 libraries), Lithuania (51 libraries), Spain (56 libraries) and Norway (45 libraries). Newcomers were the Czech Republic (29 libraries) and Portugal (3 libraries). Russia provided a web page with a list of 26 public reference libraries.

Gabriel, which stands for “Gateway and Bridge to Europe’s National Libraries”, was created in January 1997 by the Conference of European National Librarians (CENL) as a trilingual (English, French, German) website. As explained on the website: “Gabriel also recalls Gabriel Naudé, whose ‘Advis pour dresser une bibliothèque’ [‘Instructions Concerning Erecting a Library’] (Paris, 1627) is one of the earliest theoretical works about libraries in any European language and provides a blueprint for the great modern research library. The name Gabriel is common to many European languages and is derived from the Old Testament, where Gabriel appears as one of the archangels or heavenly messengers. He also appears in a similar role in the New Testament and the Quran.”

One year later, Gabriel offered links to the internet services of 38 participating national libraries (Albania, Austria, Belgium, Bulgaria, Croatia, Czech Republic, Denmark, Estonia, Finland, France, Germany, Greece, Hungary, Iceland, Ireland, Italy, Latvia, Liechtenstein, Lithuania, Luxembourg, Macedonia, Malta, Netherlands, Norway, Poland, Portugal, Romania, Russia, San Marino, Slovakia, Slovenia, Spain, Sweden, Switzerland, Turkey, United Kingdom, Vatican City). The internet services were OPACs, national bibliographies, national union catalogues, and indexes for periodicals. A specific section was dedicated to common European projects.

Much later, in summer 2005, Gabriel merged with the website of the European Library (created by the CENL in January 2004) to offer a web portal for 43 national libraries. The digital library Europeana was created in November 2008 with two million documents, and offered 10 million documents in September 2010.


About librarians

In “Books in My Life”, published by the Library of Congress in 1985, Robert Downs wrote: “My lifelong love affair with books and reading continues unaffected by automation, computers, and all other forms of the twentieth-century gadgetry.” But automation, computers and the internet eased the work of many librarians, for example Peter Raggett at OECD (Organisation for Economic Cooperation and Development) and Bruno Didier at the Pasteur Institute in Paris, who helped their patrons find the information they needed at a time when web search engines were less accurate.

The OECD Library was among the first ones in Europe to set up an extensive intranet for its staff. The core of OECD’s original members had expanded from Europe and North America to include Japan, Australia, New Zealand, Finland, Mexico, the Czech Republic, Hungary, Poland and Korea. The library offered 60,000 monographs and 2,500 journals in 1999, as well as microfilms and CD-ROMs, and subscriptions to databases such as Dialog, Lexis-Nexis and UnCover.

Peter Raggett, the library head, first worked in government libraries in the United Kingdom before joining the OECD in 1994. He wrote in August 1999 in an email interview: “At the OECD Library we have collected together several hundred websites and have put links to them on the OECD intranet. They are sorted by subject and each site has a short annotation giving some information about it. The researcher can then see if it is possible that the site contains the desired information. This is adding value to the site references and in this way the Central Library has built up a virtual reference desk on the OECD network. As well as the annotated links, this virtual reference desk contains pages of references to articles, monographs and websites relevant to several projects currently being researched at the OECD, network access to CD-ROMs, and a monthly list of new acquisitions. The Library catalogue will soon be available for searching on the intranet. The reference staff at the OECD Library uses the internet for a good deal of their work. Often an academic working paper will be on the web and will be available for full-text downloading. We are currently investigating supplementing our subscriptions to some of our periodicals with access to the electronic versions on the internet.”

“The internet has provided researchers with a vast database of information. The problem for them is to find what they are seeking. Never has the information overload been so obvious as when one tries to find information on a topic by searching the internet. When one uses a search engine like Lycos or AltaVista or a directory like Yahoo!, it soon becomes clear that it can be very difficult to find valuable sites on a given topic. These search mechanisms work well if one is searching for something very precise, such as information on a person who has an unusual name, but they produce a confusing number of references if one is searching for a topic which can be quite broad. Try and search the web for Russia *and* transport to find statistics on the use of trains, planes and buses in Russia. The first references you will find are freight-forwarding firms that have business connections with Russia.”

“The internet is impinging on many people’s lives, and information managers are the best people to help researchers around the labyrinth. The internet is just in its infancy and we are all going to be witnesses to its growth and refinement. Information managers have a large role to play in searching and arranging the information on the internet. I expect that there will be an expansion in internet use for education and research. This means that libraries will have to create virtual libraries where students can follow a course offered by an institution at the other side of the world. Personally, I see myself becoming more and more a virtual librarian. My clients may not meet me face-to-face but instead will contact me by email, telephone or fax, and I will do the research and send them the results electronically.”

Another experience is that of Bruno Didier, webmaster of the Pasteur Institute Library in Paris. The Pasteur Institutes are observatories for studying infectious and parasite-borne diseases (malaria, tuberculosis, AIDS, yellow fever, dengue, poliomyelitis, and others). Bruno Didier explained in August 1999: “The main aim of the Pasteur Institute Library’s website is to serve the Institute itself and its associated bodies. It supports applications that have become essential in such a big organization: bibliographic databases, cataloguing, ordering of documents, and of course access to online periodicals (more than 100 titles). It is a window for our different departments, at the Pasteur Institute here in Paris but also elsewhere in France and abroad. It is very useful to exchange information with the Pasteur Institutes worldwide. The website has existed in its present form since 1996, and the number of users is steadily increasing.”

“We see a change in our relationship with both the information and the users. We increasingly become mediators, and perhaps to a lesser extent curators. My present activity is typical of this new situation: I provide quick access to information, I create effective means of communication, and I also train people to use these new tools. I think that, in the future, our work will be based on cooperation and on the use of common resources. It is an old wish for librarians, but it is the first time we have the means to realize it.”


About digital libraries

With a digital library, librarians could finally fulfill two goals that used to be in contradiction: preservation (on shelves) and communication (on the internet). People could now leaf through digital facsimiles, and access the original works only when necessary. On the one hand, physical books were taken off their shelves only once, to be scanned. On the other hand, digitized books could easily be accessed anywhere at any time, without the need to go to the library and to struggle through a lengthy process to access the original works, because of reduced opening hours, forms to fill out, safety concerns for rare books, and a shortage of staff. Some researchers probably remember the unfailing patience and determination needed to access a rare book or a manuscript.

Brian Lang, chief executive of the British Library, explained in 1998 the purpose of a digital library on its new website: “We do not envisage an exclusively digital library. We are aware that some people feel that digital materials will predominate in libraries of the future. Others anticipate that the impact will be slight. In the context of the British Library, printed books, manuscripts, maps, music, sound recordings and all the other existing materials in the collection will always retain their central importance, and we are committed to continuing to provide, and to improve, access to these in our reading rooms. The importance of digital materials will, however, increase. We recognize that network infrastructure is at present most strongly developed in the higher education sector, but there are signs that similar facilities will also be available elsewhere, particularly in the industrial and commercial sector, and for public libraries. Our vision of network access encompasses all these. The development of the Digital Library [expected in February 1999] will enable the British Library to embrace the digital information age. Digital technology will be used to preserve and extend the Library’s unparalleled collection. Access to the collection will become boundless with users from all over the world, at any time, having simple, fast access to digitized materials using computer networks, particularly the internet.”

The French National Library created its digital library Gallica in October 1997 with the digitized versions of 2,500 books from the 19th century relating to French history, life and culture. When interviewed by journalist Jerome Strazzulla for the daily Le Figaro on 3 June 1998, Jean-Pierre Angremy, president of the library, stated: “We cannot, we will not be able to digitize everything. In the long term, a digital library will only be one element of the whole library.” Gallica quickly became one of the largest digital libraries available on the internet. The books ranged from the Middle Ages to the early 20th century, and were first digitized as bulky image files before being converted into easier-to-use text files.

Many image collections went online, for example 15,000 historical and contemporary images from the National Library of Australia’s Pictorial Collection, including paintings, drawings, rare prints and photographs.

Digital collections became global, with Google Print (renamed Google Books) launched in October 2004, the Open Content Alliance (OCA) launched in October 2005, Europeana launched in November 2008, and the Digital Public Library of America (DPLA) launched in April 2013.

The Open Content Alliance (OCA) started with an idea from the Internet Archive, founded in April 1996 by Brewster Kahle in San Francisco. According to its website in 2007, OCA was “a collaborative effort of a group of cultural, technology, nonprofit, and governmental organizations from around the world that helps build a permanent archive of multilingual digitized text and multimedia material. An archive of contributed material is available on the Internet Archive website and through Yahoo! and other search engines and sites. The OCA encourages access to and reuse of collections in the archive, while respecting the content owners and contributors.”

The project aimed at digitizing public domain books worldwide. Unlike Google Books, the Open Content Alliance only offered public domain books, except when the copyright holder had expressly given permission, and its books could be found through any web search engine. The first contributors were the University of California, the University of Toronto, the European Archive, the National Archives of the United Kingdom, O’Reilly Media and Prelinger Archives. The collection included 100,000 e-books in December 2006, 200,000 e-books in May 2007, one million e-books in December 2008, and two million e-books in March 2010.


About treasures of the past

Libraries began digitizing their treasures for the world to enjoy. The British Library digitized Beowulf, the earliest known narrative poem in English. The library holds the only known manuscript of Beowulf, dated circa 1000. The poem itself is much older than the manuscript — it might have been written circa 750. The manuscript was badly damaged by fire in 1731. Early 18th-century transcripts recorded hundreds of words and characters that later crumbled away along the charred edges. To halt this process, each leaf was mounted on a paper frame in 1845.

Researchers around the world regularly requested access to the manuscript. Taking Beowulf out of its display case for study not only raised conservation issues, but also made it unavailable to the many visitors to the British Library expecting to see Beowulf on display. Digitizing the manuscript offered a solution to these problems, and new opportunities for researchers and readers worldwide.

Brian Lang, chief executive of the library, explained on its website in 1998: “The Beowulf manuscript is a unique treasure and imposes on the Library a responsibility to scholars throughout the world. Digital photography offered for the first time the possibility of recording text concealed by early repairs, and a less expensive and safer way of recording readings under special light conditions. It also offers the prospect of using image enhancement technology to settle doubtful readings in the text. Network technology has facilitated direct collaboration with American scholars and makes it possible for scholars around the world to share in these discoveries. Curatorial and computing staff learned a great deal which will inform any future programmes of digitization and network service provision the Library may undertake, and our publishing department is considering the publication of an electronic scholarly edition of Beowulf. This work has not only advanced scholarship; it has also captured the imagination of a wider public, engaging people (through press reports and the availability over computer networks of selected images and text) in the appreciation of one of the primary artefacts of our shared cultural heritage.”

Other digitized treasures of the British Library were already available online, for example Magna Carta, a charter agreed by King John of England in 1215 with its Great Seal, and considered the first constitutional text ever; the Lindisfarne Gospels, an illuminated manuscript gospel book produced around the year 700; the Diamond Sutra, dated 868 and “the earliest complete survival of a dated printed book”; the Sforza Hours, a richly illuminated book of hours dated 1490-1520; the Codex Arundel, a bound collection of notes written by Leonardo da Vinci in 1480-1518; and the Tyndale New Testament, printed in 1526 as the first New Testament in English.

The original Gutenberg Bible was available online in November 2000. Gutenberg printed his first Bibles in 1454 or 1455 in Mainz, Germany, perhaps printing 180 copies, with 48 copies still extant in 2000, and three copies (two full copies and one partial copy) belonging to the British Library. The two full copies — a little different from each other — were digitized in March 2000 by Japanese experts from Keio University in Tokyo and NTT (Nippon Telegraph and Telephone Communications). The images were then processed to offer a full digitized version on the web a few months later.

Many libraries digitized their own treasures, for example the Bielefeld University Library in Germany. Michael Behrens, in charge of the digital library project, wrote in September 1998 in an email interview: “We started digitizing rare prints from our own library, and some rare prints that were sent in via library loan in November 1996. In that first phase of our attempts at digitization, starting in November 1996 and ending in June 1997, 38 rare prints were scanned as image files and made available on the web. In the same period, there were also a few digital materials prepared as accompanying material for lectures held at the university (image files as excerpts from printed works). These are, for copyright reasons, not available outside the campus. The next step, which is just being completed, is the digitization of the Berlinische Monatsschrift, a German periodical from the Enlightenment, comprising 58 volumes, with 2,574 articles and 30,626 pages. A larger project to digitize German periodicals from the 18th and early 19th centuries (around one million pages) is planned for soon. These periodicals belong to our library’s collection and to other libraries elsewhere. The digitization project would be coordinated here, and some of the technical work would be done here too.”

The experimental database of the first volume (1751) of the Encyclopédie by Diderot and d’Alembert was available online in 1998 on the website of ARTFL (American and French Research on the Treasury of the French Language), a joint project of the University of Chicago and the French National Center for Scientific Research (CNRS). The database of the first volume was the first step towards a full online version of the 17 volumes of text (with 18,000 pages and 21.7 million words) and 11 volumes of plates of the Encyclopédie (1st edition, 1751-72), with 72,000 articles written by 140 contributors (Diderot, d’Alembert, Voltaire, Rousseau, Marmontel, d’Holbach, Turgot, and others). Designed to collect and disseminate the entire knowledge of the time, the Encyclopédie was a reflection of the intellectual and social currents of the Enlightenment, and helped disseminate novel ideas that would inspire the French Revolution in 1789.


About union catalogues

The MARC (Machine-Readable Cataloguing) format was developed by the Library of Congress in the late 1960s to automate library cataloguing. As explained in “UNIMARC: An Introduction” in 1999, it was “a short and convenient term for assigning labels to each part of a catalogue record so that it can be handled by computers. While the MARC format was primarily designed to serve the needs of libraries, the concept has since been embraced by the wider information community as a convenient way of storing and exchanging bibliographic data.”

Several versions of MARC emerged over the years because of different national cataloguing practices. With twenty MARC formats (INTERMARC, USMARC, UKMARC, CAN/MARC, etc.), differences in data content meant extensive editing before and after records were exchanged. UNIMARC (Universal Machine-Readable Cataloguing) was published in 1977 by the International Federation of Library Associations (IFLA) as a common bibliographic format, with an updated version in 1980, and a handbook in 1983 and 1987. Cataloguers could now process records created in any MARC format: records in one MARC format were first converted into UNIMARC before being converted into another MARC format. Each national bibliographic agency only had to write two conversion programs, one into UNIMARC and one from UNIMARC, instead of a separate program for each pair of MARC formats. UNIMARC was also promoted as a format in its own right, and was adopted by several national bibliographic agencies as their in-house format.
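
The hub-and-spoke conversion described above can be illustrated with a short sketch. This is a minimal illustration rather than actual UNIMARC code: the field tags and converter functions below are simplified placeholders, and a real converter would handle full MARC records with indicators and subfields.

```python
# Illustrative sketch of hub-and-spoke record conversion via a common format.
# Field tags and mappings are simplified placeholders, not the real UNIMARC specification.

def usmarc_to_unimarc(record: dict) -> dict:
    # Map a (simplified) USMARC-style record to a (simplified) UNIMARC-style one.
    return {"200": record.get("245", ""), "700": record.get("100", "")}

def unimarc_to_ukmarc(record: dict) -> dict:
    # Map the common UNIMARC-style record to a (simplified) UKMARC-style one.
    return {"245": record.get("200", ""), "100": record.get("700", "")}

def convert(record: dict, to_hub, from_hub) -> dict:
    """Convert between two MARC flavours by passing through the common hub format."""
    return from_hub(to_hub(record))

# With N national formats, each agency writes two converters (to and from the hub)
# instead of maintaining a separate converter for every other format.
usmarc_record = {"245": "Beowulf : a verse translation", "100": "Heaney, Seamus"}
ukmarc_record = convert(usmarc_record, usmarc_to_unimarc, unimarc_to_ukmarc)
print(ukmarc_record)
```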

A Permanent UNIMARC Committee was created in 1991 to monitor the development of UNIMARC. The European Commission officially promoted UNIMARC in 1996 as its preferred bibliographic format for libraries in the European Union. The British Library (using UKMARC), the Library of Congress (using USMARC) and the National Library of Canada (using CAN/MARC) harmonized their national MARC formats and published their common format MARC 21 in 1999.

Union catalogues thrived. The idea behind a union catalogue was to avoid having the same document catalogued over and over by cataloguers worldwide. When cataloguers of a member library processed a new document, they first searched the union catalogue. If the record was available, they imported it into their own catalogue and added the local data. If the record was not available, they created it in their own catalogue and exported it into the union catalogue, for it to be available to all cataloguers of member libraries. The two main global union catalogues (by paid subscription) were the RLG Union Catalogue and OCLC's WorldCat.
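
As a rough sketch of this copy-cataloguing workflow, the toy classes below model a shared union catalogue and two member libraries. The class names, record keys and fields are hypothetical, not the actual RLIN or WorldCat interfaces.

```python
# Toy model of the copy-cataloguing workflow described above.
# Class names, keys and fields are hypothetical.

class UnionCatalogue:
    def __init__(self):
        self.records = {}   # shared bibliographic records, keyed by a document identifier

class MemberLibrary:
    def __init__(self, union: UnionCatalogue):
        self.union = union
        self.local_records = {}

    def catalogue(self, key: str, description: dict, local_data: dict) -> dict:
        shared = self.union.records.get(key)
        if shared is None:
            # Record not yet in the union catalogue: create it and export it.
            shared = dict(description)
            self.union.records[key] = shared
        # In both cases, keep a local copy enriched with local holdings data.
        self.local_records[key] = {**shared, **local_data}
        return self.local_records[key]

union = UnionCatalogue()
library_a = MemberLibrary(union)
library_b = MemberLibrary(union)

# Library A catalogues the document first and exports the shared record.
library_a.catalogue("isbn:0000000000", {"title": "Beowulf: A Verse Translation"},
                    {"call_number": "PR1583 .B4", "copies": 2})
# Library B finds the record already in the union catalogue and only adds its local data.
library_b.catalogue("isbn:0000000000", {}, {"call_number": "821 BEO", "copies": 1})
print(len(union.records))   # 1 shared record, each library holding its own local copy
```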

Created in 1980 by the Research Libraries Group (RLG), the RLG Union Catalogue was first known as RLIN (Research Libraries Information Network). RLIN included the records of 88 million documents held in libraries belonging to RLG member institutions in 1998. These were mainly research and specialized libraries, for example law, technical and corporate libraries. RLIN allowed several records per document (unlike OCLC's WorldCat, which only accepted one record per document).

RLG member institutions included, for example, the Library of Congress, the National Library of Medicine, the U.S. Government Printing Office, CONSER (Conversion of Serials Project), the British Library, the British National Bibliography, and the National Union Catalog of Manuscript Collections. RLIN provided access to special resources, for example the United Nations' DOCFILE and CATFILE records, and the Rigler Deutsch Index to pre-1950 commercial sound recordings. It provided a catalogue of machine-readable data files, from the full-text French literary works in the ARTFL Database to the statistical data collected by ICPSR (Inter-university Consortium for Political and Social Research) at the University of Michigan. It provided a catalogue of the archival and manuscript collections of research libraries, museums, state archives, and historical societies in North America, with 500,000 records in 1998.

RLIN also hosted the English Short Title Catalogue (ESTC), specializing in English culture, language and literature, with the description of letterpress materials printed in the United Kingdom or its dependencies in any language, from the beginnings of print to 1800, as well as materials printed in English worldwide. Produced by the ESTC editorial offices at the University of California (Riverside) and at the British Library, with the help of the American Antiquarian Society and 1,600 libraries worldwide, ESTC was updated daily as a comprehensive bibliography of the hand-press era and as a census of surviving copies. It included 420,000 records in June 1998, with materials ranging from Shakespeare and Greek New Testaments to anonymous ballads, songs, advertisements, and other ephemera.

RLIN was renamed the RLG Union Catalogue in 2003. In fall 2003, RLG launched a free experimental web version of its catalogue, named RedLightGreen, with a full version following in spring 2004. After the RLG team joined OCLC, the RedLightGreen site closed in November 2006, directing users to OCLC's WorldCat.

OCLC (Online Computer Library Center) was created in 1967 as a non-profit organization dedicated to “furthering access to the world’s information while reducing information costs”. The OCLC Online Union Catalog, later renamed WorldCat, was launched as a regional computer catalogue for the 54 college and university libraries in the State of Ohio. It became a national union catalogue in the United States before becoming a worldwide union catalogue with an international network of libraries. WorldCat had 38 million records in 400 languages in 1998, with 2 million records added annually. OCLC’s website was available in six languages (English, Chinese, French, German, Portuguese, Spanish). Libraries inside the United States received OCLC services through their OCLC-affiliated regional networks. Libraries outside the United States received OCLC services through OCLC Asia Pacific, OCLC Canada, OCLC Europe, OCLC Latin America and the Caribbean, or via international distributors.

WorldCat included 61 million bibliographic records in 400 languages in 2005, from 9,000 member libraries in 112 countries. It included 73 million bibliographic records in 2006, with links to one billion documents available in these libraries. WorldCat launched the beta version of its new website worldcat.org in August 2006. Member libraries provided free access to their catalogues, and free or paid access to their electronic resources (books, audiobooks, abstracts, full-text articles, photos, music CDs, videos). 1.5 billion documents could be located and/or accessed using WorldCat in April 2010.


About linguistic resources

Travlang offered basic online translation dictionaries for travelers as early as 1995. Michael C. Martin, then a physics student in New York, first created the section “Foreign Languages for Travelers” in 1994 on the website of his university, before launching his own website Travlang one year later. He moved to California to work as a researcher in experimental physics at the Lawrence Berkeley National Laboratory, and expanded Travlang in his free time.

He wrote in August 1998 in an email interview: “I think the web is an ideal place to bring different cultures and people together, and that includes being multilingual. Our Travlang site is so popular because of this, and people desire to feel in touch with other parts of the world. The internet is really a great tool for communicating with people you wouldn’t have the opportunity to interact with otherwise. I truly enjoy the global collaboration that has made our ‘Foreign Languages for Travelers’ pages possible. (…) I think computerized full-text translations will become more common, enabling a lot of basic communications with even more people. This will also help bring the internet more completely to the non-English-speaking world.”

The section “Foreign Languages for Travelers” offered online resources to learn 60 languages in 1998, and the section “Translating Dictionaries” gave access to free online dictionaries in 16 languages (Afrikaans, Czech, Danish, Dutch, Esperanto, Finnish, French, Frisian, German, Hungarian, Italian, Latin, Norwegian, Portuguese, Spanish, Swedish). Other sections offered links to translation services, language schools and plurilingual bookstores. People could also book their hotel, car or plane ticket, check exchange rates, and browse an index of language and travel websites. Michael C. Martin sold Travlang in February 1999.

Tyler Chambers, a software developer in Boston, Massachusetts, launched two projects in his free time: the Human-Languages Page in 1994 and the Internet Dictionary Project in 1995. He explained in September 1998 in an email interview: “1994 was the year I was really introduced to the web, which was a little while after its christening but long before it was mainstream. That was also the year I began my first multilingual web project, and there was already a significant number of language-related resources online. This was back before Netscape even existed — Mosaic was almost the only web browser, and web pages were little more than hyperlinked text documents.”

The Human-Languages Page (H-LP) was an index of linguistic internet resources, with links to 1,800 language-related resources in 100 languages in 1998. It merged with the Languages Catalog, a section of the WWW Virtual Library, to become iLoveLanguages in spring 2001. iLoveLanguages provided an index of 2,000 linguistic resources in 100 languages in September 2003.

The Internet Dictionary Project (IDP) was a collaborative project to create free online dictionaries from English to six other languages (French, German, Italian, Latin, Portuguese, Spanish). As explained on its website: “The Internet Dictionary Project began in 1995 in an effort to provide a noticeably lacking resource to the internet community and to computing in general — free translating dictionaries. Not only is it helpful to the online community to have access to dictionary searches at their fingertips via the World Wide Web, it also sponsors the growth of computer software which can benefit from such dictionaries — from translating programs to spelling-checkers to language-education guides and more. By facilitating the creation of these dictionaries online by thousands of anonymous volunteers all over the internet, and by providing the results free-of-charge to anyone, the Internet Dictionary Project hopes to leave its mark on the internet and to inspire others to create projects which will benefit more than a corporation’s gross income. (…) This site allows individuals from all over the world to visit and assist in the translation of English words into other languages. The resulting lists of English words and their translated counterparts are then made available through this site to anyone, with no restrictions on their use.”

Tyler Chambers wrote in the same email interview: “While I’m not multilingual, nor even bilingual, myself, I see an importance to language and multilingualism that I see in very few other areas. The internet has allowed me to reach millions of people and help them find what they’re looking for, something I’m glad to do. Overall, I think that the web has been great for language awareness and cultural issues — where else can you randomly browse for 20 minutes and run across three or more different languages with information you might potentially want to know? Communications media make the world smaller by bringing people closer together; I think that the web is the first (of mail, telegraph, telephone, radio, TV) to really cross national and cultural borders for the average person. I think that the future of the internet is even more multilingualism and cross-cultural exploration and understanding than we’ve already seen. But the internet will only be the medium by which this information is carried; like the paper on which a book is written, the internet itself adds very little to the content of information, but adds tremendously to its value in its ability to communicate that information.”

“To say that the internet is spurring multilingualism is a bit of a misconception, in my opinion — it is communication that is spurring multilingualism and cross-cultural exchange, the internet is only the latest mode of communication which has made its way down to the (more-or-less) common person. Language will become even more important than it already is when the entire planet can communicate with everyone else (via the web, chat, games, email, and whatever future applications haven’t even been invented yet), but I don’t know if this will lead to stronger language ties, or a consolidation of languages until only a few, or even just one remain. One thing I think is certain is that the internet will forever be a record of our diversity, including language diversity, even if that diversity fades away. And that’s one of the things I love about the Internet — it’s a global model of the saying ‘it’s not really gone as long as someone remembers it.’ And people do remember. (…) As browsers and users mature, I don’t think there will be any currently spoken language that won’t have a niche on the web, from Native American languages to Middle Eastern dialects, as well as a plethora of ‘dead’ languages that will have a chance to find a new audience with scholars and others alike online.”

Tyler Chambers ran out of time to maintain the Internet Dictionary Project, and removed the ability to update the dictionaries in January 2007. Users could still search the dictionaries on the website and download the archived files.

Logos, an Italian translation company, decided in December 1997 to make its professional tools freely available on the web for all its translators and for the general public. These professional tools were: (a) the Logos Dictionary, a multilingual dictionary with 7.5 billion words (in fall 1998); (b) the Logos Wordtheque, a multilingual library with 328 million words (extracted from translated novels, technical manuals and other texts) that could be searched by language, word, author or title; (c) the Logos Linguistic Resources, a database of 500 glossaries; and (d) the Logos Universal Conjugator, a database of verbs in 17 languages.

Founded by Rodrigo Vergara in 1979 in Modena, Italy, Logos employed 200 in-house translators and 2,500 free-lance translators who processed around 200 texts per day. When interviewed by journalist Annie Kahn for her article “Les mots pour le dire” (The words to tell it) published by the French daily Le Monde on 7 December 1997, Rodrigo Vergara explained: “We wanted all our translators to have access to the same translation tools. So we made them available on the internet, and while we were at it we decided to make the site open to the public. This made us extremely popular, and also gave us a lot of exposure. This move has in fact attracted many customers, and it has allowed us to widen our network of translators in the wake of this initiative.”

Annie Kahn wrote in the same article: “The Logos site is much more than a mere dictionary or a collection of links to other online dictionaries. The cornerstone is the document search program, which processes a corpus of literary texts available free of charge on the web. If you search for the definition or the translation of a word (‘didactics’, for example), you get not only an answer but also a quote from one of the literary works using the word (in this case, an essay by Voltaire). All it takes is a click of the mouse to access the whole text or even to order the book, including in foreign translations, thanks to a partnership agreement with the famous online bookstore Amazon.com. However, if there is no text using the word in the database, the program acts as a search engine, sending the user to other web sources. For some words, you can even hear the pronunciation. If there is no available translation, the system sends a request to the general public. Everyone can make suggestions, and the translators at Logos check these suggestions to decide if they can validate them into the database.”
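
The lookup cascade described in this article can be sketched roughly as follows. The data and function names are hypothetical and only illustrate the fallback order: a dictionary hit with a literary citation, then a generic web search, then a request to volunteers whose suggestions are validated later.

```python
# Rough sketch of a Logos-style lookup cascade, as described above.
# All data, names and messages are hypothetical.

DICTIONARY = {"didactics": "didactique"}          # English word -> French translation
CORPUS = {"didactics": "a passage from an essay by Voltaire using the word"}
PENDING_REQUESTS = []                             # words submitted to volunteer translators

def web_search(word: str) -> str:
    # Placeholder for falling back to an external search engine.
    return f"(results of a generic web search for '{word}')"

def lookup(word: str) -> str:
    translation = DICTIONARY.get(word)
    quote = CORPUS.get(word)
    if translation:
        # A dictionary hit is returned together with a citation from the corpus when one exists.
        citation = f' | quoted in: "{quote}"' if quote else ""
        return f"{word} -> {translation}{citation}"
    if quote:
        return f'No validated translation yet, but the corpus contains: "{quote}"'
    # Neither a translation nor a corpus hit: fall back to a search engine,
    # and record a request so the public can suggest a translation for later validation.
    PENDING_REQUESTS.append(word)
    return web_search(word)

print(lookup("didactics"))
print(lookup("hypertext"))
print(PENDING_REQUESTS)       # ['hypertext']
```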

Ten years later, in 2007, the Logos Library (formerly Wordtheque) included 710 billion words, Linguistic Resources (no change of name) included 1,215 glossaries, and the Universal Conjugator (formerly Conjugation of Verbs) included verbs in 36 languages.


About dictionaries

The first reference dictionaries were available online in 1996, for free or for a fee. They were based on their printed counterparts before being designed directly for the web.

Merriam-Webster created the website “Merriam-Webster Online: The Language Center” to give free access to the digitized edition of its print publications: Webster Dictionary, Webster Thesaurus, Webster’s Third (a lexical landmark), Guide to International Business Communications, Vocabulary Builder (with interactive vocabulary quizzes), and Barnhart Dictionary Companion (‘hot’ new words). The website helped track down definitions, spellings, pronunciations, synonyms, vocabulary exercises, and other key facts about words and language.

The Dictionnaire Universel Francophone en Ligne (Universal French-Language Online Dictionary) was the free online version of the printed dictionary published by Hachette. As a side remark, English and French are the only official and/or cultural languages widely spread on five continents. French was spoken by 500 million people in 47 countries.

The online version (for a subscription fee) of the 20-volume Oxford English Dictionary (OED) was created in March 2000 by Oxford University Press (OUP), with a quarterly update of 1,000 new or revised entries.

Created in September 2000 as a free service on the web, the GDT (Grand Dictionnaire Terminologique — Main Terminological Dictionary) was an event celebrated by many linguists. It was the largest French-English online terminology dictionary ever, with 3 million terms relating to industry, science and commerce. The GDT was designed directly for the web by the Quebec Office of the French Language (OQLF), with a database created and maintained by Semantix. The GDT was used by 1.3 million people during the first month, with peaks of 60,000 visits per day, which certainly contributed to better translations. The database was then maintained by Convera Canada, with 3.5 million visits per month in February 2003. A new version of the GDT went online in March 2003, with the database maintained by OQLF itself, and the addition of Latin as a third language.

Robert Beard, a professor at Bucknell University in Lewisburg, Pennsylvania, co-founded yourDictionary.com in 2000 for “all languages without any exception” after creating A Web of Online Dictionaries (WOD) in 1995. WOD was a directory of online dictionaries and other linguistic resources (thesauri, vocabularies, glossaries, grammars, textbooks), with 800 dictionaries in 150 languages in September 1998. The section Web of Linguistic Fun made linguistics appealing for non-specialists.

Robert Beard wrote in September 1998 in an email interview: “There was an initial fear that the web posed a threat to multilingualism on the web, since HTML and other programming languages are based on English and since there are simply more websites in English than any other language. However, my website indicates that multilingualism is very much alive and the web may, in fact, serve as a vehicle for preserving many endangered languages. Moreover, the new attention paid by browser developers to the different languages of the world will encourage even more websites in different languages.”

He added in January 2000: “A Web of Online Dictionaries (WOD) is now part of yourDictionary.com. The new website is an index of 1,200+ dictionaries in more than 200 languages. Besides the WOD, the new website includes a word-of-the-day feature, word games, a language chat room, the old Web of Online Grammars (now expanded to include additional language resources), the Web of Linguistic Fun, multilingual dictionaries, specialized English dictionaries, thesauri and other vocabulary aids, language identifiers and guessers, and dictionary indices. yourDictionary.com will hopefully be the premier language portal and the largest language resource site on the web. It is now actively acquiring dictionaries and grammars of all languages with a particular focus on endangered languages. It is overseen by a blue ribbon panel of linguistic experts from all over the world.”

“yourDictionary.com has lots of new ideas. We will have language chat rooms and bulletin boards. There will be language games designed to entertain and teach fundamentals of linguistics. The Linguistic Fun page will become an online journal for short, interesting, yes, even entertaining, pieces on language that are based on sound linguistics by experts from all over the world. We plan to work with the Endangered Language Fund in the U.S. and Britain to raise money for the Foundation’s work and publish the results on our site in the Endangered Language Repository. Languages that are endangered are primarily languages without writing systems at all (only 1/3 of the world’s 6,000+ languages have writing systems). I still do not see the web contributing to the loss of language identity and still suspect it may, in the long run, contribute to strengthening it. More and more Native Americans, for example, are contacting linguists, asking them to write grammars of their language and help them put up dictionaries. For these people, the web is an affordable boon for cultural expression.”

Michael Kellogg created WordReference.com in 1999, and explained on its website: “I started this site as an effort to provide free online bilingual dictionaries and tools to the world. The site has grown gradually ever since to become one of the most used online dictionaries, and the top online dictionary for its language pairs of English-Spanish, English-French, English-Italian, Spanish-French, and Spanish-Portuguese. It is consistently ranked in the top 500 most visited websites in the world. I am proud of my history of innovation with dictionaries on the internet. Many of the features such as being able to click any word in a dictionary entry were first implemented by me.”

“The internet has done an incredible job of bringing the world together in the last few years. Of course, one of the greatest barriers has been language. Much of the content is in English and many, many users are reading English-language web pages as a second language. I know from my own experience with Spanish-language websites that many readers probably understand much of what they are reading, but not every single word. Today, I have three main goals with my website. First, continue to create free online bilingual dictionaries from English to many other languages. I strive to offer translations for *all* English words, terms, idioms, sayings, etc. Second, provide the world’s best language forums; and third, continue to innovate to produce the best website and tools for the world.”

In 2010, WordReference offered an English monolingual dictionary, and dictionaries from English to other languages (Arabic, Chinese, Czech, Greek, Japanese, Korean, Polish, Portuguese, Romanian, Turkish), and vice versa. It offered a Spanish monolingual dictionary, a Spanish dictionary of synonyms, a Spanish-French dictionary and a Spanish-Portuguese dictionary. There was a monolingual dictionary for German, and another one for Russian. Conjugation tables were available for French, Italian and Spanish. WordReference Mini was a miniature version of the site to be embedded into other sites, for example sites teaching languages online. There was a mobile device version for dictionaries from English to French, English to Italian and English to Spanish, and vice versa, with other language pairs planned for later.


About encyclopedias

The first reference encyclopedias were available online in late 1999, for free or for a fee. They were based on their printed counterparts before being designed directly for the web.

Britannica.com was created in December 1999 as the digital equivalent of the 32 volumes of the Encyclopaedia Britannica’s 15th edition. Britannica.com was available for free, as a complement to the printed and CD-ROM editions for sale. It offered links to articles (from 70 magazines), to websites, to books, etc., all searchable through a single search engine. Britannica.com joined the top 100 websites in the world in September 2000. It switched from free access to a monthly or yearly subscription fee in July 2001. It opened its website to external contributors in 2009, with registration required to write and edit articles.

Created in December 1999, WebEncyclo was the first major French-language online encyclopedia available for free, with content based on the printed encyclopedia published by Éditions Atlas. WebEncyclo was searchable by keyword, topic and media (maps, links, photos, illustrations). A call for papers invited specialists to become external contributors and submit their articles in the section WebEncyclo Contributif. Later on, a free registration was required to use the online encyclopedia.

The website of the printed Encyclopaedia Universalis — the French-language sister of the Encyclopaedia Britannica — was also created in December 1999. It included 28,000 articles by 4,000 contributors, available for an annual subscription fee, with a number of articles available for free.

Two years after the online version of the 20-volume Oxford English Dictionary, Oxford University Press (OUP) created Oxford Reference Online (ORO) in March 2002, a comprehensive encyclopedia designed directly for the web. According to its publisher, its 60,000 web pages and one million entries (available for a subscription fee) were the equivalent of 100 printed encyclopedias.

Wikipedia was launched in January 2001 by Jimmy Wales and Larry Sanger (Larry Sanger resigned later on) as a free global collaborative online encyclopedia. One year later, it offered 20,000 articles in 18 languages, which could be freely reused under a GFDL license, replaced by a Creative Commons license later on. Wikipedia quickly became the largest reference website, with thousands of people contributing worldwide. It offered 1.3 million articles by 13,000 contributors in 100 languages in 2004. It was in the top ten websites worldwide in 2006, with 6 million articles in 250 languages. It offered 7 million articles in 192 languages in May 2007, including 1.8 million articles in English, 589,000 articles in German, 500,000 articles in French, 260,000 articles in Portuguese, and 236,000 articles in Spanish. It was in the top five websites worldwide in 2008. It offered 14 million articles in 272 languages in September 2010, including 3.4 million articles in English, 1.1 million articles in German, and 1 million articles in French. Wikipedia celebrated its tenth anniversary in January 2011 with 17 million articles in 270 languages, and 400 million individual visits per month for all the Wikimedia websites (Wikipedia, Wiktionary, Wikibooks, Wikiquote, Wikisource, Wikimedia Commons, Wikispecies, Wikinews, Wikiversity).

Citizendium, which stands for “The Citizen’s Compendium”, was created in March 2007 by Larry Sanger as a pilot project to build a free global collaborative online encyclopedia led by experts. Larry Sanger, who co-founded Wikipedia in January 2001, had later resigned over policy and content-quality issues, and over the use of anonymous pseudonyms by contributors. Citizendium wanted to combine “public participation with gentle expert guidance” in a project that was expert-led, but not expert-only. Contributors used their own names, and were guided by expert editors. Constables made sure the rules were respected.

As explained by Larry Sanger in his essay “Toward a New Compendium of Knowledge” on the website of Citizendium: “Editors will be able to make content decisions in their areas of specialization, but otherwise working shoulder-to-shoulder with ordinary authors.” There were 1,100 articles from 820 authors and 180 editors in March 2007, 11,800 articles in August 2009, and 15,000 articles in September 2010. Citizendium also wanted to act as a prototype for large-scale knowledge-building projects that would deliver scholarly and educational reference content.

The Encyclopedia of Life was created in May 2007 as a global scientific effort to document all known species of animals and plants. There were 1.8 million known species, including endangered species, and probably 6 to 8 million more species yet to be discovered and catalogued. This collaborative effort was led by several major institutions (Field Museum of Natural History, Harvard University, Marine Biological Laboratory, Missouri Botanical Garden, Smithsonian Institution, Biodiversity Heritage Library).

The encyclopedia’s honorary chair was Edward Wilson, professor emeritus at Harvard University, who, in an essay dated 2002, was the first to express the wish for such an encyclopedia as a single portal for millions of documents scattered online and offline. Technology improvements made it possible five years later, with content aggregators, mash-ups, wikis and large-scale content management to process texts, photos, maps, sound and videos. The first pages of the encyclopedia were available in mid-2008, with one web page for each species. The English version was expected to be translated into several languages by partner organizations.


About journals

The Public Library of Science (PLOS), founded in October 2000, first advocated for scientific journals to be freely available in online archives. PLOS created a non-profit scientific and medical publishing venture in early 2003 to provide scientists and physicians with free high-quality, high-profile online journals in which they could publish their work. The journals were PLOS Biology (2003), PLOS Medicine (2004), PLOS Genetics (2005), PLOS Computational Biology (2005), PLOS Pathogens (2005), PLOS Clinical Trials (2006), and PLOS Neglected Tropical Diseases (2007), the first scientific journal on this topic. All PLOS articles are freely available online, on the websites of PLOS and in PubMed Central, the public archive of the National Library of Medicine. The articles can be freely redistributed and reused under a Creative Commons license, including for translations, as long as the author(s) and source are cited. PLOS also created PLOS ONE, a journal for any scientific or medical article on any topic.

PLOS received financial support from several foundations while developing a viable economic model from fees paid by published authors, advertising, sponsorship, and paid activities organized for PLOS members. Three years after their creation, PLOS Biology and PLOS Medicine had the same reputation of excellence as the leading fee-based scientific journals Nature, Science and The New England Journal of Medicine. PLOS’ goal was also to encourage other publishers to adopt the open access model, or to convert their existing journals to an open access model.

The Budapest Open Access Initiative (BOAI) was signed in February 2002 as the founding text of the open access movement, available in several languages. “By ‘open access’ to [research] literature, we mean its free availability on the public internet, permitting any users to read, download, copy, distribute, print, search, or link to the full texts of these articles, crawl them for indexing, pass them as data to software, or use them for any other lawful purpose, without financial, legal, or technical barriers other than those inseparable from gaining access to the internet itself. The only constraint on reproduction and distribution, and the only role for copyright in this domain, should be to give authors control over the integrity of their work and the right to be properly acknowledged and cited.”

The Directory of Open Access Journals (DOAJ) was created in 2003 as a directory of open access scholarly and scientific journals in any field and language, with 9,800 journals on 29 December 2013, 10,068 journals on 15 November 2014 (with half of them searchable at article level), 10,224 journals on 18 February 2015, and 10,459 journals from 134 countries on 20 June 2015. With a new website launched in December 2013, DOAJ became the authoritative source for quality open access journals that were either peer-reviewed or used an editorial quality control system. Inclusion in DOAJ increased their visibility and impact. DOAJ’s goal was also to encourage best practices amongst open access publishers.

Major universities created their open archive, for example DASH (Digital Access to Scholarship at Harvard) created by Harvard and DSpace@MIT created by the Massachusetts Institute of Technology (MIT). DASH was created in September 2009 as an open access repository for members of the Harvard community, for their work to be freely accessed worldwide, with 4.1 million downloads in November 2014, 4.7 million downloads in February 2015, and 5.5 million downloads in June 2015. DSpace@MIT is an open access repository for peer-reviewed articles, technical reports, working papers, theses and more, with 60,000+ items in December 2013 and one million end-user downloads per month, and with 70,000+ items in February 2015.


About resources for teaching

When interviewed by the French daily Libération in January 1998, Vinton Cerf, co-inventor of the internet protocols and founder of the Internet Society (ISOC), explained that the internet was doing two things. First, like a book, it provided information. Second, it connected pieces of information with one another, whereas in a book information stayed isolated.

More and more computers connected to the internet were available in schools and at home in the mid-1990s, and teachers began exploring new ways of teaching. During a conference organized in September 1996 by the International Federation of Information Processing (IFIP), Dale Spender, an Australian scholar and teacher, gave a lecture on “Creativity and the Computer Education Industry”.

“Throughout print culture, information has been contained in books — and this has helped to shape our notion of information. For the information in books stays the same — it endures. And this has encouraged us to think of information as stable — as a body of knowledge which can be acquired, taught, passed on, memorized, and tested of course. The very nature of print itself has fostered a sense of truth; truth too is something which stays the same, which endures. And there is no doubt that this stability, this orderliness, has been a major contributor to the huge successes of the industrial age and the scientific revolution. (…) But the digital revolution changes all this. Suddenly it is not the oldest information — the longest lasting information that is the most reliable and useful. It is the very latest information that we now put the most faith in — and which we will pay the most for.”

“Education will be about participating in the production of the latest information. This is why education will have to be ongoing throughout life and work. Every day there will be something new that we will all have to learn. To keep up. To be in the know. To do our jobs. To be members of the digital community. And far from teaching a body of knowledge that will last for life, the new generation of information professionals will be required to search out, add to, critique, ‘play with’, and daily update information, and to make available the constant changes that are occurring.”

Robert Beard, professor at Bucknell University, in Lewisburg, Pennsylvania, wrote in September 1998 in an email interview: “The web represents a plethora of new resources produced by the target culture, new tools for delivering lessons (interactive Java and Shockwave exercises) and testing, which are available to students any time they have the time or interest — 24 hours a day, 7 days a week. It is also an almost limitless publication outlet for my colleagues and I, not to mention my institution. Ultimately all course materials, including lecture notes, exercises, moot and credit testing, grading, and interactive exercises will be far more effective in conveying concepts that we have not even dreamed of yet.” After creating A Web of Online Dictionaries (WOD) in 1995, Robert Beard co-founded yourDictionary.com in 2000 as a web portal for many dictionaries and linguistic tools.

The C&TI (Communications & Information Technology) Centre at the Language Institute of the University of Hull, United Kingdom, was created to provide information on how computer-assisted language learning could be integrated into existing courses. It also provided support to language teachers who were using — or who wished to use — computers in their teaching. According to June Thompson, manager of the C&TI Centre, interviewed in December 1998: “The internet has the potential to increase the use of foreign languages. The use of the internet has brought an enormous new dimension to our work of supporting language teachers in their use of technology in teaching. I suspect that for some time to come, the use of internet-related activities for languages will continue to develop alongside other technology-related activities (e.g. use of CD-ROMs — not all institutions have enough networked hardware). In the future I can envisage use of internet playing a much larger part, but only if such activities are pedagogy driven.”

Russon Wooldridge, professor at the Department of French Studies of the University of Toronto, Canada, wrote in February 2001: “My research, conducted once in an ivory tower, is now almost exclusively done through local or remote collaborations. All my teaching makes the most of internet resources (web and email): the two common places for a course are the classroom and the website of the course, where I put all course materials. I have published all my research data of the last twenty years on the web (re-editions of books, articles, texts of old dictionaries as interactive databases, 16th-century treatises, etc.). I publish proceedings of symposiums. I publish a journal. In May 2000, I organized an international symposium in Toronto on French studies enhanced by new technologies. I realize that without the internet I wouldn’t have as many activities, or at least they would be very different from the ones I have today. So I don’t see the future without them.”

The Massachusetts Institute of Technology (MIT) launched its OpenCourseWare (OCW) in September 2003 to put all its course materials online for free, after creating a pilot version one year earlier with 32 course materials. The MIT OpenCourseWare offered 500 course materials in March 2004, 1,400 course materials in May 2006, and all 1,800 course materials in November 2007. Course materials are regularly updated, and some of them are translated into Spanish, Portuguese and Chinese with the help of other organizations. An OpenCourseWare Consortium (later renamed the Open Education Consortium) was created in November 2005 as a global online commons for the course materials of other educational institutions. One year later, it included the course materials of 100 universities worldwide.


About resources for translators

The internet became “a vital and endless source of information” for translators, according to Marcel Grangier, head of the French Section of the Swiss government’s Central Linguistic Services. He explained in January 1999 in an email interview: “To work without the internet is simply impossible now. On top of all the tools we use (email, online press, services for translators), the internet is for us a vital and endless source of information in what I would call the ‘non-structured sector’ of the web. For example, when the answer to a translation problem cannot be found on websites presenting information in an organized way, in most cases search engines allow us to find the missing link somewhere on the network.”

The team of French-language translators also maintained the web page Dictionnaires Électroniques (Electronic Dictionaries), with links to monolingual dictionaries, bilingual dictionaries, multilingual dictionaries, abbreviations and acronyms, and geographical data. Marcel Grangier explained in January 2000: “Our website was first conceived as an intranet service for translators in Switzerland, who often deal with the same kind of material as the translators of the Federal government. Some parts of our website are useful to all translators, wherever they are. The section Dictionnaires Électroniques is only one section of the website. Other sections deal with public administration, legislation, the French language, and general information. Our website also hosts the pages of the Conference of Translation Services of European States (COTSOES).” COTSOES created its own website in 2001, and Dictionnaires Électroniques was transferred to the new website.

Maria Victoria Marinetti, a Mexican engineer and translator, wrote in August 1999: “I have access to a large amount of information from around the world, which is very interesting for me. I can also regularly send and receive files. The internet allows me to receive or send general and technical translations from French into Spanish, and vice versa, and to correct texts in Spanish. In the technical or chemical fields, I offer technical assistance, as well as information on exporting high-tech equipment to Mexico or to other Latin American countries.”

The magazine Language Today was created by Praetorius, a language consultancy based in London, for linguists worldwide (translators, interpreters, terminologists, lexicographers, technical writers), with a print version and an online version. Geoffrey Kingscott, director of Praetorius, explained in September 1998: “We publish the print version of Language Today only in English, the common denominator of our readers. When we use an article which was originally in a language other than English, or report an interview which was conducted in a language other than English, we translate it into English and publish only the English version. This is because the number of pages we can print is constrained, governed by our customer base (advertisers and subscribers). But for our web edition we also give the original version.”

Linguists also relied on mailing lists to connect across borders and languages, for example the Linguist List, created by Anthony Rodrigues Aristar in 1990 at the University of Western Australia. The list moved to Texas A&M University in 1991, with Eastern Michigan University as the main editing site (in 1991), and Wayne State University in Michigan as the second editing site (in 1998). The list started its own website in 1997, and became a component of the WWW Virtual Library for linguistics.

Helen Dry, moderator of the Linguist List since 1991, explained in August 1998 in an email interview: “The Linguist List, which I moderate, has a policy of posting in any language, since it is a list for linguists. However, we discourage posting the same message in several languages, simply because of the burden extra messages put on our editorial staff. (We are not a bounce-back list, but a moderated one. So each message is organized into an issue with like messages by our student editors before it is posted.) Our experience has been that almost everyone chooses to post in English. But we do link to a translation facility that will present our pages in any of five languages. We also try to have at least one student editor who is genuinely multilingual, so that readers can correspond with us in languages other than English.”

She added in July 1999: “We are beginning to collect some primary data. For example, we have searchable databases of dissertation abstracts relevant to linguistics, of information on graduate and undergraduate linguistics programs, and of professional information about individual linguists. The dissertation abstracts collection is, to my knowledge, the only freely available electronic compilation in existence.”


About terminology databases

NetGlos, which stands for Multilingual Glossary of Internet Terminology, was created in 1995 by the Worldwide Language Institute (WWLI) as an online voluntary collaborative project. NetGlos was available in 13 languages (Chinese, Croatian, Dutch/Flemish, English, French, German, Greek, Hebrew, Italian, Maori, Norwegian, Portuguese, Spanish) three years later.

Brian King, director of the Worldwide Language Institute, explained in September 1998 in an email interview: “Much of the technical terminology on the web is still not translated into other languages. And as we found with our Multilingual Glossary of Internet Terminology — known as NetGlos — the translation of these terms is not always a simple process. Before a new term becomes accepted as the ‘correct’ one, there is a period of instability where a number of competing candidates are used. Often an English loan word becomes the starting point — and in many cases the endpoint. But eventually a winner emerges that becomes codified into published technical dictionaries as well as the everyday interactions of the non-technical user. The latest version of NetGlos is the Russian one and it should be available in a couple of weeks or so. It will no doubt be an excellent example of the ongoing, dynamic process of ‘russification’ of web terminology.”

“The seeds of cooperation across the internet have certainly already been sown. Our NetGlos Project has depended on the goodwill of volunteer translators from Canada, the U.S., Austria, Norway, Belgium, Israel, Portugal, Russia, Greece, Brazil, New Zealand and other countries. I think the hundreds of visitors we get coming to the NetGlos pages every day is an excellent testimony to the success of these types of working relationships. I see the future depending even more on cooperative relationships — although not necessarily on a volunteer basis. As a company that derives its very existence from the importance attached to languages, I believe the future will be an exciting and challenging one. But it will be impossible to be complacent about our successes and accomplishments. Technology is already changing at a frenetic pace. Lifelong learning is a strategy that we all must use if we are to stay ahead and be competitive. This is a difficult enough task in an English-speaking environment. If we add in the complexities of interacting in a multilingual/multicultural cyberspace, then the task becomes even more demanding. As well as competition, there is also the necessity for cooperation — perhaps more so than ever before.”

Major international governmental organizations created free online versions of their terminology databases in 1997-98, for example (a) ILOTERM, a quadrilingual (English, French, German, Spanish) terminology database maintained by the International Labour Organization (ILO), (b) TERMITE (Telecommunication Terminology Database), a quadrilingual (English, French, Spanish, Russian) terminology database maintained by the International Telecommunication Union (ITU), and (c) WHOTERM (WHO Terminology Information System), a trilingual (English, French, Spanish) terminology database maintained by the World Health Organization (WHO).

Eurodicautom, the terminology database of the European Commission, was a multilingual database of economic, scientific, technical and legal terms and expressions, with language pairs for the 11 official languages of the European Union (Danish, Dutch, English, Finnish, French, German, Greek, Italian, Portuguese, Spanish, Swedish) and Latin. Initially developed to assist in-house translators, Eurodicautom launched a free online version in 1997 for European Union officials in the 15 member countries, and for language professionals around the world. Eurodicautom had an average of 120,000 visitors per day in late 2003, when it announced a temporary hiatus (that lasted three years) to prepare for a larger terminology database in more languages with the enlargement of the European Union to 25 member countries in May 2004 (and 27 member countries in January 2007).

Plans for a larger terminology database, merging the databases maintained by several European institutions, were studied as early as 1999 with a number of partners (European Commission, European Parliament, Council of the European Union, Court of Justice, European Court of Auditors, European Economic and Social Committee, Committee of the Regions, European Investment Bank, European Central Bank, Translation Centre for the Bodies of the European Union).

The new terminology database, named IATE (InterActive Terminology for Europe), was available on the intranet of some European institutions in spring 2004. IATE was launched as a free public service on the internet in March 2007, with 1.4 million entries in the 23 official languages of the European Union (Bulgarian, Czech, Danish, Dutch, English, Estonian, Finnish, French, German, Greek, Hungarian, Irish, Italian, Latvian, Lithuanian, Maltese, Polish, Portuguese, Romanian, Slovak, Slovene, Spanish, Swedish) and Latin. IATE offered 8.4 million words, 540,000 abbreviations and 130,000 phrases in 23 languages in 2010. IATE’s website is maintained by the Translation Centre for the Bodies of the European Union in Luxembourg.


About machine translation

According to the website of the European Association for Machine Translation (EAMT) in 1998: “Machine translation (MT) is the application of computers to the task of translating texts from one natural language to another. One of the very earliest pursuits in computer science, MT has proved to be an elusive goal, but today a number of systems are available which produce output which, if not perfect, is of sufficient quality to be useful for certain specific applications, usually in the domain of technical documentation. In addition, translation software packages which are designed primarily to assist the human translator in the production of translations are enjoying increasing popularity within professional translation organizations.”

As explained by Globalink, a company specialized in language translation software and services, on its website: “The computer uses three sets of data: the input text, the translation program and permanent knowledge sources (containing a dictionary of words and phrases of the source language), and information about the concepts evoked by the dictionary and rules for sentence development. These rules are in the form of linguistic rules for syntax and grammar, and some are algorithms governing verb conjugation, syntax adjustment, gender and number agreement and word re-ordering. Once the user has selected the text and set the machine translation process in motion, the program begins to match words of the input text with those stored in its dictionary. Once a match is found, the application brings up a complete record that includes information on possible meanings of the word and its contextual relationship to other words that occur in the same sentence. The time required for the translation depends on the length of the text. A three-page, 750-word document takes about three minutes to render a first draft translation.”
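
Stripped to its essentials, the process described above is a dictionary lookup combined with a handful of linguistic rules such as word re-ordering. The short Python sketch below illustrates that general pipeline for a single English-to-French sentence; the vocabulary, the single rule and the function names are invented for illustration and have nothing to do with Globalink’s actual software.

```python
# Toy illustration of dictionary-driven, rule-based machine translation
# (English -> French). The vocabulary and rules are invented for
# illustration only; they are not Globalink's actual data or code.

DICTIONARY = {
    "the": "le", "black": "noir", "cat": "chat",
    "drinks": "boit", "some": "du", "milk": "lait",
}
ADJECTIVES = {"black"}

def reorder_adjectives(words):
    """Toy syntax rule: in French, most adjectives follow the noun."""
    result = list(words)
    i = 0
    while i < len(result) - 1:
        if result[i] in ADJECTIVES and result[i + 1] not in ADJECTIVES:
            # swap adjective and noun ("black cat" -> "cat black")
            result[i], result[i + 1] = result[i + 1], result[i]
            i += 2
        else:
            i += 1
    return result

def translate(sentence):
    words = sentence.lower().split()
    words = reorder_adjectives(words)                     # word re-ordering rule
    return " ".join(DICTIONARY.get(w, w) for w in words)  # dictionary matching

print(translate("The black cat drinks some milk"))
# -> "le chat noir boit du lait"
```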

The website of Globalink also provided a history of machine translation: “From the very beginning, machine translation (MT) and natural language processing (NLP) have gone hand-in-hand with the evolution of modern computational technology. The development of the first general-purpose programmable computers during World War II was driven and accelerated by Allied cryptographic efforts to crack the German Enigma machine and other wartime codes. Following the war, the translation and analysis of natural language text provided a testbed for the newly emerging field of Information Theory. During the 1950s, research on Automatic Translation (known today as Machine Translation, or ‘MT’) took form in the sense of literal translation, more commonly known as word-for-word translations, without the use of any linguistic rules.”

“The Russian project initiated at Georgetown University in the early 1950s represented the first systematic attempt to create a demonstrable machine translation system. Throughout the decade and into the 1960s, a number of similar university and government-funded research efforts took place in the United States and Europe. At the same time, rapid developments in the field of Theoretical Linguistics, culminating in the publication of Noam Chomsky’s ‘Aspects of the Theory of Syntax’ (1965), revolutionized the framework for the discussion and understanding of the phonology, morphology, syntax and semantics of human language. In 1966, the U.S. government-issued ALPAC [Automatic Language Processing Advisory Committee] report offered a prematurely negative assessment of the value and prospects of practical machine translation systems, effectively putting an end to funding and experimentation in the field for the next decade.”

“It was not until the late 1970s, with the growth of computing and language technology, that serious efforts began once again. This period of renewed interest also saw the development of the Transfer model of machine translation and the emergence of the first commercial MT systems. While commercial ventures such as Systran and Metal began to demonstrate the viability, utility and demand for machine translation, these mainframe-bound systems also illustrated many of the problems in bringing MT products and services to market. High development cost, labor-intensive lexicography and linguistic implementation, slow progress in developing new language pairs, inaccessibility to the average user, and inability to scale easily to new platforms are all characteristics of these second-generation systems.”

In “Web embraces language translation”, an article published by ZDNN (ZD Network News) on 21 July 1998, journalist Martha L. Stone explained: “While these so-called ‘machine’ translations are gaining worldwide popularity, company execs admit they’re not for every situation. Representatives from Globalink, Alis and Systran use such phrases as ‘not perfect’ and ‘approximate’ when describing the quality of translations, with the caveat that sentences submitted for translation should be simple, grammatically accurate and idiom-free. ‘The progress on machine translation is moving at Moore’s Law — every 18 months it’s twice as good,’ said Vin Crosbie, a web industry analyst in Greenwich, Conn. ‘It’s not perfect, but some people don’t realize I’m using translation software.’ With these translations, syntax and word usage suffer, because dictionary-driven databases can’t decipher between homonyms — for example, ‘light’ (as in the sun or light bulb) and ‘light’ (the opposite of heavy). Still, human translation would cost between $50 and $60 per web page, or about 20 cents per word, Systran’s Sabatakakis said. While this may be appropriate for static ‘corporate information’ pages, the machine translations are free on the web, and often less than $100 for software, depending on the number of translated languages and special features.”

IBM released a high-end professional product, the WebSphere Translation Server, in March 2001. The software could instantly translate web pages, emails and chats from/into eight languages (Chinese, English, French, German, Italian, Japanese, Korean, Spanish), processing 500 words per second. Machine translation programs were also created by Systran, Alis Technologies, Lernout & Hauspie (which bought Globalink) and Softissimo.

Another experiment was led by the Pan American Health Organization (PAHO), based in Washington D.C. The PAHO Translation Unit was among the first to use two in-house machine translation systems — SPANAM (Spanish to English), developed since 1980, and ENGSPAN (English to Spanish), developed since 1985. In-house and freelance translators could post-edit the raw output to produce translations with a 30-50 percent gain in productivity. SPANAM and ENGSPAN were available on PAHO’s local area network for the technical and administrative staff, and for PAHO field offices. They were also licensed to public and non-profit institutions in the United States, Latin America and Spain. The software was later renamed PAHOMTS, and new language pairs were added: English-Portuguese (July 2003), Portuguese-English (July 2003), Spanish-Portuguese (March 2004), and Portuguese-Spanish (June 2005).


About computer-assisted translation

A different project was led by the Computer-assisted Translation and Terminology (CTT) Unit of the World Health Organization (WHO). While maintaining its trilingual (English, French, Spanish) terminology database WHOTERM (WHO Terminology Information System), the CTT Unit was assessing technical options for using computer-assisted translation (CAT) systems based on translation memory.

While machine translation is the automated process of translating from one language into another with no human intervention, computer-assisted translation (CAT) involves some interaction between the translator and the software during the translation process. CAT software is based on translation memory, with terminology processing in real time: the translator gets instant access to previous translations of similar portions of text, which can be accepted, rejected or modified, and each new translation is added to the memory.
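
As a rough illustration of the translation-memory mechanism described above, the Python sketch below stores previously translated segments and suggests the closest earlier translation for a new segment, which the translator can accept, modify or reject before the new pair is added back to the memory. The example segments and the 70 percent similarity threshold are invented; real CAT tools such as Wordfast or SDL Trados are far more sophisticated.

```python
# Toy sketch of a translation memory: store past segment pairs and
# suggest the closest fuzzy match for a new source segment. The example
# data and the 0.7 threshold are invented for illustration.
from difflib import SequenceMatcher

class TranslationMemory:
    def __init__(self):
        self.segments = []          # list of (source, target) pairs

    def add(self, source, target):
        self.segments.append((source, target))

    def suggest(self, source, threshold=0.7):
        """Return (score, stored source, stored target) of the best fuzzy match."""
        best = max(
            ((SequenceMatcher(None, source, src).ratio(), src, tgt)
             for src, tgt in self.segments),
            default=None,
        )
        return best if best and best[0] >= threshold else None

tm = TranslationMemory()
tm.add("Click the Save button.", "Cliquez sur le bouton Enregistrer.")

match = tm.suggest("Click the Print button.")
if match:
    score, src, tgt = match
    print(f"{score:.0%} match: '{src}' -> '{tgt}'")
    # The translator edits the suggestion, then the new pair is stored.
    tm.add("Click the Print button.", "Cliquez sur le bouton Imprimer.")
```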

Wordfast, a popular CAT tool created in 1999 by Yves Champollion, was compatible with other major MT or CAT software such as the IBM WebSphere Translation Server and SDL Trados. Available for any platform (Windows, Mac, Linux, etc.), Wordfast had 14,000 customers worldwide in 2010, including the United Nations, NASA (National Aeronautics and Space Administration), Sony, Coca-Cola and McGraw-Hill.

According to Tim McKenna, a mathematics teacher and writer, interviewed in October 2000: “When software gets good enough for people to chat or talk on the web in real time in different languages, then we will see a whole new world appear before us. Scientists, political activists, businesses and many more groups will be able to communicate immediately without having to go through mediators or translators.”

Randy Hobler, a marketing consultant for translation software and services, described the next steps in September 1998 in an email interview: “We are rapidly reaching the point where highly accurate machine translation of text and speech will be so common as to be embedded in computer platforms, and even in chips in various ways. At that point, and as the growth of the web slows, the accuracy of language translation hits 98%, and the saturation of language pairs has covered the vast majority of the market, language transparency (any-language-to-any-language communication) will be too limiting a vision for those selling this technology.”

“The next development will be ‘transcultural, transnational transparency’, in which other aspects of human communication, commerce and transactions beyond language alone will come into play. For example, gesture has meaning, facial movement has meaning and this varies among societies. The thumb-index finger circle means ‘OK’ in the United States. In Argentina, it is an obscene gesture. When the inevitable growth of multimedia, multilingual videoconferencing comes about, it will be necessary to ‘visually edit’ gestures on the fly. The MIT Media Lab, Microsoft and many others are working on computer recognition of facial expressions, biometric access identification via the face, etc. It won’t be any good for a U.S. business person to be making a great point in a web-based multilingual video conference to an Argentinian, having his words translated into perfect Argentinian Spanish if he makes the ‘O’ gesture at the same time. Computers can intercept this kind of thing and edit them on the fly.”

“There are thousands of ways in which cultures and countries differ, and most of these are computerizable to change as one goes from one culture to the other. They include laws, customs, business practices, ethics, currency conversions, clothing size differences, metric versus English system differences, etc. Enterprising companies will be capturing and programming these differences and selling products and services to help the peoples of the world communicate better. Once this kind of thing is widespread, it will truly contribute to international understanding.”


About free machine translation services

AltaVista launched AltaVista Translation — better known as Babel Fish — in December 1997 as the first free machine translation service from English to five other languages (French, German, Italian, Portuguese, Spanish), and vice versa. The original web page and its instant translation were displayed side by side on the screen. Translating any short text was also possible by copying and pasting it into the interface. The result was far from perfect, but helpful and free. Non-English-speaking users were thrilled. Babel Fish — backed by plurilingual dictionaries with 12.5 million entries — contributed to a plurilingual web.

Babel Fish was developed by Systran (System Translation), a company specializing in automated language solutions. As explained on Systran’s website: “Machine Translation (MT) software translates one natural language into another natural language. MT takes into account the grammatical structure of each language and uses rules to transfer the grammatical structure of the source language (text to be translated) into the target language (translated text). MT cannot replace a human translator, nor is it intended to.”

In “Web embraces language translation”, an article published by ZDNN (ZD Network News) on 21 July 1998, journalist Martha L. Stone explained: “Systran has partnered with AltaVista and reports between 500,000 and 600,000 visitors a day on babelfish.altavista.digital.com, and about 1 million translations per day — ranging from recipes to complete web pages. About 15,000 sites link to Babel Fish, which can translate to and from French, Italian, German, Spanish and Portuguese. The site plans to add Japanese soon. ‘The popularity is simple. With the internet, now there is a way to use U.S. content. All of these contribute to this increasing demand,’ said Dimitros Sabatakakis, group CEO of Systran, speaking from his Paris home.” Babel Fish moved to Yahoo’s website in May 2008, and was replaced by Microsoft’s Bing Translator in May 2012.

Google Translate is a free online translation service launched in October 2007 that instantly translates a section of text, a document or a web page into another language. Users paste a text into the web interface or supply a hyperlink. The machine translation is produced by statistical analysis instead of the traditional rule-based analysis. As stated on its website: “Google Translate can help users understand the general content of a foreign language text, but doesn’t deliver accurate translations.”
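
The statistical approach can be pictured as choosing, for each source phrase, the target phrase with the highest probability learned from large bilingual corpora, rather than applying hand-written rules. The Python sketch below shows a deliberately naive version of this idea; the phrase table and probabilities are invented, and real systems also use language models and reordering models.

```python
# Toy illustration of the statistical (phrase-based) approach: candidate
# translations are scored with probabilities learned from parallel
# corpora instead of hand-written rules. The phrase table is invented.
import math

PHRASE_TABLE = {
    "the house": [("la maison", 0.8), ("le logement", 0.2)],
    "is small": [("est petite", 0.7), ("est petit", 0.3)],
}

def translate(phrases):
    """Greedy decoding: pick the highest-probability target for each phrase."""
    output, log_prob = [], 0.0
    for phrase in phrases:
        target, prob = max(PHRASE_TABLE[phrase], key=lambda c: c[1])
        output.append(target)
        log_prob += math.log(prob)
    return " ".join(output), log_prob

translation, score = translate(["the house", "is small"])
print(translation)                     # -> "la maison est petite"
print(f"log-probability: {score:.2f}")
```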

Prior to this date, Google used a Systran-based translation service, with several stages in language pairs detailed on Wikipedia: (1) English to French, German, and Spanish, and vice versa; (2) English to Portuguese and Dutch, and vice versa; (3) English to Italian, and vice versa; (4) English to simplified Chinese, Japanese and Korean, and vice versa; (5) English to Arabic, and vice versa (in April 2006); (6) English to Russian, and vice versa (in December 2006); (7) English to traditional Chinese, and simplified Chinese to traditional Chinese, and vice versa (in February 2007).

Google Translate’s developments in 2007-10 were: (8) all language pairs previously available, in any language combination (in October 2007); (9) English to Hindi, and vice versa; (10) Bulgarian, Croatian, Czech, Danish, Finnish, Greek, Norwegian, Polish, Romanian, Swedish, with any combination (in May 2008); (11) Catalan, Filipino, Hebrew, Indonesian, Latvian, Lithuanian, Serbian, Slovak, Slovene, Ukrainian, Vietnamese (in September 2008); (12) Albanian, Estonian, Galician, Hungarian, Maltese, Thai, Turkish (in January 2009); (13) Persian (in June 2009); (14) Afrikaans, Belarusian, Icelandic, Irish, Macedonian, Malay, Swahili, Welsh, Yiddish (in August 2009); (15) Haitian Creole (in January 2010); (16) Armenian, Azeri, Basque, Georgian, Urdu (in May 2010); (17) Latin (in October 2010).

Google Translate added a speech feature to read the translated text aloud in 2009, and began offering alternative translations for the same word in 2011. The Google Translator Toolkit was launched in June 2009 as a free web service for (human) translators to edit the translations generated by Google Translate, with English as a source language and 47 target languages. Translators could also share translations, and build up glossaries and translation memories.


About the catalogue of all living languages

The “Ethnologue: Languages of the World” is a reference catalogue that SIL International has published in print approximately every four years since the 1950s. The Ethnologue launched its free online version in 1996, with a full description of the 6,700 living languages spoken in 228 countries.

As explained in January 2000 by Barbara Grimes, editor of the Ethnologue since 1971, in an email interview: “The Ethnologue is a catalogue of the languages of the world, with information about where they are spoken, an estimate of the number of speakers, what language family they are in, alternate names, names of dialects, other sociolinguistic and demographic information, dates of published Bibles, a name index [the Ethnologue Name Index], a language family index [the Ethnologue Language Index], and language maps. (…) Multilingual web pages are more widely useful, but much more costly to maintain. We have had requests for the Ethnologue in a few other languages, but we do not have the personnel or funds to do the translation or maintenance, since it is constantly being updated.”

What exactly is a language? According to the website of the Ethnologue: “How one chooses to define a language depends on the purposes one has in identifying one language as being distinct from another. Some base their definition on purely linguistic grounds, focusing on lexical and grammatical differences. Others may see social, cultural, or political factors as being primary. In addition, speakers themselves often have their own perspectives on what makes a particular language uniquely theirs. Those are frequently related to issues of heritage and identity much more than to the actual linguistic features. In addition, it is important to recognize that not all languages are oral. Sign languages constitute an important class of linguistic varieties that merit consideration.”

The Ethnologue was founded in 1951 by Richard Pittman as a catalogue of minority languages, to share information on language development needs with his colleagues at SIL International (formerly known as the Summer Institute of Linguistics) and with other language researchers worldwide. The catalogue was expanded from minority languages to all known languages of the world in 1971, with the help of thousands of linguists from partner organizations. In 1967-73, Barbara Grimes completed an in-depth revision of the information available for Africa, the Americas, the Pacific and a few countries in Asia. The number of identified languages grew from 4,493 to 6,809, with more information recorded on each language in the computer database created in 1971.

A new edition of the Ethnologue was published approximately every four years until 2012. The Ethnologue has been published every year since 2013, with the online version available before the printed version, to keep up with the fast pace of the internet. The Ethnologue identified 6,700 living languages in 1996, 6,909 living languages in 2009, 7,102 living languages in 2015, and 7,099 living languages in 2017. A paid subscription model for heavy users in high-income countries was introduced in 2016 to sustain the Ethnologue project.

At the invitation of the International Organization for Standardization (ISO) in 2002, SIL International worked on the new standard ISO 639-3 (2007), which reconciled the complete set of three-letter identifiers used in the Ethnologue since the inception of its database in 1971 with the 400 three-letter codes used in the previous standard ISO 639-2 (1998), as well as other three-letter codes developed by the Linguist List for ancient and constructed languages. (The first standard to identify languages was ISO 639-1 (1988), a set of two-letter language codes.)

Approved in 2006 and published in 2007, ISO 639-3 (2007) provides three-letter codes for identifying 7,589 languages (living and extinct, ancient and reconstructed, major and minor, written and unwritten), including sign languages. SIL International was named the registration authority for the inventory of language identifiers, and administers the annual cycle of changes and updates.
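
For illustration, here are a few ISO 639-3 identifiers for languages mentioned in this book, with a minimal lookup helper; the codes themselves are standard three-letter identifiers, but the tiny table and the helper function are mine, not part of the standard.

```python
# A few ISO 639-3 identifiers for languages mentioned in this book.
# The codes are standard; this tiny table and lookup helper are just a
# minimal illustration of how the identifiers are used in practice.
ISO_639_3 = {
    "eng": "English",
    "fra": "French",
    "spa": "Spanish",
    "gla": "Scottish Gaelic",
    "hat": "Haitian Creole",
}

def language_name(code):
    """Return the reference name for an ISO 639-3 code, if known here."""
    return ISO_639_3.get(code.lower(), "unknown code")

print(language_name("gla"))   # -> "Scottish Gaelic"
print(language_name("hat"))   # -> "Haitian Creole"
```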


About minority languages

Guy Antoine, a Haitian-American software developer born in Haiti and living in New York, founded the website Windows on Haiti in April 1998 in his free time to promote Haitian Creole, a French-based creole language spoken not only in Haiti but also in the Dominican Republic, the United States, Canada, and other countries.

Guy Antoine wrote in June 2001 in an email interview: “Who are the Haitian people without Kreyól [Haitian Creole], the language that has evolved and bound various African tribes transplanted in Haiti during the slavery period? It is the most palpable exponent of commonality that defines us as a people. However, it is primarily a spoken language, not a widely written one. I see the web changing this situation more than any traditional means of language dissemination. Our site aims to be a major source of information about Haitian culture, and a tool to counter the persistently negative images of Haiti from the traditional media. The scope of this effort extends beyond mere commentary to the diversity of arts and history, cuisine and music, literature and reminiscences of traditional Haitian life. In short, the site opens some new windows to the culture of Haiti.”

“The primary language of Windows on Haiti is English, but one will equally find a center of lively discussion conducted in Kreyól. In addition, one will find documents related to Haiti in French, in the old colonial Creole, and I am open to publishing others in Spanish and other languages. I do not offer any sort of translation, but multilingualism is alive and well at the site, and I predict that this will increasingly become the norm throughout the web. Kreyól is the only national language of Haiti, and one of its two official languages, the other being French. It is hardly a minority language in the Caribbean context, since it is spoken by eight to ten million people. I have created two discussion forums on my website, held exclusively in Kreyól. One is for general discussions just about everything but obviously more focused on Haiti’s current socio-political problems. The other is reserved only to debates of writing standards for Kreyol. Those debates have been quite spirited with the participation of a number of linguistic experts. The uniqueness of these forums is their non-academic nature.”

According to some linguists, a language dies every 14 days. A good way to counter this trend is to bring language communities together via the internet, to help revitalize these languages through digital technology, and to strengthen the presence of language communities in social media.

Kevin Scannell, a computer scientist and professor at Saint Louis University in Missouri, United States, created Indigenous Tweets in his free time to identify tweets in indigenous and minority languages. He designed An Crúbadán, statistical software that crawls the web to find Twitter threads in these languages. Indigenous Tweets identified 35 languages in March 2011, 71 languages in April 2011, 144 languages in March 2013, and 184 languages in October 2017.
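
Identifying the language of a short text such as a tweet is typically done with character n-gram statistics: each language has a characteristic distribution of letter sequences, and a new text is assigned to the language whose profile it matches best. The Python sketch below shows the general idea with tiny sample snippets; it is not An Crúbadán’s actual code, which relies on much larger web-crawled corpora.

```python
# Toy character-trigram language identifier. The tiny training snippets
# are for illustration only; real systems such as An Crúbadán build
# their statistics from large web-crawled corpora.
from collections import Counter

TRAINING = {
    "gd": "tha mi a' dol dhan sgoil an-diugh",   # Scottish Gaelic
    "ht": "mwen pral lekòl jodi a",              # Haitian Creole
    "en": "i am going to school today",          # English
}

def trigrams(text):
    text = f"  {text.lower()} "                  # pad to capture word boundaries
    return Counter(text[i:i + 3] for i in range(len(text) - 2))

PROFILES = {lang: trigrams(sample) for lang, sample in TRAINING.items()}

def identify(text):
    """Return the language whose trigram profile overlaps most with the text."""
    grams = trigrams(text)
    def overlap(profile):
        return sum(min(count, profile[g]) for g, count in grams.items())
    return max(PROFILES, key=lambda lang: overlap(PROFILES[lang]))

print(identify("tha e a' dol dhan taigh"))       # -> "gd"
```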

Indigenous Tweets’ home page lists all languages identified as being active on Twitter. People click on the corresponding row of the language they are interested in, and are redirected to a new page that lists users in that language (with a maximum of 500 users) and statistics for each user: number of tweets, number of followers, percentage of tweets in the given language (some users tweet in both a global language and a minority language), and date of the latest tweet. The main minority languages were Haitian Creole (with users from the Caribbean, North America and other places), Basque and Welsh. People can also get in touch directly via Twitter. A number of joint projects started this way.

As explained by Kevin Scannell on his blog: “Speakers of indigenous and minority languages around the world are struggling to keep their languages and cultures alive. More and more language groups are turning to the web as a tool for language revitalization, and as a result there are now thousands of people blogging and using social media sites like Facebook and Twitter in their native language. These sites have allowed sometimes scattered communities to connect and use their languages online in a natural way. Social media have also been important in engaging young people, who are the most important demographic in language revitalization efforts. Together we’re breaking down the idea that only global languages like English and French have a place online!”

“The primary aim of Indigenous Tweets is to help build online language communities through Twitter. We hope that the site makes it easier for speakers of indigenous and minority languages to find each other in the vast sea of English, French, Spanish, and other global languages that dominate Twitter. Even speakers of languages like Basque and Welsh with vibrant online communities have been surprised to find just how many people there are tweeting in their language. This is the other goal of Indigenous Tweets: it’s a message to the world that says ‘We are here and we’re proud of our languages’. For languages with just a few users, I hope it inspires some people to start — make your voice heard!”

Kevin Scannell created a second website, Indigenous Blogs, in September 2011 to identify blogs written in indigenous and minority languages, and to offer a similar platform for people to get in touch. He began with blogs hosted by Blogspot (which also hosts his own blog), WordPress and Tumblr. Indigenous Blogs identified blogs in 50 languages in September 2011, blogs in 74 languages in March 2013, and blogs in 85 languages in October 2017.


About endangered languages

UNESCO (United Nations Educational, Scientific and Cultural Organization) launched its free online “Atlas of the World’s Languages in Danger” as a complement to the printed trilingual (English, French, Spanish) edition (3rd edition, 2010) edited by linguist Christopher Moseley. Previous editions of the atlas, in 1996 and 2001, only existed in print.

The online atlas included 2,473 languages in 2010, and 2,464 languages in 2017. It can be searched by country and area, language name, number of speakers from/to, language vitality, and ISO 639-3 code. The names of the 2,464 languages are transcribed into English, French and Spanish, and their alternate names (spelling variants, dialects and names in non-Roman scripts) are also provided.
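
To give a concrete sense of these search criteria, the short Python sketch below filters a handful of invented language records by vitality and number of speakers, roughly the kind of query the online atlas supports; the records and field names are illustrative, not UNESCO’s actual data model.

```python
# Toy filter over invented language records, mimicking the kind of query
# the online atlas allows (by vitality, speaker range, country, ISO code).
# The records and field names are illustrative, not UNESCO's data model.
RECORDS = [
    {"name": "Language A", "country": "Country X",
     "speakers": 59000, "vitality": "definitely endangered", "iso": "aaa"},
    {"name": "Language B", "country": "Country Y",
     "speakers": 800, "vitality": "critically endangered", "iso": "bbb"},
    {"name": "Language C", "country": "Country X",
     "speakers": 250000, "vitality": "vulnerable", "iso": "ccc"},
]

def search(records, vitality=None, min_speakers=0, max_speakers=None, country=None):
    """Return the records matching all the given criteria."""
    results = []
    for rec in records:
        if vitality and rec["vitality"] != vitality:
            continue
        if rec["speakers"] < min_speakers:
            continue
        if max_speakers is not None and rec["speakers"] > max_speakers:
            continue
        if country and rec["country"] != country:
            continue
        results.append(rec)
    return results

for rec in search(RECORDS, vitality="definitely endangered", max_speakers=100000):
    print(rec["name"], rec["iso"])   # -> "Language A aaa"
```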

UNESCO experts have established six degrees (safe, vulnerable, definitely endangered, severely endangered, critically endangered, extinct) to define the vitality or endangerment of a language. (1) “Safe” — not included in the atlas — means that the language is spoken by all generations and that intergenerational transmission is uninterrupted. (2) “Vulnerable” means that most children speak the language, but it may be restricted to certain places, for example at home. (3) “Definitely endangered” means that children no longer learn the language as a mother tongue at home. (4) “Severely endangered” means that the language is spoken by grandparents and older generations; the parent generation may understand it, but doesn’t use it with their children or among themselves. (5) “Critically endangered” means that the youngest speakers are grandparents and older, who speak the language partially and infrequently. (6) “Extinct” means that there are no speakers left; the atlas includes languages that are presumably extinct since the 1950s.

When exactly is a language considered endangered? As explained on the website of the atlas: “A language is endangered when its speakers cease to use it, use it in fewer and fewer domains, use fewer of its registers and speaking styles, and/or stop passing it on to the next generation. No single factor determines whether a language is endangered.” UNESCO experts have identified nine factors to be considered: (1) intergenerational language transmission; (2) absolute number of speakers; (3) proportion of speakers within the total population; (4) shifts in domains of language use; (5) response to new domains and media; (6) availability of materials for language education and literacy; (7) governmental and institutional language attitudes and policies including official status and use; (8) attitudes of community members towards their own language; (9) amount and quality of documentation.

When and why do languages disappear? “A language disappears when its speakers disappear or when they shift to speaking another language — most often, a larger language used by a more powerful group. Languages are threatened by external forces such as military, economic, religious, cultural or educational subjugation, or by internal forces such as a community’s negative attitude towards its own language. Today, increased migration and rapid urbanization often bring along the loss of traditional ways of life and a strong pressure to speak a dominant language that is — or is perceived to be — necessary for full civic participation and economic advancement.”

The “Atlas of the World’s Languages in Danger” classifies Scottish Gaelic as a “definitely endangered” language. According to the 2011 census, there were 59,000 Gaelic speakers in Scotland (just over 1 percent of the population), far fewer than the 200,000 Gaelic speakers (4.5 percent of the population) recorded in the 1901 census.

This has not always been the case. For many centuries, everyone spoke Gaelic in Scotland and Ireland, and scholars disseminated their writings in Gaelic throughout Europe. Over the centuries, English gradually became the dominant language, including on the Scottish Western Isles, despite the presence of Scottish Gaelic as the first community language. The revival of Gaelic culture dates back to the early 19th century, in the form of poetry, prose and music. Between the two world wars, a radio channel began broadcasting the news in Gaelic, and Gaelic started being taught again in schools. Today, more novels are published in Gaelic than at any other time. Radio nan Gàidheal has broadcast in Gaelic since the 1980s, and the TV channel ALBA has offered shows in Gaelic since the early 2000s. Both have a web presence, which has boosted their audience.

A freelance translator from English to Scottish Gaelic, Michael Bauer has worked on several localization projects in his free time, “just for the love of it”, with a fellow localizer who goes only by GunChleoc (“a woman” in Scottish Gaelic) on the web, proof that a few people can do a lot for their language community. The localization projects included the Gaelic versions of the web browser Opera (in 2001), Firefox (Mozilla web browser), Thunderbird (Mozilla messaging), Lightning (Mozilla calendar), Google Chrome, OpenOffice, LibreOffice, the VLC media player, the game Freeciv (an open source game inspired by Civilization), and Accentuate.us (software that automatically inserts accents). Michael Bauer also created the spell checker An Dearbhair Beag with Kevin Scannell. Since 2012, he has worked on a few paid projects with GunChleoc, for example the Gaelic language packs for Microsoft Windows and Microsoft Office.

Wikipedia has its Scottish Gaelic version, named Uicipeid. There are three major online dictionaries in Scottish Gaelic. The first is Stòr-dàta, an online dictionary that is mostly a word list, managed by Sabhal Mòr Ostaig, a college on the Isle of Skye where all courses are taught in Scottish Gaelic. The second is the Dwelly, a Gaelic dictionary published in 1911, which is to Gaelic what the Oxford English Dictionary is to English; its digital version is the result of a ten-year labour of love by Michael Bauer and his colleague Will Robertson. The third is Am Faclair Beag, which means “small dictionary” but is actually a large dictionary offering both the Dwelly data and more modern entries, also created and maintained by Michael Bauer and Will Robertson.

Michael Bauer wrote in October 2015 in an email interview: “There are, sadly, far too few users and there are some aspects which actually actively limit usage. For example, Gaelic schools cannot install Gaelic software because the IT contracts are given by the councils to outside IT companies who only provide English software and operating systems. Because they limit the admin rights of the users at schools, this means it is very difficult to install software which is not on their official ‘list’ and because Gaelic is not mentioned in the contract, they don’t put it there. Free and open software has helped carve out more of a space on the web for Gaelic, and cooperating with commercial long-term partners is helping to produce some very useful enabling technologies such as the predictive texting tool Adaptxt or the upcoming text-to-speech tool with Cereproc.”

“A central storage space for translations would be useful for localization projects, with a shared translation memory, thus avoiding to endlessly retranslate the same terms, phrases and sentence segments. If the translations could be available from the same site, like a meta-Pootle [a community localization server], everyone working for the revival of a minority language on the web would benefit from it. There actually was/is something a bit like that, Ubuntu’s Launchpad, but unfortunately there is not enough coordination between Launchpad and the projects and much effort is going to waste by people working on Launchpad and the translations not going anywhere. There is also AmaGama these days which is something like that but not commonly used apart from some like Mozilla and LibreOffice (I think). Part of the problem is there are so many platforms these days, all trying to carve out a niche… some of them commercial, like Transifex or Crowdin.”


Some questions

* What is the best way to help language revitalization efforts? As an example, to celebrate International Dictionary Day on 16 October 2015, 15 new and revised South African indigenous language dictionaries were published online during the following weeks. Some minority and indigenous languages still need dictionaries, grammars and glossaries. Others even need basic language technologies such as keyboard settings or spell checkers.

* How to transcend the language barrier? How do internet users access information in a language they don’t know, when they really need its full content, and not only the gist of it? Do they rely on Google Translate? Do they hire translators? Do they learn the language, like Alexander Pushkin, who learned Spanish to be able to read the original version of Cervantes’ famed novel “Don Quixote”? Many people are unilingual. Even bilingual, trilingual or plurilingual users can’t know all languages.

* Does the web still need English as its lingua franca? More than half of the web is still in English in 2019, but many people don’t read English, or only get the gist of what they read. Will we see the end of English as a lingua franca to communicate across languages? Do we still need a lingua franca when we have instant translation for text and voice? What do non-native English speakers think about whether English is still needed as a lingua franca?

* How about (human) translators? There would be no multilingual web without translators. Their working conditions have become a major issue, with few people in society at large being aware of it, and their names are often missing from the web pages they spent hours or weeks translating. Most translators now work remotely and have become “invisible”, with precarious employment, lower rates, and the rise of unpaid volunteer translation (including crowdsourced translation) promoted by major organizations that have the necessary funds to hire many professionals, yet hire no professional translators. Much remains to be done to acknowledge again the key role of translators in society.


Timeline

[year-month]
1963: ASCII (American Standard Code for Information Interchange) is the first encoding system, for English (and the Latin alphabet).
1971-07: Michael Hart sends a link to eText #1 to the 100 users of the pre-internet. Project Gutenberg is born.
1974: Vinton Cerf and Robert Kahn invent the communication protocols at the heart of the internet.
1977: The International Federation of Library Associations (IFLA) creates UNIMARC as a common bibliographic format for library catalogues.
1983: After being a network linking U.S. governmental agencies, universities and research centers, the internet starts its progression worldwide.
1984: Copyleft, invented by Richard Stallman, ensures that computer software can be freely used and improved by using a GPL (General Public License).
1990: Tim Berners-Lee invents the World Wide Web and gives his invention to the world.
1991-01: The Unicode Consortium is founded to develop Unicode, a universal encoding system for the processing, storage and interchange of text data in any language.
1992: The Internet Society (ISOC) is founded by Vinton Cerf to promote the development of the internet.
1992: Paul Southworth creates the Etext Archives as a home for electronic texts of any kind.
1992: Projekt Runeberg is the first Swedish online collection of public domain books.
1993: John Mark Ockerbloom creates The Online Books Page to facilitate access to books that are available for free on the internet.
1993-04: ABU-La Bibliothèque Universelle (ABU-The Universal Library) is the first French online collection of public domain books.
1993-06: Adobe launches PDF (Portable Document Format), Acrobat Reader and Adobe Acrobat.
1993-07: John Labovitz creates The E-Zine-List as a list of electronic zines around the world.
1993-11: Mosaic is the first web browser for the general public.
1994: Netscape Navigator is the second web browser for the general public.
1994: Projekt Gutenberg-DE is the first German online collection of public domain books.
1994: Michel Martin creates Travlang as a list of free online translation dictionaries for travelers.
1994: Pierre Perroud creates Athena, the first Swiss online collection of public domain books.
1994-05: Tyler Chambers creates The Human-Languages Page (H-LP) as a catalogue of online linguistic resources.
1994-07: Internet users who are non-native English speakers reach 5 percent.
1994-10: The World Wide Web Consortium (W3C) is founded to develop protocols for the web.
1995: The Worldwide Language Institute (WWLI) launches NetGlos, a multilingual online glossary of internet terminology.
1995: Microsoft launches its web browser Internet Explorer.
1995: Robert Beard creates A Web of Online Dictionaries as a directory of free online dictionaries.
1995: Tyler Chambers creates the Internet Dictionary Project (IDP) as a collaborative project to create free online bilingual dictionaries.
1995-07: Jeff Bezos launches the online bookstore Amazon.com.
1996: The Ethnologue, a reference catalogue for all living languages, launches its free online version.
1996-04: The Internet Archive is founded by Brewster Kahle to archive the web for present and future generations.
1996-04: Robert Ware creates OneLook Dictionaries as a “fast finder” in hundreds of free online dictionaries.
1996-12: Member states of the World Intellectual Property Organization (WIPO) adopt the WIPO Copyright Treaty (WCT).
1997: The Dictionnaire Universel Francophone (French-language Universal Dictionary) from Hachette is freely available online.
1997: The French National Library launches its digital library Gallica.
1997-01: The International Labour Office (ILO) organizes its first symposium on multimedia convergence.
1997-01: Several European national libraries create a common website named Gabriel.
1997-04: There are one million websites worldwide.
1997-05: The British Library launches its first OPAC (Online Public Access Catalog).
1997-08: O’Reilly Japan publishes “For a Multilingual Web” by Yoshi Mikami in Japanese, and translates it into English, German and French the following year.
1997-09: The Internet Bookshop (United Kingdom) starts selling books published in the United States.
1997-12: The Italian translation company Logos puts all its linguistic resources (dictionaries, glossaries, grammars, conjugators) online for free.
1997-12: The search engine AltaVista launches Babel Fish, its free instant online translation service.
1997-12: There are 70 million internet users (1.7 percent of the world population).
1998-07: Internet users who are non-native English speakers reach 20 percent.
1998-10: Amazon creates its first two subsidiaries in the United Kingdom and Germany.
1998-10: The U.S. Digital Millennium Copyright Act (DMCA) is signed, and the Copyright Term Extension Act extends copyright to 70 years after the author’s death.
1999: Michael Kellogg creates WordReference.com to offer free online bilingual dictionaries and discussion forums for linguists.
1999-09: The Open eBook (OeB) is published as a standard format for e-books.
1999-12: Britannica.com is the online version of Encyclopaedia Britannica, available for free and then for a fee.
1999-12: The French Encyclopaedia Universalis creates its online version, available with a paid subscription and some free articles.
2000-01: The wiki becomes popular as a collaborative website.
2000-01: The Million Book Project wants to offer one million free e-books in several languages.
2000-02: Robert Beard co-founds yourDictionary.com as a web portal for dictionaries and other linguistic resources.
2000-03: The Oxford English Dictionary (OED) is available online, with a paid subscription.
2000-03: There are 300 million internet users (5 percent of the world population).
2000-07: Internet users who are non-native English speakers reach 50 percent.
2000-08: Amazon launches its French subsidiary Amazon.fr.
2000-09: Quebec’s GDT (Grand Dictionnaire Terminologique – Large Terminology Dictionary) is a major bilingual French-English online dictionary available for free.
2000-12: IBM’s USB flash drive has a storage capacity of 8 MB, more than five times the capacity of the floppy disk.
2001: Lawrence “Larry” Lessig conceives Creative Commons for authors to be able to share their work on the internet, and for creators to be able to copy and remix it.
2001-01: Jimmy Wales and Larry Sanger create Wikipedia as a free collaborative online encyclopedia.
2001-03: IBM launches the WebSphere Translation Server to handle machine translation on a large scale in eight languages.
2001-04: There are 17 million PDAs and 100,000 e-readers worldwide, according to a Seybold Report.
2001-05: The European Union Copyright Directive (EUCD) extends copyright from 50 to 70 years after the author’s death.
2001-10: The Internet Archive launches the Wayback Machine to give access to the 30 billion web pages archived since 1996.
2002-02: The Budapest Open Access Initiative (BOAI) is signed as the founding text of the open access movement for free access to research literature.
2002-03: The Oxford Reference Online (ORO) is launched as a major online encyclopedia conceived for the web, with a paid subscription.
2002-12: Creative Commons publishes its first licenses.
2003-09: The Massachusetts Institute of Technology (MIT) creates its OpenCourseWare (OCW) to offer all its course materials for free on the web.
2003-10: The Public Library of Science (PLOS) launches its first scientific and medical journals, with all its articles under a Creative Commons license.
2003-12: One million works on the internet use a Creative Commons license.
2004: Tim O’Reilly, founder of O’Reilly Media, coins the term “web 2.0” as the title of a conference he is organizing.
2004-01: The European Library replaces Gabriel as the web portal for European national libraries.
2004-02: Mark Zuckerberg creates Facebook for his fellow students before extending it to the world.
2004-03: The Research Libraries Group (RLG) launches RedLightGreen as the first free online multilingual union catalogue for libraries.
2004-05: The European Union has 20 official languages (instead of 11) after its enlargement.
2004-10: Google launches Google Print, that will become Google Books.
2005-04: The International Digital Publishing Forum (IDPF) replaces the Open eBook Forum.
2005-10: The Internet Archive launches the Open Content Alliance (OCA) to offer a worldwide public digital library in a number of languages.
2005-12: The Massachusetts Institute of Technology (MIT) launches its OpenCourseWare Consortium for other universities to offer their course materials online for free.
2005-12: There are one billion internet users (15.7 percent of the world population).
2006: There are 90 million smartphones and one billion mobile phones worldwide.
2006-06: Twitter is launched as a micro-blogging tool with 140-character messages.
2006-08: Google replaces Google Print with Google Books.
2006-08: OCLC’s union catalogue WorldCat launches its free online version.
2006-10: Microsoft creates Live Search Books before giving its collection of books to the Internet Archive two years later.
2006-11: There are 100 million websites.
2006-12: Gallica, the digital library of the French National Library, offers 90,000 books and 80,000 images.
2007-01: The European Union has 23 official languages instead of 20, with Bulgarian, Irish and Romanian.
2007-02: Creative Commons publishes the versions 3.0 of its licenses, with an international license and compatibility with other licenses (copyleft, GPL and others).
2007-03: Larry Sanger creates Citizendium as a free collaborative online encyclopedia led by experts.
2007-04: yourDictionary.com offers a directory of 2,500 dictionaries and grammars in 300 languages.
2007-06: The European Commission launches the free public version of its multilingual terminology database IATE (InterActive Terminology for Europe).
2007-09: The International Digital Publishing Forum (IDPF) publishes the first version of EPUB to replace the OeB (Open eBook) format.
2007-10: Google launches Google Translate as its free online translation service, after using Systran’s translation service for two years.
2007-12: Unicode (created in 1991) supersedes ASCII (created in 1963) as the main encoding system on the internet.
2008-07: PDF is released as an open standard and published by the International Organization for Standardization as ISO 32000-1:2008.
2008-11: Europeana is launched as the European public digital library.
2009-06: The Google Translator Toolkit is a free web service for (human) translators to edit machine translations produced by Google Translate.
2010-06: Facebook celebrates its 500 million users.
2010-12: 400 million works on the internet use a Creative Commons license.
2011-01: Wikipedia celebrates its tenth anniversary with 17 million articles in 270 languages.
2011-03: There are 2 billion internet users (30.2 percent of the world population).
2011-03: Kevin Scannell creates Indigenous Tweets to identify tweets in minority and indigenous languages.
2013: The Ethnologue, a reference catalogue for all living languages, publishes its free online version before its paid print version.
2013-11: Creative Commons publishes the versions 4.0 of its licenses.
2014-12: 882 million works on the internet use a Creative Commons license.
2015-01: There are 7,102 living languages, according to the Ethnologue.
2015-03: There are 3 billion internet users (42.3 percent of the world population).
2015-04: The Online Books Page gives access to two million free books on the internet.
2015-05: There are one billion websites.
2015-07: Internet users who are non-native English speakers reach 75 percent.


Copyright © 2015-19 Marie Lebert
License CC BY-NC-SA version 4.0
