Mojibake


Mojibake is garbled text resulting from text being decoded using an unintended character encoding. The result is a systematic replacement of symbols with completely unrelated ones, often from a different writing system.
This display may include the generic replacement character ("�") in places where the binary representation is considered invalid. A replacement can also involve multiple consecutive symbols, as viewed in one encoding, when the same binary code constitutes one symbol in the other encoding. This is either because of differing constant-length encodings (as in Asian 16-bit encodings versus European 8-bit encodings), or the use of variable-length encodings (notably UTF-8 and UTF-16).
Failed rendering of glyphs due to either missing fonts or missing glyphs in a font is a different issue that is not to be confused with mojibake. Symptoms of this failed rendering include blocks with the code point displayed in hexadecimal or using the generic replacement character. Importantly, these replacements are valid and are the result of correct error handling by the software.

Etymology

Mojibake means "character transformation" in Japanese. The word is composed of 文字, "character" and 化け, "transform".

Causes

To correctly reproduce the original text that was encoded, the correspondence between the encoded data and the notion of its encoding must be preserved. Since mojibake is an instance of mismatch between these, it can be produced by manipulating the data itself or by merely relabeling it.
Mojibake is often seen with text data that have been tagged with a wrong encoding; it may not even be tagged at all, but moved between computers with different default encodings. A major source of trouble are communication protocols that rely on settings on each computer rather than sending or storing metadata together with the data.
The differing default settings between computers are in part due to differing deployments of Unicode among operating system families, and partly the legacy encodings' specializations for different writing systems of human languages. Whereas Linux distributions mostly switched to UTF-8 in 2004, Microsoft Windows still uses codepages for text files that differ between languages.
For some writing systems, an example being Japanese, several encodings have historically been employed, causing users to see mojibake relatively often. As a Japanese example, the word mojibake "文字化け" stored as EUC-JP might be incorrectly displayed as "ハクサ�ス、ア", "ハクサ嵂ス、ア", or "ハクサ郾ス、ア". The same text stored as UTF-8 is displayed as "譁�蟄怜喧縺�" if interpreted as Shift JIS. This is further exacerbated if other locales are involved: the same UTF-8 text appears as "æ–‡å—åŒ–ã�‘" in software that assumes text to be in the Windows-1252 or ISO-8859-1 encodings, usually labelled Western, or as "鏂囧瓧鍖栥亼" if interpreted as being in a GBK locale.
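These misreadings are easy to reproduce; a minimal sketch in Python, using errors="replace" to stand in for a viewer that shows the generic replacement character for invalid sequences:

```python
# Encode the word in one encoding, then decode the raw bytes as another.
text = "文字化け"
utf8_bytes = text.encode("utf-8")  # e6 96 87 e5 ad 97 e5 8c 96 e3 81 91

print(utf8_bytes.decode("shift_jis", errors="replace"))     # 譁�蟄怜喧縺�
print(utf8_bytes.decode("windows-1252", errors="replace"))  # æ–‡å—åŒ–ã�‘ (0xAD decodes to an invisible soft hyphen)
print(utf8_bytes.decode("gbk", errors="replace"))           # 鏂囧瓧鍖栥亼
```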

Underspecification

If the encoding is not specified, it is up to the software to decide it by other means. Depending on the type of software, the typical solution is either configuration or charset detection heuristics. Both are prone to mispredicting in fairly common scenarios.
The encoding of text files is affected by locale setting, which depends on the user's language, brand of operating system and possibly other conditions. Therefore, the assumed encoding is systematically wrong for files that come from a computer with a different setting, or even from a differently localized software within the same system. For Unicode, one solution is to use a byte order mark, but for source code and other machine readable text, many parsers don't tolerate this. Another is storing the encoding as metadata in the file system. File systems that support extended file attributes can store this as user.charset. This also requires support in software that wants to take advantage of it, but does not disturb other software.
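A hedged sketch of the extended-attribute approach, assuming Linux and a file system with xattr support (the attribute name user.charset is the one mentioned above):

```python
import os

path = "notes.txt"

# Write UTF-8 data, then label it with its encoding as metadata.
with open(path, "w", encoding="utf-8") as f:
    f.write("naïve café")
os.setxattr(path, "user.charset", b"utf-8")

# A cooperating reader honours the label instead of guessing.
charset = os.getxattr(path, "user.charset").decode("ascii")
with open(path, "r", encoding=charset) as f:
    print(f.read())  # naïve café
```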
While a few encodings are easy to detect, in particular UTF-8, there are many that are hard to distinguish. A web browser may not be able to distinguish a page coded in EUC-JP from another in Shift-JIS if the coding scheme is not assigned explicitly using HTTP headers sent along with the documents, or using the HTML document's meta tags, which substitute for missing HTTP headers when the server cannot be configured to send them; see character encodings in HTML.
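Heuristic detection is what libraries such as the third-party chardet package provide; a small sketch (the result is a statistical guess, not a certainty):

```python
import chardet  # pip install chardet

data = "文字化けテスト".encode("euc_jp")
print(chardet.detect(data))
# e.g. {'encoding': 'EUC-JP', 'confidence': 0.99, 'language': 'Japanese'}
```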

Mis-specification

Mojibake also occurs when the encoding is wrongly specified. This often happens between encodings that are similar. For example, the Eudora email client for Windows was known to send emails labelled as ISO-8859-1 that were in reality Windows-1252. The Mac OS version of Eudora did not exhibit this behaviour. Windows-1252 contains extra printable characters in the C1 range, that were not displayed properly in software complying with the ISO standard; this especially affected software running under other operating systems such as Unix.

Human ignorance

Of the encodings still in use, many are partially compatible with each other, with ASCII as the predominant common subset. This sets the stage for human ignorance:
When there are layers of protocols, each trying to specify the encoding based on different information, the least certain information may be misleading to the recipient.
For example, consider a web server serving a static HTML file over HTTP. The character set may be communicated to the client in any of three ways: in the HTTP Content-Type header; in the file itself, via an HTML meta tag or the encoding attribute of an XML declaration; or in the file as a byte order mark. When these layers disagree, a client has to rank them, as sketched below.
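A hypothetical sketch of that ranking (the function and its default are illustrative, not a standard API): the header outranks the in-document declaration, and a locale-dependent default, the layer most likely to be wrong for foreign documents, is the last resort.

```python
# Illustrative resolution order for a web client: HTTP header first,
# then the document's own declaration, then a configured default.
def pick_encoding(http_charset=None, meta_charset=None, default="windows-1252"):
    return http_charset or meta_charset or default

print(pick_encoding(meta_charset="utf-8"))   # utf-8
print(pick_encoding("iso-8859-1", "utf-8"))  # iso-8859-1 (header wins)
print(pick_encoding())                       # windows-1252 (pure guess)
```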
Much older hardware is typically designed to support only one character set, and the character set typically cannot be altered. The character table contained within the display firmware is localized to have characters for the country the device is to be sold in, and typically the table differs from country to country. As such, these systems will potentially display mojibake when loading text generated on a system from a different country. Likewise, many early operating systems do not support multiple encoding formats and thus will end up displaying mojibake if made to display non-standard text. Early versions of Microsoft Windows and Palm OS, for example, are localized on a per-country basis and only support encoding standards relevant to the country the localized version is sold in; they will display mojibake if a file containing text in a different encoding format from the one the OS is designed to support is opened.

Resolutions

Applications using UTF-8 as a default encoding may achieve a greater degree of interoperability because of its widespread use and backward compatibility with US-ASCII. UTF-8 also has the ability to be directly recognised by a simple algorithm, so that well written software should be able to avoid mixing UTF-8 up with other encodings.
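A minimal sketch of that recognition: strict UTF-8 validation rejects nearly all text in legacy eight-bit encodings.

```python
def looks_like_utf8(data: bytes) -> bool:
    # UTF-8's lead/continuation byte structure is strict enough that
    # legacy-encoded text almost never validates by accident.
    try:
        data.decode("utf-8")
        return True
    except UnicodeDecodeError:
        return False

print(looks_like_utf8("kärlek".encode("utf-8")))    # True
print(looks_like_utf8("kärlek".encode("latin-1")))  # False: 0xE4 0x72 is invalid
```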
The difficulty of resolving an instance of mojibake varies depending on the application within which it occurs and the causes of it. Two of the most common applications in which mojibake may occur are web browsers and word processors. Modern browsers and word processors often support a wide array of character encodings. Browsers often allow a user to change their rendering engine's encoding setting on the fly, while word processors allow the user to select the appropriate encoding when opening a file. It may take some trial and error for users to find the correct encoding.
The problem gets more complicated when it occurs in an application that normally does not support a wide range of character encodings, such as a non-Unicode computer game. In this case, the user must change the operating system's encoding settings to match that of the game. However, changing the system-wide encoding settings can also cause mojibake in pre-existing applications. In Windows XP or later, a user also has the option to use Microsoft AppLocale, an application that allows the changing of per-application locale settings. Even so, changing the operating system encoding settings is not possible on earlier operating systems such as Windows 98; to resolve this issue on earlier operating systems, a user would have to use third-party font rendering applications.

Problems in different writing systems

English

Mojibake in English texts generally occurs in punctuation, such as em dashes, en dashes, and curly quotes, but rarely in character text, since most encodings agree with ASCII on the encoding of the English alphabet. For example, the pound sign "£" will appear as "Â£" if it was encoded by the sender as UTF-8 but interpreted by the recipient as CP1252 or ISO 8859-1. If iterated using CP1252, this can lead to "Ã‚Â£", "Ãƒâ€šÃ‚Â£", and so on.
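The iteration is reproducible in a couple of lines; a sketch:

```python
# Each pass encodes the already-mangled text as UTF-8 and misreads the
# bytes as Windows-1252, growing the junk prefix every time.
s = "£"
for _ in range(3):
    s = s.encode("utf-8").decode("windows-1252")
    print(s)  # Â£, then Ã‚Â£, then Ãƒâ€šÃ‚Â£
```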
In older eras, some computers had vendor-specific encodings that caused mismatches for English text as well.
Commodore brand 8-bit computers used PETSCII encoding, particularly notable for inverting the upper and lower case compared to standard ASCII. PETSCII printers worked fine on other computers of the era, but flipped the case of all letters. IBM mainframes use the EBCDIC encoding which does not match ASCII at all.

Other Western European languages

The alphabets of the North Germanic languages, Catalan, Finnish, German, French, Portuguese and Spanish are all extensions of the Latin alphabet. The additional characters, for example å, ä, ö, é, è, ç, ñ, ã and ü, together with their uppercase counterparts where applicable, are typically the ones that become corrupted, making texts only mildly unreadable with mojibake.
These are languages for which the ISO-8859-1 character set has been in use. However, ISO-8859-1 has been obsoleted by two competing standards: the backward-compatible Windows-1252 and the slightly altered ISO-8859-15. Both add the Euro sign € and the French œ, but otherwise any confusion of these three character sets does not create mojibake in these languages. Furthermore, it is always safe to interpret ISO-8859-1 as Windows-1252, and fairly safe to interpret it as ISO-8859-15, in particular with respect to the Euro sign, which replaces the rarely used currency sign ¤. However, with the advent of UTF-8, mojibake has become more common in certain scenarios, e.g. exchange of text files between UNIX and Windows computers, due to UTF-8's incompatibility with Latin-1 and Windows-1252. Since UTF-8 can be directly recognised by a simple algorithm, well-written software should avoid mixing UTF-8 up with other encodings, so such mojibake was most common while software lacking UTF-8 support remained widespread. Most of these languages were supported by MS-DOS's default CP437 and other machine-default encodings, except ASCII, so problems when buying an operating system version were less common; Windows and MS-DOS encodings are not compatible, however.
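The "always safe" direction is easy to check; a sketch:

```python
# Printable ISO-8859-1 bytes decode identically under Windows-1252; the
# two differ only in the rarely used C1 control range (0x80-0x9F).
data = "café señor £5".encode("iso-8859-1")
assert data.decode("windows-1252") == data.decode("iso-8859-1")
```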
In Swedish, Norwegian, Danish and German, vowels are rarely repeated, and it is usually obvious when one character gets corrupted, e.g. the second letter in "kÃ¤rlek" (kärlek, "love"). This way, even though the reader has to guess between å, ä and ö, almost all texts remain legible. Finnish text, on the other hand, does feature repeating vowels in words like hääyö ("wedding night"), which can sometimes render text very hard to read. Icelandic and Faroese have ten and eight possibly confounding characters, respectively, which thus can make it more difficult to guess corrupted characters; Icelandic words like þjóðlöð become almost entirely unintelligible when rendered as "Ã¾jÃ³Ã°lÃ¶Ã°".
In German, Buchstabensalat ("letter salad") is a common term for this phenomenon, and in Spanish, deformación ("deformation").
Some users transliterate their writing when using a computer, either by omitting the problematic diacritics, or by using digraph replacements. Thus, an author might write "ueber" instead of "über", which is standard practice in German when umlauts are not available. The latter practice seems to be better tolerated in the German language sphere than in the Nordic countries. For example, in Norwegian, digraphs are associated with archaic Danish, and may be used jokingly. However, digraphs are useful in communication with other parts of the world. As an example, the Norwegian football player Ole Gunnar Solskjær had his name spelled "SOLSKJAER" on his back when he played for Manchester United.
An artifact of UTF-8 misinterpreted as ISO-8859-1, "Ring meg nÃ¥" (intended: "Ring meg nå", "Call me now"), was seen in an SMS scam raging in Norway in June 2014.

Central and Eastern European

Users of Central and Eastern European languages can also be affected. Because most computers were not connected to any network during the mid- to late-1980s, there were different character encodings for every language with diacritical characters, often also varying by operating system.

Hungarian

Hungarian is another affected language. It uses the 26 basic English letters plus the accented forms á, é, í, ó, ú, ö, ü, plus the two characters ő and ű, which are not in Latin-1. These two characters can be correctly encoded in Latin-2, Windows-1250 and Unicode. Before Unicode became common in e-mail clients, e-mails containing Hungarian text often had the letters ő and ű corrupted, sometimes to the point of unrecognizability. It is common to respond to an e-mail rendered unreadable by character mangling with the phrase "Árvíztűrő tükörfúrógép" ("flood-resistant mirror-drilling machine"), a nonsense phrase containing all accented characters used in Hungarian.
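A sketch of why exactly those two letters suffer: ő and ű occupy Latin-2 code points that Latin-1 assigns to õ and û, while Hungarian's other accented letters coincide in both.

```python
# á, é, í, ó, ú, ö, ü share code points in ISO 8859-1 and ISO 8859-2;
# only ő (0xF5) and ű (0xFB) decode differently.
data = "Árvíztűrő tükörfúrógép".encode("iso8859_2")
print(data.decode("iso8859_1"))  # Árvíztûrõ tükörfúrógép
```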

Polish

Prior to the creation of ISO 8859-2 in 1987, users of various computing platforms used their own character encodings, such as AmigaPL on Amiga, Atari Club on Atari ST, and Mazovia, IBM CP852, and Windows CP1250 on IBM PCs. Polish companies selling early DOS computers created their own mutually incompatible ways to encode Polish characters and simply reprogrammed the EPROMs of the video cards to provide hardware code pages with the needed glyphs for Polish, arbitrarily located without reference to where other computer sellers had placed them.
The situation began to improve when, after pressure from academic and user groups, ISO 8859-2 succeeded as the "Internet standard" with limited support of the dominant vendors' software. With the numerous problems caused by the variety of encodings, even today some users tend to refer to Polish diacritical characters as krzaczki ("little shrubs").

Russian and other Cyrillic alphabets

Mojibake may be colloquially called krakozyabry (кракозябры) in Russian, where text display was and remains complicated by several systems for encoding Cyrillic. The Soviet Union and early Russian Federation developed KOI encodings. This began with the Cyrillic-only 7-bit KOI7, based on ASCII but with Latin and some other characters replaced with Cyrillic letters. Then came the 8-bit KOI8 encoding, an ASCII extension that encodes Cyrillic letters only with high-bit-set octets corresponding to the 7-bit codes from KOI7. It is for this reason that KOI8 text, even Russian, remains partially readable after stripping the eighth bit, which was considered a major advantage in the age of 8BITMIME-unaware email systems. For example, the words "Школа русского языка" (shkola russkogo yazyka, "school of the Russian language"), encoded in KOI8 and then passed through the high-bit stripping process, end up rendered as "{KOLA RUSSKOGO QZYKA". In Serbian, mojibake is called đubre, meaning "trash". Unlike the former USSR, South Slavs never used something like KOI8, and Code Page 1251 was the dominant Cyrillic encoding there before Unicode. Therefore, these languages experienced fewer encoding incompatibility troubles than Russian. In the 1980s, Bulgarian computers used their own MIK encoding, which is superficially similar to CP866.
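The bit-stripping effect can be reproduced with Python's KOI8-R codec; a sketch:

```python
# Clearing bit 7 maps each KOI8 Cyrillic letter onto the Latin letter
# sharing its low seven bits, leaving a phonetic transliteration.
data = "Школа русского языка".encode("koi8_r")
stripped = bytes(b & 0x7F for b in data)
print(stripped.decode("ascii"))  # {KOLA RUSSKOGO QZYKA
```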

Yugoslav languages

Croatian, Bosnian, Serbian and Slovenian add to the basic Latin alphabet the letters š, đ, č, ć, ž, and their capital counterparts Š, Đ, Č, Ć, Ž. All of these letters are defined in Latin-2 and Windows-1250, while only some exist in the usual OS-default Windows-1252, and are there because of some other languages.
Although mojibake can occur with any of these characters, the letters that are not included in Windows-1252 are much more prone to errors. Thus, even nowadays, "šđčćž ŠĐČĆŽ" is often displayed as "šðèæž ŠÐÈÆŽ", although ð, è, æ, È, Æ are never used in Slavic languages.
When confined to basic ASCII, common replacements are: š→s, đ→dj, č→c, ć→c, ž→z. All of these replacements introduce ambiguities, so reconstructing the original from such a form is usually done manually if required.
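A sketch of such a fallback as a simple mapping table; the lossiness is visible in that both č and ć collapse to c:

```python
# ASCII fallback replacements; đ→dj is the only multi-character one.
fallback = str.maketrans({"š": "s", "đ": "dj", "č": "c", "ć": "c", "ž": "z",
                          "Š": "S", "Đ": "Dj", "Č": "C", "Ć": "C", "Ž": "Z"})
print("šđčćž ŠĐČĆŽ".translate(fallback))  # sdjccz SDjCCZ
```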
The Windows-1252 encoding is important because the English versions of the Windows operating system are more widespread than localized ones. The reasons for this include a relatively small and fragmented market, which increases the price of high-quality localization; a high degree of software piracy, which discourages localization efforts; and people preferring English versions of Windows and other software.
The drive to differentiate Croatian from Serbian, Bosnian from Croatian and Serbian, and now even Montenegrin from the other three creates many problems. There are many different localizations, using different standards and of different quality. There are no common translations for the vast amount of computer terminology originating in English. In the end, people use adopted English words, and those unaccustomed to the translated terms may not understand what some option in a menu is supposed to do based on the translated phrase. Therefore, people who understand English, as well as those who are accustomed to English terminology, regularly choose the original English versions of non-specialist software.
When Cyrillic script is used, the problem is similar to that of other Cyrillic-based scripts.
Newer versions of English Windows allow the code page to be changed, but this setting can be and often was incorrectly set. For example, Windows 98/Me can be set to most non-right-to-left single-byte code pages including 1250, but only at install time.

Caucasian languages

The writing systems of certain languages of the Caucasus region, including the scripts of Georgian and Armenian, may produce mojibake. This problem is particularly acute in the case of ArmSCII (also written ARMSCII), a set of obsolete character encodings for the Armenian alphabet which have been superseded by Unicode standards. ArmSCII is not widely used because of a lack of support in the computer industry; for example, Microsoft Windows does not support it.

Asian encodings

Another type of mojibake occurs when text is erroneously parsed in a multi-byte encoding, such as one of the encodings for East Asian languages. With this kind of mojibake, more than one character is corrupted at once, e.g. "k舐lek" for kärlek in Swedish, where "är" is parsed as "舐". Compared to the above mojibake, this is harder to read, since letters unrelated to the problematic å, ä or ö are missing too, and it is especially problematic for short words starting with å, ä or ö, such as "än". Since two letters are combined, the mojibake also seems more random. In some rare cases, an entire text string which happens to include a pattern of particular word lengths, such as the sentence "Bush hid the facts", may be misinterpreted.
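The Swedish example can be reproduced by decoding Latin-1 bytes as Shift JIS; a sketch:

```python
# The Latin-1 bytes for "är" (0xE4 0x72) happen to form one valid
# Shift JIS pair, so a single wrong symbol swallows two letters.
data = "kärlek".encode("latin-1")
print(data.decode("shift_jis"))  # k舐lek
```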

Japanese

In Japanese, the phenomenon is, as mentioned, called mojibake. It is a particular problem in Japan due to the numerous different encodings that exist for Japanese text. Alongside Unicode encodings like UTF-8 and UTF-16, there are other standard encodings, such as Shift-JIS and EUC-JP. Mojibake, as well as being encountered by Japanese users, is also often encountered by non-Japanese when attempting to run software written for the Japanese market.

Chinese

In Chinese, the same phenomenon is called luànmǎ (乱码, "chaotic code"), and can occur when computerised text is encoded in one Chinese character encoding but is displayed using the wrong encoding. When this occurs, it is often possible to fix the issue by switching the character encoding without loss of data. The situation is complicated by the existence of several Chinese character encoding systems in use, the most common ones being Unicode, Big5, and Guobiao, and the possibility of Chinese characters being encoded using Japanese encodings.
It is easy to identify the original encoding when luànmǎ occurs in Guobiao encodings:
Big5 viewed as GB: the original text 三國志11威力加強版 renders largely as blank or undisplayable characters, with occasional Chinese characters; many of the garbled characters that do appear are Private Use characters.
Shift-JIS viewed as GB: the original text 文字化けテスト renders as 暥帤壔偗僥僗僩. Kana are displayed as characters with the radical 亻, while kanji become other characters, most of them extremely uncommon and not in practical use in modern Chinese.
EUC-KR viewed as GB: the original text 디제이맥스 테크니카 renders as 叼力捞钙胶 抛农聪墨, random common Simplified Chinese characters which in most cases make no sense, easily identifiable because of the spaces between every few characters.
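The Shift-JIS case above can be reproduced directly; a sketch (errors="replace" guards against any unassigned byte pairs):

```python
# Shift JIS bytes for Japanese text form valid GBK lead/trail pairs,
# yielding rare, unrelated Chinese characters.
data = "文字化けテスト".encode("shift_jis")
print(data.decode("gbk", errors="replace"))  # 暥帤壔偗僥僗僩
```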

An additional problem is caused when encodings are missing characters, which is common with rare or antiquated characters that are still used in personal or place names. Examples are Taiwanese politicians Wang Chien-shien's "煊" and Yu Shyi-kun's "堃" and singer David Tao's "喆" missing in Big5, ex-PRC Premier Zhu Rongji's "镕" missing in GB2312, and the copyright symbol "©" missing in GBK.
Newspapers have dealt with this problem in various ways, including using software to combine two existing, similar characters; using a picture of the personality; or simply substituting a homophone for the rare character in the hope that the reader would be able to make the correct inference.

Indic text

A similar effect can occur in Brahmic or Indic scripts of South Asia, used in such Indo-Aryan or Indic languages as Hindustani, Bengali, Punjabi, Marathi, and others, even if the character set employed is properly recognized by the application. This is because, in many Indic scripts, the rules by which individual letter symbols combine to create symbols for syllables may not be properly understood by a computer missing the appropriate software, even if the glyphs for the individual letter forms are available.
A particularly notable example of this is the old Wikipedia logo, which attempts to show the character analogous to "wi" on each of many puzzle pieces. The puzzle piece meant to bear the Devanagari character for "wi" instead used to display the "wa" character followed by an unpaired "i" modifier vowel, easily recognizable as mojibake generated by a computer not configured to display Indic text. The logo as redesigned in 2010 fixed these errors.
The idea of plain text requires the operating system to provide a font to display Unicode text. This font differs from OS to OS for Sinhala, and some letters receive orthographically incorrect glyphs on all operating systems. For instance, the "reph", the short form for "r", is a diacritic that normally goes on top of a plain letter. However, it is wrong for it to go on top of some letters like "ya" or "la", yet this happens on all operating systems. This appears to be a fault of the fonts' internal programming. On Macintosh and iPhone, the muurdhaja l and "u" combination and its long form both yield wrong shapes.
Some Indic and Indic-derived scripts, most notably Lao, were not officially supported by Windows XP until the release of Vista. However, various sites have made free-to-download fonts.

Myanmar / Burmese

Due to Western sanctions and the late arrival of Burmese language support in computers, much of the early Burmese localization was homegrown, without international cooperation. The prevailing means of Burmese support is the Zawgyi font, a font that was created as a Unicode font but was only partially Unicode compliant. In the Zawgyi font, some code points for the Burmese script were implemented as specified in Unicode, but others were not. The Unicode Consortium refers to this as ad hoc font encodings. With the advent of mobile phones, mobile vendors such as Samsung and Huawei simply replaced the Unicode-compliant system fonts with Zawgyi versions.
These ad hoc font encodings effectively introduced mojibake into the Unicode text corpus: users of Zawgyi and users of Unicode would each see the other's messages as garbled text. To get around this issue, content producers would make posts in both Zawgyi and Unicode. The Myanmar government designated 1 October 2019 as "U-Day", the official switch to Unicode, with the full transition estimated to take two years.

African languages

In certain writing systems of Africa, unencoded text is unreadable. Texts that may produce mojibake include those from the Horn of Africa such as the Ge'ez script in Ethiopia and Eritrea, used for Amharic, Tigre, and other languages, and the Somali language, which employs the Osmanya alphabet. In Southern Africa, the Mwangwego alphabet is used to write languages of Malawi and the Mandombe alphabet was created for the Democratic Republic of the Congo, but these are not generally supported. Various other writing systems native to West Africa present similar problems, such as the N'Ko alphabet, used for Manding languages in Guinea, and the Vai syllabary, used in Liberia.

Arabic

Another affected language is Arabic. The text becomes unreadable when the encodings do not match.

Examples

The examples in this article do not use UTF-8 as the assumed browser encoding, because UTF-8 is easily recognisable: if a browser supports UTF-8, it should recognise it automatically, and not try to interpret something else as UTF-8.