Tamil All Character Encoding (TACE16) is a scheme for encoding the Tamil script in the Private Use Area of Unicode, implementing a syllabary-based character model differing from the modified-ISCII model used by Unicode's existing Tamil implementation.
The keyboard driver for this encoding scheme is available free of charge on the Tamil Virtual Academy website. It supports the Tamil 99 and Tamil Typewriter keyboard layouts, both approved by the Government of Tamil Nadu, and maps input keystrokes to the corresponding characters of the TACE16 scheme. To read files created using TACE16, corresponding Unicode Tamil fonts are also available on the same website. These fonts provide glyphs not only for the characters of the TACE16 scheme but also for the ASCII range and the Unicode Tamil block, so that they offer backward compatibility for reading existing files created using the Tamil Unicode block.
All the characters of this encoding scheme are located in the private use area of the Basic Multilingual Plane of Unicode's Universal Coded Character Set.
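As a minimal sketch of this placement, the following checks whether a character falls in the BMP Private Use Area (U+E000..U+F8FF, a range defined by the Unicode Standard); the specific TACE16 code assignments within that range are not reproduced here.

```python
def in_bmp_private_use_area(ch: str) -> bool:
    """True if ch lies in the Private Use Area of the Basic
    Multilingual Plane (U+E000..U+F8FF), the range within which
    TACE16 allocates its characters."""
    return 0xE000 <= ord(ch) <= 0xF8FF

print(in_bmp_private_use_area("\uE000"))  # a PUA code point: True
print(in_bmp_private_use_area("க"))       # U+0B95, Tamil block: False
```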
- Syllabograms with irregular glyphs, which inherently need to be handled individually by a font.[a]
- Newly added; not present in Unicode version 6.3.
- Corresponds to a character in the Tamil Supplement block, added in Unicode version 12.0 (2019).
- Allocated for research (natural-language processing).
The existing Unicode character model for Tamil is, like most of Indic Unicode,[b] an abugida-based model derived from ISCII. It has been criticized for several reasons.
Unicode represents only 31 Tamil base characters as single code points, out of 247 grapheme clusters. These include the stand-alone vowels and 23 basic consonant glyphs (which, since they bear no virama, denote a syllable with both a consonant and a vowel when used on their own). The others are represented as sequences of code points, requiring software support for advanced typography features (such as Apple Advanced Typography, Graphite, or OpenType advanced typography) to render correctly. This also requires the use of invisible zero-width joiner and zero-width non-joiner characters in places where the desired grapheme cluster would otherwise be ambiguous. This complexity can result in security vulnerabilities and ambiguous combinations, can require the use of an exception table to forbid invalid combinations of code points, and can necessitate the use of string normalization to compare two strings for equality.
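The multi-code-point representation can be seen directly: the syllable கா ("kā") has no single code point in the Unicode Tamil block, and is instead encoded as the consonant க (U+0B95) followed by the vowel sign ா (U+0BBE).

```python
# One on-screen Tamil syllable, two Unicode code points.
kaa = "\u0B95\u0BBE"  # க + combining vowel sign ா renders as கா
print(len(kaa))                        # 2
print([hex(ord(c)) for c in kaa])      # ['0xb95', '0xbbe']
```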
Additionally, since syllables with both a consonant and a vowel form 64 to 70% of Tamil text, an abugida-based model which encodes the consonant and vowel parts as separate code points is inefficient, in terms of how long a string needs to be to contain a given piece of text, in comparison with a syllabary-based model.
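The length difference can be illustrated with a rough storage comparison for the syllable கா, assuming a hypothetical single PUA code point for it under a TACE16-style syllabary model (the actual TACE16 code point is not shown here; U+E000 is used purely as a placeholder):

```python
unicode_kaa = "\u0B95\u0BBE"  # abugida model: consonant + vowel sign
pua_kaa = "\uE000"            # placeholder: one syllabary code point

# In UTF-16, the two-code-point form needs twice the storage.
print(len(unicode_kaa.encode("utf-16-le")))  # 4 bytes
print(len(pua_kaa.encode("utf-16-le")))      # 2 bytes
```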
Furthermore, ISCII is primarily an encoding of Devanagari, and the ISCII encodings of other Brahmic scripts (including Tamil) encode characters over the code points of the corresponding characters in Devanagari ISCII. Although Unicode encodes the Brahmic scripts separately from one another, the Tamil block mirrors the ISCII layout (with Devanagari-style character ordering, and reserved space in positions corresponding to Devanagari characters with no Tamil equivalent); consequently, the characters are not in the natural sequence order, and strings collated by code point (analogous to "ASCIIbetical" sorting of English text) will not produce the expected sorting order. A complex collation algorithm is therefore required to arrange strings in the natural order.
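One small illustration of the mismatch: in the traditional Tamil order, the Grantha letter ஜ comes after the eighteen Tamil consonants, but its code point (U+0B9C) falls between ச (U+0B9A) and ஞ (U+0B9E) in the Unicode Tamil block, so a plain code-point sort places it mid-alphabet.

```python
# Traditional order would be ச, ஞ, ..., ஜ (Grantha letters last),
# but raw code-point sorting interleaves ஜ with the Tamil consonants.
letters = ["ஞ", "ஜ", "ச"]
print(sorted(letters))  # ['ச', 'ஜ', 'ஞ'] — not the traditional order
```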
The following data compare current Unicode Tamil with TACE16 in e-governance and web-browsing contexts:
TACE16 provides performance improvements in processing time and processing space. It encompasses all general Tamil text; it is sequential; and it is unambiguous, with each code point corresponding to exactly one character. TACE16 takes fewer instruction cycles than Unicode Tamil, and also allows programming based directly on Tamil grammar, which requires extra framework development under Unicode Tamil.
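The kind of extra logic meant here can be sketched as follows: under the Unicode model, even counting the written units of a word requires skipping combining marks, whereas under a syllabary model a plain code-point count would suffice. This is a simplified approximation, not the segmentation a full implementation would use.

```python
import unicodedata

def count_written_units(s: str) -> int:
    """Approximate the number of written units by skipping combining
    marks (general categories Mn and Mc) — extra logic that a
    one-character-per-code-point syllabary model would not need."""
    return sum(1 for c in s if unicodedata.category(c) not in ("Mn", "Mc"))

word = "தமிழ்"  # 5 code points, but 3 written units: த, மி, ழ்
print(len(word))                 # 5
print(count_written_units(word)) # 3
```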
The Unicode Consortium publishes a dedicated FAQ page on the Tamil script which responds to some of the criticisms. In defence of the ISCII model, the Consortium notes that expert linguists, typographers and programmers were involved in its development, but acknowledges that compromises were made due to ISCII being constrained to single-byte extended ASCII. The Consortium points out that Unicode Tamil is now implemented by all major operating systems and web browsers, and maintains that it should be used in open interchange contexts, such as online, since tools such as search engines would not necessarily be able to identify or interpret a sequence of Unicode private-use code points as Tamil text. However, the Consortium does not object to the use of Private-Use Area schemes, including TACE16, internally to particular processes for which they are useful. In particular, it highlights that both markup schemes and alternative encoding schemes may be used by researchers for specialised purposes such as natural-language processing.
Unicode defines normative named-sequences for all Tamil pure consonants and syllables which are represented with sequences of more than one code point, and a dedicated table is published as part of the Unicode Standard listing all of these sequences, in their traditional order, along with their correct glyphs. The Consortium points out that it has been open to accepting proposals for characters for which no existing Unicode representation exists: for example, adding several historical fractions and other symbols as the Tamil Supplement block in version 12.0 in 2019.
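These named sequences are machine-readable: Python's `unicodedata` module, for instance, resolves Unicode named sequences (support added in Python 3.3), so the standard names map to the corresponding multi-code-point strings.

```python
import unicodedata

# Named sequences from the Unicode NamedSequences.txt data file.
kaa = unicodedata.lookup("TAMIL SYLLABLE KAA")  # கா
k = unicodedata.lookup("TAMIL CONSONANT K")     # க் (pure consonant)
print([hex(ord(c)) for c in kaa])  # ['0xb95', '0xbbe']
print([hex(ord(c)) for c in k])    # ['0xb95', '0xbcd']
```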
Regarding collation, the Consortium argues that obtaining the correct result from sorting by code point is the exception rather than the rule, highlighting that, in unmodified ASCIIbetical ordering, the uppercase Latin letter Z sorts before the lowercase letter a, and also highlighting that collation rules often differ by language (see e.g. ö). Regarding space efficiency, the Consortium argues that storage space and bandwidth taken up by text is usually far overshadowed by other accompanying media such as images and video, and that text content performs well under general-purpose compression methods such as ZIP Deflate.
When first published (version 1.0.0), Unicode made only limited stability guarantees. As such, the original Tibetan block was deleted in version 1.0.1 (its space has since been occupied by the Myanmar block), and the original block for Korean syllables was deleted in version 2.0 (its space is now occupied by CJK Unified Ideographs Extension A). Both the current Hangul Syllables block for Korean syllables and the current Tibetan block date back to Unicode 2.0. These deletions were made on the assumption that little or no content using Unicode for those writing systems yet existed, since such a change breaks compatibility with all existing Unicode content in, and input methods for, those writing systems. After this so-dubbed "Korean mess", the responsible committees pledged never again to make such a compatibility-breaking change, a pledge which now forms part of the Unicode Stability Policy.
This stability policy has been upheld ever since, in spite of demands to re-encode or change the character model for both Tibetan and Korean a second time, made by China and North Korea respectively. Likewise in relation to Tamil, the Consortium emphasises the "crucial issue of maintaining the stability of the standard for existing implementations", and argues that "the resulting costs and impact of destabilizing the standard" would substantially outweigh any efficiency benefits in processing speed or storage space.
A proposal to re-encode Tamil was rejected by Unicode, which stated that re-encoding would be damaging and that there was no convincing evidence that the existing Unicode Tamil encoding is deficient.
The Open-Tamil project provides many of the common Tamil text-processing operations. It claims Level-1 compliance for Tamil text processing without using TACE16, though it is built on the extra programming logic that Unicode Tamil requires.