Character Model for the World Wide Web: String Matching and Searching
Abstract
This document builds upon Character Model for the World Wide Web 1.0: Fundamentals [CHARMOD] to provide authors of specifications, software developers, and content developers a common reference on string identity matching on the World Wide Web and thereby increase interoperability.
Status of This Document
This section describes the status of this document at the time of its publication. Other documents may supersede this document. A list of current W3C publications and the latest revision of this technical report can be found in the W3C technical reports index at http://www.w3.org/TR/.
Note
This version of the document represents a significant change from the earlier editions. Much of the content is changed and the recommendations are significantly altered. This fact is reflected in a change to the name of the document from "Character Model: Normalization".
Note
Sending comments on this document
If you wish to make comments regarding this document, please raise them as github issues against the latest dated version in /TR. Only send comments by email if you are unable to raise issues on github (see links below). All comments are welcome.
To make it easier to track comments, please raise separate issues or emails for each comment, and point to the section you are commenting on using a URL for the dated version of the document.
This document was published by the Internationalization Working Group as a Working Draft. If you wish to make comments regarding this document, please send them to www-international@w3.org (subscribe, archives). All comments are welcome.
Publication as a Working Draft does not imply endorsement by the W3C Membership. This is a draft document and may be updated, replaced or obsoleted by other documents at any time. It is inappropriate to cite this document as other than work in progress.
This document was produced by a group operating under the 5 February 2004 W3C Patent Policy. The group does not expect this document to become a W3C Recommendation. W3C maintains a public list of any patent disclosures made in connection with the deliverables of the group; that page also includes instructions for disclosing a patent. An individual who has actual knowledge of a patent which the individual believes contains Essential Claim(s) must disclose the information in accordance with section 6 of the W3C Patent Policy.
This document is governed by the 1 September 2015 W3C Process Document.
Table of Contents
- 1. Introduction
- 2. The String Matching Problem
- 2.1 Case Folding
- 2.2 Unicode Normalization
* 2.2.1 Canonical vs. Compatibility Equivalence
* 2.2.2 Composition vs. Decomposition
* 2.2.3 Unicode Normalization Forms
* 2.2.4 Limitations of Normalization
- 2.3 Character Escapes
- 2.4 Unicode Controls and Invisible Markers
- 2.5 Legacy Character Encodings
- 2.6 Other Types of Equivalence
- 3. String Matching of Syntactic Content in Document Formats and Protocols
- 3.1 The Matching Algorithm
- 3.2 Converting to a Common Unicode Form
* 3.2.1 Choice of Normalization Form
* 3.2.2 Requirements for Resources
* 3.2.3 Non-Normalizing Specification Requirements
* 3.2.4 Unicode Normalizing Specification Requirements
- 3.3 Expanding Character Escapes and Includes
- 3.4 Handling Case Folding
* 3.4.1 Requirements for Specifications
* 3.4.2 Non-Normalizing Specification Requirements
- 3.5 Handling Unicode Controls and Invisible Markers
- 4. String Searching in Natural Language Content
- 5. Changes Since the Last Published Version
- 6. Acknowledgements
- A. References
1. Introduction
1.1 Goals and Scope
The goal of the Character Model for the World Wide Web is to facilitate use of the Web by all people, regardless of their language, script, writing system, and cultural conventions, in accordance with the W3C goal of universal access. One basic prerequisite to achieve this goal is to be able to transmit and process the characters used around the world in a well-defined and well-understood way.
Note
This document builds on Character Model for the World Wide Web: Fundamentals [CHARMOD]. Understanding the concepts in that document is important to being able to understand and apply this document successfully.
This part of the Character Model for the World Wide Web covers string matching—the process by which a specification or implementation defines whether two string values are the same or different from one another. It describes the ways in which texts that are semantically equivalent can be encoded differently and the impact this has on matching operations important to formal languages (such as those used in the formats and protocols that make up the Web). Finally, it discusses the problem of substring searching within documents.
The main target audience of this specification is W3C specification developers. This specification and parts of it can be referenced from other W3C specifications and it defines conformance criteria for W3C specifications, as well as other specifications.
Other audiences of this specification include software developers, content developers, and authors of specifications outside the W3C. Software developers and content developers implement and use W3C specifications. This specification defines some conformance criteria for implementations (software) and content that implement and use W3C specifications. It also helps software developers and content developers to understand the character-related provisions in W3C specifications.
The character model described in this specification provides authors of specifications, software developers, and content developers with a common reference for consistent, interoperable text manipulation on the World Wide Web. Working together, these three groups can build a globally accessible Web.
1.2 Structure of this Document
This document defines two basic building blocks for the Web related to this problem. First, it defines rules and processes for String Identity Matching in document formats. These rules are designed for the identifiers and structural markup (syntactic content) used in document formats to ensure consistent processing of each and are targeted at specification writers. Second, it defines broader guidelines for handling user-visible text, such as the natural language text that forms most of the content of the Web. This section is targeted at implementers.
This document is divided into three main sections.
The first section lays out the problems involved in string matching, describes the effects of Unicode and case folding on these problems, and outlines the various issues and normalization mechanisms that might be used to address them.
The second section provides requirements and recommendations for string identity matching for use in formal languages, such as many of the document formats defined in W3C Specifications. It is primarily concerned with making the Web functional and providing document authors with consistent results.
The third section discusses considerations for the handling of content by implementations, such as browsers or text editors on the Web. It mainly relates to how and why to preserve the author's original sequences and how to search or find content in natural language text.
1.3 Background
This section provides some historical background on the topics addressed in this specification.
At the core of the character model is the Universal Character Set (UCS), defined jointly by the Unicode Standard [Unicode] and ISO/IEC 10646 [ISO10646]. In this document, Unicode is used as a synonym for the Universal Character Set. A successful character model allows Web documents authored in the world's writing systems, scripts, and languages (and on different platforms) to be exchanged, read, and searched by the Web's users around the world.
The first few chapters of the Unicode Standard [Unicode] provide useful background reading.
For information about the requirements that informed the development of important parts of this specification, see Requirements for String Identity Matching and String Indexing [CHARREQ].
1.4 Terminology and Notation
This section contains terminology and notation specific to this document.
The Web is built on text-based formats and protocols. In order to describe string matching or searching effectively, it is necessary to establish terminology that allows us to talk about the different kinds of text within a given format or protocol, as the requirements and details vary significantly.
Unicode code points are denoted as U+hhhh, where hhhh is a sequence of at least four, and at most six, hexadecimal digits. For example, the character € EURO SIGN has the code point U+20AC.
Some characters that are used in the various examples might not appear as intended unless you have the appropriate font. Care has been taken to ensure that the examples nevertheless remain understandable.
A legacy character encoding is a character encoding not based on the Unicode character set.
A transcoder is a process that converts text between two character encodings. Most commonly in this document it refers to a process that converts from a legacy character encoding to a Unicode encoding form, such as UTF-8.
Syntactic content is any text in a document format or protocol that belongs to the structure of the format or protocol. This definition can include values that are not typically thought of as "markup", such as the name of a field in an HTTP header, as well as all of the characters that form the structure of a format or protocol. For example, < and > (as well as the element name and various attributes they surround) are part of the syntactic content in an HTML document.
Syntactic content usually is defined by a specification or specifications and includes both the defined, reserved keywords for the given protocol or format as well as string tokens and identifiers that are defined by document authors to form the structure of the document (rather than the "content" of the document).
Natural language content refers to the language-bearing content in a document and not to any of the surrounding or embedded syntactic content that forms part of the document structure. You can think of it as the actual "content" of the document or the "message" in a given protocol. Note that syntactic content can contain natural language content, such as when an [HTML] img element has an alt attribute containing a description of the image.
Issue 1
Issue #59: should we use the term 'resource' in this document using this definition, given the wider one in the more well-known RFC3986 URI spec.
A resource is a given document, file, or protocol "message" which includes both the natural language content as well as the syntactic content such as identifiers surrounding or containing it. For example, in an HTML document that also has some CSS and a few script tags with embedded JavaScript, the entire HTML document, considered as a file, is the resource.
A user value is unreserved syntactic content in a vocabulary that is assigned by users, as distinct from reserved keywords in a given format or protocol. For example, CSS class names are part of the syntax of a CSS style sheet. They are not reserved keywords, predefined by any CSS specification. They are subject to the syntactic rules of CSS. And they may (or may not) consist of natural language tokens.
A vocabulary provides the list of reserved names as well as the set of rules and specifications controlling how user values (such as identifiers) can be assigned in a format or protocol. This can include restrictions on range, order, or type of characters that can appear in different places. For example, HTML defines the names of its elements and attributes, as well as enumerated attribute values, which together define the "vocabulary" of HTML syntactic content. Another example would be ECMAScript, which restricts the range of characters that can appear at the start or in the body of an identifier or variable name. It applies different rules for other cases, such as to the values of string literals.
A grapheme is a sequence of one or more Unicode characters in a visual representation of some text that a typical user would perceive as being a single unit (character). Graphemes are important for a number of text operations such as sorting or text selection, so it is necessary to be able to compute the boundaries between each user-perceived character. Unicode defines the default mechanism for computing graphemes in Unicode Standard Annex #29: Text Segmentation [UAX29] and calls this approximation a grapheme cluster. There are two types of default grapheme cluster defined. Unless otherwise noted, grapheme cluster in this document refers to an extended default grapheme cluster. (A discussion of grapheme clusters is also given in Section 2 of the Unicode Standard, [Unicode]. Cf. near the end of Section 2.11 in version 8.0 of The Unicode Standard.)
Because different natural languages have different needs, grapheme clusters can also sometimes require tailoring. For example, a Slovak user might wish to treat the default pair of grapheme clusters "ch" as a single grapheme cluster. Note that the interaction between the language of string content and the end-user's preferences might be complex.
1.4.1 Terminology Examples
This section illustrates some of the terminology defined above.
<html>
<head>
<title>Shakespeare</title>
</head>
<body>
<p>What&rsquo;s in a name? That which we call a rose by any other name would smell as sweet.</p>
</body>
</html>
- Everything inside the black rectangle (that is, in this HTML file) is part of the resource.
- Syntactic content is shown in a monospaced font.
- Natural language content is shown in a bold blue font with a gray background.
- User values are shown in italics.
- Vocabulary is shown with red underlining.
- All of the text above (all text in a text file) makes up a resource. It's possible that a given resource will contain no natural language content at all (consider an HTML document consisting of four empty div elements styled to be orange rectangles). It's also possible that a resource will contain no syntactic content and consist solely of natural language content: for example, a plain text file with a soliloquy from Hamlet in it. Notice too that the HTML entity &rsquo; appears in the natural language content and belongs to both the natural language content and the syntactic content in this resource.
1.5 Conformance
As well as sections marked as non-normative, all authoring guidelines, diagrams, examples, and notes in this specification are non-normative. Everything else in this specification is normative.
The key words MAY, MUST, MUST NOT, NOT RECOMMENDED, RECOMMENDED, SHOULD, and SHOULD NOT are to be interpreted as described in [RFC2119].
This specification places conformance criteria on specifications, on software (implementations) and on Web content. To aid the reader, all conformance criteria are preceded by [X] where X is one of S for specifications, I for software implementations, and C for Web content. These markers indicate the relevance of the conformance criteria and allow the reader to quickly locate relevant conformance criteria by searching through this document.
Specifications conform to this document if they:
- do not violate any conformance criteria preceded by [S] where the imperative is MUST or MUST NOT,
- document the reason for any deviation from criteria where the imperative is SHOULD, SHOULD NOT, or RECOMMENDED,
- make it a conformance requirement for implementations to conform to this document,
- make it a conformance requirement for content to conform to this document.
Software conforms to this document if it does not violate any conformance criteria preceded by [I].
Content conforms to this document if it does not violate any conformance criteria preceded by [C].
Note
Requirements placed on specifications might indirectly cause requirements to be placed on implementations or content that claim to conform to those specifications.
Where this specification contains a procedural description, it is to be understood as a way to specify the desired external behavior. Implementations can use other means of achieving the same results, as long as observable behavior is not affected.
2. The String Matching Problem
The Web is primarily made up of document formats and protocols based on character data. These formats or protocols can be viewed as a set of text files (resources) that include some form of structural markup or syntactic content. Processing such syntactic content or document data requires string-based operations such as matching (including regular expressions), indexing, searching, sorting, and so forth.
Users, and particularly implementers, sometimes have naïve expectations regarding the matching or non-matching of similar strings, or regarding the efficacy of the different transformations they might apply to text. This applies especially to syntactic content, but also to many other types of text processing on the Web.
Because the Web is fundamentally sensitive to the different ways in which text can be represented in a document, failing to consider these differences can confuse users or cause unexpected or frustrating results. In the sections below, this document examines the different types of text variation that affect both user perception of text on the Web and the string processing on which the Web relies.
2.1 Case Folding
Some scripts and writing systems make a distinction between UPPER, lower, and Title case characters. Most scripts, such as the Brahmic scripts of India, the Arabic script, and the non-Latin scripts used to write Chinese, Japanese, or Korean, do not have a case distinction, but some important ones do. Examples of such scripts include the Latin script used in the majority of this document, as well as scripts such as Greek, Armenian, and Cyrillic.
Some document formats or protocols seek to aid interoperability or provide an aid to content authors by ignoring case variations in the vocabulary they define or in user-defined values permitted by the format or protocol. For example, this occurs when matching element and class names between an HTML document and its associated style sheet: a selector written as SPAN in the stylesheet matches the span element in the document, even though one is uppercase and the other is not.
Case folding is the process used to make two texts identical when they differ in case but are otherwise "the same".
Case folding might, at first, appear simple. However there are variations that need to be considered when treating the full range of Unicode in diverse languages. For more information, [Unicode] Chapter 5 (in v8.0, Section 5.18) discusses case mappings in detail.
Unicode defines the default case fold mapping for each Unicode code point. Since most scripts do not provide a case distinction, most Unicode code points do not require a case fold mapping. For those characters that have a case fold mapping, the majority have a simple, straightforward mapping to a single matching (generally lowercase) code point. Unicode calls these the common case fold mappings, as they are shared by both the full and simple case fold mappings described below.
In addition to the common case fold mappings, a few characters have a case fold mapping that would normally require more than one Unicode character. These are called the full case fold mappings. Together with the common case fold mappings, these provide the default case fold mapping for all of Unicode. This case fold mapping is referred to in this document as Unicode C+F.
Because some applications cannot allocate additional storage when performing a case fold operation, Unicode also provides a simple case fold mapping, in which characters whose full case folding would require more than one code point are instead mapped to a single code point for comparison purposes. Unlike the full mapping, this mapping can alter the content (and potentially the meaning) of the text. This simple case fold mapping, referred to in this document as Unicode C+S, is not appropriate for the Web.
Note that case folding removes information from a string which cannot be recovered later.
Another aspect of case folding is that it can be language sensitive. Unicode defines default case mappings for each encoded character, but these are only defaults and are not appropriate in all cases. Some languages need case-folding to be tailored to meet specific linguistic needs. One common example of this is the Turkic languages written in the Latin script.
Sometimes case can vary in a way that is not semantically meaningful or is not fully under the user's control. This is particularly true when searching a document, but also applies when defining rules for matching user- or content-generated values, such as identifiers. In these situations, case-insensitive matching might be desirable instead.
When defining a vocabulary, one important consideration is whether the values are restricted to the ASCII subset of Unicode or if the vocabulary permits the use of characters (such as accents on Latin letters or a broad range of Unicode including non-Latin scripts) that potentially have more complex case folding requirements. To address these different requirements, there are four types of case fold matching defined by this document for the purposes of string identity matching in document formats or protocols:
Case sensitive matching: code points are compared directly with no case folding.
ASCII case-insensitive matching compares a sequence of code points as if all ASCII code points in the range 0x41 to 0x5A (A to Z) were mapped to the corresponding code points in the range 0x61 to 0x7A (a to z). When a vocabulary is itself constrained to ASCII, ASCII case-insensitive matching can be required.
Unicode case-insensitive matching compares a sequence of code points as if the Unicode C+F case folding (the Unicode-defined, language-independent default case folding mentioned above) had been applied to both input sequences.
Language-sensitive case-insensitive matching is useful in the rare case where a document format or protocol contains information about the language of the syntactic content and where language-sensitive case folding might sensibly be applied. In these cases, tailoring of the Unicode case-fold mappings above to match the expectations of that language SHOULD be specified and applied. These case-fold mappings are defined in the Common Locale Data Repository [UAX35] project of the Unicode Consortium.
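A short sketch can make these distinctions concrete. The example below is illustrative only (it is not a conformance requirement of this document) and assumes Python, whose built-in str.casefold() applies the Unicode default (C+F) case folding; language-sensitive tailorings, the fourth type above, require locale data such as CLDR and are not shown.

```python
def ascii_case_fold(s: str) -> str:
    # Map only A-Z (U+0041..U+005A) to a-z (U+0061..U+007A); leave everything else alone.
    return "".join(chr(ord(c) + 0x20) if "A" <= c <= "Z" else c for c in s)

def match_case_sensitive(a: str, b: str) -> bool:
    return a == b                                   # compare code points directly

def match_ascii_case_insensitive(a: str, b: str) -> bool:
    return ascii_case_fold(a) == ascii_case_fold(b)

def match_unicode_case_insensitive(a: str, b: str) -> bool:
    return a.casefold() == b.casefold()             # Unicode C+F default case folding

assert not match_case_sensitive("SPAN", "span")
assert match_ascii_case_insensitive("SPAN", "span")
assert not match_ascii_case_insensitive("HÉLLO", "héllo")   # É is outside the ASCII range
assert match_unicode_case_insensitive("HÉLLO", "héllo")
assert match_unicode_case_insensitive("STRASSE", "straße")  # a full (one-to-many) case folding
```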
2.2 Unicode Normalization
A different kind of variation can occur in Unicode text: sometimes several different Unicode code point sequences can represent the same logical character. When searching or matching text by comparing code points, variations in encoding could cause text values otherwise expected to match not to match.
Consider the character Ǻ. One way to encode this character is as U+01FA LATIN CAPITAL LETTER A WITH RING ABOVE AND ACUTE. Here are some of the different character sequences that an HTML document could use to represent this character:
- Ǻ U+01FA—A "precomposed" character.
- Ǻ U+0041 + U+030A + U+0301—A base letter A followed by two combining marks (U+030A COMBINING RING ABOVE and U+0301 COMBINING ACUTE ACCENT).
- Ǻ U+00C5 + U+0301—An accented letter (U+00C5 LATIN CAPITAL LETTER A WITH RING ABOVE) followed by a combining accent (U+0301 COMBINING ACUTE ACCENT).
- Ǻ U+212B + U+0301—A compatibility character (U+212B ANGSTROM SIGN) followed by a combining accent (U+0301 COMBINING ACUTE ACCENT).
- Ǻ U+FF21 + U+030A + U+0301—A compatibility character (U+FF21 FULLWIDTH LATIN CAPITAL LETTER A) followed by two combining marks (U+030A COMBINING RING ABOVE and U+0301 COMBINING ACUTE ACCENT).
Each of the above strings contains the same apparent meaning as Ǻ (U+01FA LATIN CAPITAL LETTER A WITH RING ABOVE AND ACUTE), but each one is encoded slightly differently. More variations are possible, but are omitted for brevity.
Because applications need to find the semantic equivalence in texts that use different code point sequences, Unicode defines a means of making two semantically equivalent texts identical: the Unicode Normalization Forms [UAX15].
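As a rough illustration (a sketch assuming Python's standard unicodedata module), the first four encodings listed above are distinct as raw code point sequences but become identical once both sides are normalized; the fullwidth variant is only folded together with the others by the compatibility forms discussed later in this section.

```python
import unicodedata

sequences = [
    "\u01FA",               # precomposed Ǻ
    "\u0041\u030A\u0301",   # A + combining ring above + combining acute
    "\u00C5\u0301",         # Å + combining acute
    "\u212B\u0301",         # ANGSTROM SIGN + combining acute
]

reference = "\u01FA"
for s in sequences:
    same_raw = (s == reference)
    same_nfc = (unicodedata.normalize("NFC", s) ==
                unicodedata.normalize("NFC", reference))
    print([f"U+{ord(c):04X}" for c in s], "raw match:", same_raw, "NFC match:", same_nfc)
```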
Resources are often susceptible to the effects of these variations because their specifications and implementations on the Web do not require Unicode Normalization of the text, nor do they take into consideration the string matching algorithms used when processing the syntactic content and natural language content later. For this reason, content developers need to ensure that they have provided a consistent representation in order to avoid problems later.
However, it can be difficult for users to ensure that a given resource or set of resources uses a consistent textual representation because the differences are usually not visible when viewed as text. Tools and implementations thus need to consider the difficulties experienced by users when visually or logically equivalent strings that "ought to" match (in the user's mind) are considered to be distinct values. Providing a means for users to see these differences and/or normalize them as appropriate makes it possible for end users to avoid failures that spring from invisible differences in their source documents. For example, the W3C Validator warns when an HTML document is not fully in Unicode Normalization Form C.
2.2.1 Canonical vs. Compatibility Equivalence
Unicode defines two types of equivalence between characters: canonical equivalence and compatibility equivalence.
Canonical equivalence is a fundamental equivalence between Unicode characters or sequences of Unicode characters that represent the same abstract character. When correctly displayed, these should always have the same visual appearance and behavior. Generally speaking, two canonically equivalent Unicode texts should be considered to be identical as text. Canonical decomposition removes these primary distinctions between two texts.
Examples of canonical equivalence defined by Unicode include:
- Ç vs. Ç Precomposed versus combining sequences. Some characters can be composed from a base character followed by one or more combining characters. The same characters are sometimes also encoded as a distinct "precomposed" character. In this example, the character Ç U+00C7 is canonically equivalent to the base character C U+0043 followed by the combining cedilla character ̧ U+0327. Such equivalence can extend to characters with multiple combining marks.
- q̣̇ vs. q̣̇ Order of combining marks. When a base character is modified by multiple combining marks, the order of the combining marks might not represent a distinct character. Here the sequences q̣̇ (U+0071 U+0323 U+0307) and q̣̇ (U+0071 U+0307 U+0323) are equivalent, even though the combining marks are in a different order. Note that this example is chosen carefully: the dot-above character and dot-below character are on opposite "sides" of the base character. The order of combining diacritics on the same side has a positional meaning.
- Ω vs. Ω Singleton mappings. These result from the need to separately encode otherwise equivalent characters to support legacy character encodings. In this example, the Ohm symbol Ω U+2126 is canonically equivalent (and identical in appearance) to the Greek letter Omega Ω U+03A9.
- 가 vs. 가 Hangul. The Hangul script is used to write the Korean language. This script is constructed logically, with each syllable being a roughly-square grapheme formed from specific sub-parts that represent consonants and vowels. These specific sub-parts, called jamo, are encoded in Unicode. So too are the precomposed syllables. Thus the syllable 가 U+AC00 is canonically equivalent to its constituent jamo characters ᄀ U+1100 and ᅡ U+1161.
Compatibility equivalence is a weaker equivalence between characters or sequences of characters that represent the same abstract character but may have a different visual appearance or behavior. Generally, a compatibility decomposition removes formatting variations, such as superscript, subscript, rotated, circled, and so forth, but other variations also occur. In many cases, characters with compatibility decompositions represent a distinction of a semantic nature; replacing the use of distinct characters with their compatibility decomposition can therefore cause problems. Texts that are equivalent after compatibility decomposition often were not perceived as being identical beforehand and usually should not be treated as equivalent by a formal language.
The following table illustrates various kinds of compatibility equivalence in Unicode:
Compatibility Equivalence | | | | |
---|---|---|---|---|
Font variants—characters that have a specific visual appearance (generally associated with a specialized use, such as in mathematics). | ℌ | ℍ | ||
Breaking versus non-breaking—variations in breaking or joining rules, such as the difference between a normal and a non-breaking space. | U+00A0 NON-BREAKING SPACE | |||
Presentation forms of Arabic— characters that encode the specific shapes (initial, medial, final, isolated) needed by visual legacy encodings of the Arabic script. | ﻨ | ﻧ | ﻦ | ﻥ |
Circled—numbers, letters, and other characters in a circled, bullet, or other presentational form; often used for lists, footnotes, and specialized presentation | ① | ❿ | ㉄ | ㊞ |
Width variation, size, rotated presentation forms—narrow vs. wide presentational forms of characters (such as those associated with legacy multibyte encodings), as well as "rotated" presentation forms necessary for vertical text. | カ | カ | ︷ | { |
Superscripts/subscripts—superscript or subscript letters, numbers, and symbols. | ⁹ | ₉ | ª | ₊ |
Squared characters—East Asian (particularly kana) sequences encoded as a presentation form to fit in a single ideographic "cell" in text. | ㌀ | ㍐ | 🄠 | ㎉ |
Fractions—precomposed vulgar fractions, often encoded for compatibility with font glyph sets. | ¼ | ½ | ⅟ | ↉ |
Others—compatibility characters encoded for other reasons, generally for compatibility with legacy character encodings. Many of these characters are simply a sequence of characters encoded as a single presentational unit. | dž | ⑴ | ⒈ | ⻳ |
In the above table, it is important to note that the characters illustrated are actual Unicode codepoints. They were encoded into Unicode for compatibility with various legacy character encodings. They should not be confused with the normal kinds of presentational processing used on their non-compatibility counterparts.
For example, most Arabic-script text uses the characters in the Arabic script block of Unicode (starting at U+0600). The actual glyphs used to display the text are selected using fonts and text processing logic based on the position inside a word (initial, medial, final, or isolated), in a process called "shaping". In the table above, the four presentation forms of the Arabic letter NOON are shown. The characters shown are compatibility characters from the Arabic Presentation Forms-B block (U+FE70 through U+FEFF), each of which represents a specific "positional" shape, and each of the four code points shown has a compatibility decomposition to the regular Arabic letter U+0646 NOON.
Similarly, the variations in half-width and full-width forms and rotated characters (for use in vertical text) are encoded as separate code points, mainly for compatibility with legacy character encodings. In many cases these variations are associated with the Unicode properties described in East Asian Width [UAX11]. See also Unicode Vertical Text Layout [UTR50] for a discussion of vertical text presentation forms.
In the case of characters with compatibility decompositions, such as those shown above, the K Unicode Normalization forms convert the text to the "normal" or "expected" Unicode code point. But the existence of these compatibility characters cannot be taken to imply that similar appearance variations produced in the normal course of text layout and presentation are affected by Unicode Normalization. They are not.
2.2.2 Composition vs. Decomposition
These two types of Unicode-defined equivalence are then grouped by another pair of variations: "decomposition" and "composition". In "decomposition", separable logical parts of a visual character are broken out into a sequence of base characters and combining marks and the resulting code points are put into a fixed, canonical order. In "composition", the decomposition is performed and then any combining marks are recombined, if possible, with their base characters. Note that this does not mean that all of the combining marks have been removed from the resulting normalized text.
Note
Roughly speaking, NFC is defined such that each combining character sequence (a base character followed by one or more combining characters) is replaced, as far as possible, by a canonically equivalent precomposed character. Text in a Unicode character encoding form (such as UTF-8 or UTF-16) is said to be in NFC if it doesn't contain any combining sequence that could be replaced with a precomposed character and if any remaining combining sequence is in canonical order.
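For instance (a minimal sketch assuming Python's unicodedata module), NFD separates é into its base letter and combining mark, while NFC recombines them into the precomposed character:

```python
import unicodedata

e_acute = "\u00E9"                                  # é, precomposed
decomposed = unicodedata.normalize("NFD", e_acute)  # 'e' followed by U+0301 COMBINING ACUTE ACCENT
recomposed = unicodedata.normalize("NFC", decomposed)

print([f"U+{ord(c):04X}" for c in decomposed])      # ['U+0065', 'U+0301']
print([f"U+{ord(c):04X}" for c in recomposed])      # ['U+00E9']
```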
2.2.3 Unicode Normalization Forms
The Unicode Normalization Forms are named using letter codes, with 'C' standing for Composition, 'D' for Decomposition, and 'K' for Compatibility decomposition. Having converted a resource to a sequence of Unicode characters and unescaped any escape sequences, we can finally "normalize" the Unicode texts given in the example above. Here are the resulting sequences in each Unicode Normalization form for the U+01FA example given earlier:
Original Codepoints | NFC | NFD | NFKC | NFKD |
---|---|---|---|---|
Ǻ U+01FA | Ǻ U+01FA | Ǻ U+0041 U+030A U+0301 | Ǻ U+01FA | Ǻ U+0041 U+030A U+0301 |
Ǻ U+00C5 U+0301 | Ǻ U+01FA | Ǻ U+0041 U+030A U+0301 | Ǻ U+01FA | Ǻ U+0041 U+030A U+0301 |
Ǻ U+212B U+0301 | Ǻ U+01FA | Ǻ U+0041 U+030A U+0301 | Ǻ U+01FA | Ǻ U+0041 U+030A U+0301 |
Ǻ U+0041 U+030A U+0301 | Ǻ U+01FA | Ǻ U+0041 U+030A U+0301 | Ǻ U+01FA | Ǻ U+0041 U+030A U+0301 |
Ǻ U+FF21 U+030A U+0301 | Ǻ U+FF21 U+030A U+0301 | Ǻ U+FF21 U+030A U+0301 | Ǻ U+01FA | Ǻ U+0041 U+030A U+0301 |
Fig. 1 Comparison of Unicode Normalization Forms
Unicode Normalization reduces these (and other potential code point sequences representing the same character) to just three possible variations. However, Unicode Normalization doesn't remove all textual distinctions and sometimes the application of Unicode Normalization can remove meaning that is distinctive or meaningful in a given context. For example:
- Not all compatibility characters have a compatibility decomposition.
- Some characters that look alike or have similar semantics are actually distinct in Unicode and don't have canonical or compatibility decompositions to link them together. For example, 。 U+3002 IDEOGRAPHIC FULL STOP is used as a period at the end of sentences in languages such as Chinese or Japanese. However, it is not considered equivalent to the ASCII period character U+002E FULL STOP.
- Some character variations are not handled by the Unicode Normalization Forms. For example, UPPER, Title, and lowercase variations are a separate and distinct textual variation that must be separately handled when comparing text.
- Normalization can remove meaning. For example, the character sequence 8½ (including the character U+00BD VULGAR FRACTION ONE HALF), when normalized using one of the compatibility normalization forms (that is, NFKD or NFKC), becomes a character sequence that looks like 81⁄2 (the digits 8 and 1, U+2044 FRACTION SLASH, and the digit 2).
2.2.4 Limitations of Normalization
Applying a Unicode Normalization Form, even the more destructive Compatibility (K) forms, does not guarantee that two identical-looking strings use, in fact, the same underlying Unicode code points. This is sometimes surprising to software developers and others who expect that Unicode Normalization will eliminate all encoding variation. Normalization is, at best, only part of a string matching solution.
In fact, canonical normalization is not primarily about appearance: it is about folding multiple ways of encoding the same logical character or grapheme cluster to use the same code point sequence. Two (normalized) graphemes can still look exactly the same, but not represent the same logical character.
One example of this is the letters U+03A1 (Ρ), U+0420 (Р), and U+0050 (P). These letters look identical in most fonts, but they are encoded separately as part of the alphabets used in the Greek, Cyrillic, and Latin scripts respectively. Unicode Normalization will not fold these characters together.
Similar examples of identical appearance (called a homoglyph) can appear even within a single script. Confusable characters, regardless of script, can present spoofing and other security risks. For more information on homoglyphs and confusability, see [UTS39].
Finally, note that Unicode Normalization, even the K Compatibility forms, does not bring together characters that have the same intrinsic meaning or function, but which vary in appearance or usage. For example, U+002E (.) and U+3002 (。) both function as sentence-ending punctuation, but the distinction between them is not removed by normalization.
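Both limitations can be observed directly; the following is a short sketch assuming Python's unicodedata module. Compatibility normalization changes the meaning of 8½, while the Latin, Greek, and Cyrillic homoglyphs remain distinct under every normalization form.

```python
import unicodedata

print(unicodedata.normalize("NFKC", "8½"))
# -> '81⁄2': the digits 8 and 1, U+2044 FRACTION SLASH, and the digit 2

for ch in ("\u0050", "\u03A1", "\u0420"):           # Latin P, Greek Rho, Cyrillic Er
    folded = unicodedata.normalize("NFKC", ch)
    print(f"U+{ord(ch):04X}", unicodedata.name(ch), "same as Latin P?", folded == "\u0050")
```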
2.3 Character Escapes
Most document formats or protocols provide an escaping mechanism to permit the inclusion of characters that are otherwise difficult to input, process, or encode. These escaping mechanisms provide an additional equivalent means of representing characters inside a given resource. They also allow for the encoding of Unicode characters not represented in the character encoding scheme used by the document.
See also, Section 4.6 of [CHARMOD].
For example, € U+20AC EURO SIGN can also be encoded in HTML as the hexadecimal entity &#x20ac; or as the decimal entity &#8364;. In a JavaScript or JSON file, it can appear as \u20ac, while in a CSS stylesheet it can appear as \20ac. All of these representations encode the same literal character value: €.
Character escapes are normally interpreted before a document is processed and strings within the format or protocol are matched. Consider an HTML document that assigns an element the class name using the markup h&#xe9;llo, together with a CSS stylesheet that selects that class using the escaped form h\e9llo. You would expect the text to display as intended: Hello world!
In order for this to work, the user-agent (browser) has to match two strings representing the class name héllo, even though the CSS and HTML each used a different escaping mechanism. This demonstrates one way that text can vary and still be considered "the same" according to a specification: the class name h\e9llo matches the class name in the HTML mark-up h&#xe9;llo (and would also match the literal value héllo using the code point U+00E9).
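The following sketch shows the principle: both escape forms are expanded before comparison. It assumes Python; html.unescape handles HTML numeric character references, while the small regular expression for CSS hexadecimal escapes is a simplified, hypothetical helper that ignores several details of real CSS escaping.

```python
import html
import re

def expand_css_escapes(s: str) -> str:
    # Replace \ followed by 1-6 hex digits (and an optional terminating space) with the character.
    return re.sub(r"\\([0-9A-Fa-f]{1,6})\s?", lambda m: chr(int(m.group(1), 16)), s)

css_class  = expand_css_escapes(r"h\e9llo")   # as written in the style sheet
html_class = html.unescape("h&#xe9;llo")      # as written in the markup
literal    = "h\u00E9llo"                     # the unescaped form

assert css_class == html_class == literal == "héllo"
```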
2.4 Unicode Controls and Invisible Markers
Unicode provides a number of invisible, special-purpose characters that help document authors control the appearance or performance of text. Because these characters are invisible, users are not always aware of their presence or absence. As a result, these characters can interfere with string matching when they are part of the encoded character sequence but the expected matching text does not include them. Some examples of these characters include:
The Unicode control characters U+200D Zero Width Joiner (also known as ZWJ) and U+200C Zero Width Non-Joiner (also known as ZWNJ). Their original use was to control ligature formation, either preventing the formation of undesirable ligatures or encouraging the formation of desirable ones. However, their primary use today is to control joining and shape selection in Arabic and Indic scripts. For example, ZWJ and ZWNJ are used in some Indic scripts to allow authors to specify the shape that certain conjuncts take. See the discussion in Chapter 12 of [Unicode].
Variation selectors (U+FE00 through U+FE0F) are characters used to select an alternate appearance or glyph (see Character Model: Fundamentals [CHARMOD]). For example, they are used to select between black-and-white and color emoji. These are also used in predefined ideographic variation sequences (IVS). Many examples are given in the "Standardized Variants" portion of the Unicode Character Database (UCD).
A few scripts also provide a way to encode visual variation selection: a prominent example of this is the Mongolian script's free variation selectors (U+180B through U+180D).
The character U+034F Combining Grapheme Joiner, whose name is misleading (as it does not join graphemes or affect line breaking), is used to separate characters that might otherwise be considered a grapheme for the purposes of sorting or to provide a means of maintaining certain textual distinctions when applying Unicode normalization to text.
Whitespace variations can also affect the interpretation and matching of text. For example, the various non-breaking space characters, such as NBSP, NNBSP, etc.
U+200B Zero Width Space is a character used to indicate word boundaries in text where spaces do not otherwise appear. For example, it might be used in a Thai language document to assist with word-breaking.
The U+00AD Soft Hyphen can be used in text to indicate a potential or preferred hyphenation position. It only becomes visible when the text is reflowed to wrap at that position.
In almost all of these cases, users may not be aware of or cannot be sure if a given document or text string has included or omitted one of these characters. Because text matching depends on matching the underlying codepoints, variation in the encoding of the text due to these markers can cause matches that ought to succeed to mysteriously fail (from the point of view of the user).
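A small sketch (assuming Python) shows how one invisible character is enough to make an apparently identical string fail to match, and that Unicode Normalization does not remove it:

```python
import unicodedata

plain = "example"
shy   = "exam\u00ADple"                  # contains U+00AD SOFT HYPHEN, normally invisible

print(plain == shy)                      # False: the code point sequences differ
print("ample" in shy)                    # False: the substring search "mysteriously" fails
print(unicodedata.normalize("NFKC", shy) == plain)   # False: U+00AD has no decomposition
```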
2.5 Legacy Character Encodings
Resources can use different character encoding schemes, including legacy character encodings, to serialize document formats on the Web. Each character encoding scheme uses different byte values and sequences to represent a given subset of the Universal Character Set.
Note
Choosing a Unicode character encoding, such as UTF-8, for all documents, formats, and protocols is strongly encouraged, since no additional utility is gained from using a legacy character encoding and the considerations in the rest of this section are avoided entirely.
For example, € (U+20AC EURO SIGN) is encoded as the byte sequence 0xE2.82.AC in the UTF-8 character encoding. This same character is encoded as the byte sequence 0x80 in the legacy character encoding windows-1252. (Other legacy character encodings may not provide any byte sequence to encode the character.)
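This difference is easy to observe; the following is a quick sketch assuming Python, whose codecs include both UTF-8 and windows-1252:

```python
euro = "\u20AC"                                     # € EURO SIGN

print(euro.encode("utf-8"))                         # b'\xe2\x82\xac'
print(euro.encode("windows-1252"))                  # b'\x80'
print(euro.encode("iso-8859-1", errors="replace"))  # b'?' : this legacy encoding cannot represent €
```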
Specifications mainly address these resulting variations by considering each document to be a sequence of Unicode characters after converting from the document's character encoding (be it a legacy character encoding or a Unicode encoding such as UTF-8) and then unescaping any character escapes before proceeding to process the document.
Note
Even within a single legacy character encoding there can be variations in implementation. One famous example is the legacy Japanese encoding Shift_JIS. Different transcoder implementations faced choices about how to map specific byte sequences to Unicode. So the byte sequence 0x81.60 (0x2141 in the JIS X 0208 character set) was mapped by some implementations to U+301C WAVE DASH while others chose U+FF5E FULLWIDTH TILDE. This means that two reasonable, self-consistent transcoders could produce different Unicode character sequences from the same input. The Encoding [Encoding] specification exists, in part, to ensure that Web implementations use interoperable and identical mappings. However, there is no guarantee that transcoders inconsistent with the Encoding specification won't be applied to documents found on the Web or used to process data appearing in a particular document format or protocol.
2.6 Other Types of Equivalence
The preceding types of character equivalence are all based on character properties assigned by Unicode or due to the mapping of legacy character encodings to the Unicode character set. There also exist certain types of "interesting equivalence" that may be useful, particularly in searching text, that are outside of the equivalences defined by Unicode. For example, Japanese uses two syllabic scripts, hiragana and katakana. A user searching a document may type in one script, but wish to find equivalent text in both scripts. These additional "text normalizations" are sometimes application, natural language, or domain specific and shouldn't be overlooked by specifications or implementations as an additional consideration.
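As an illustration of such an application-specific fold, here is a sketch assuming Python; the fixed offset below only covers the main kana blocks and is a deliberately simplified assumption, not a complete solution:

```python
def fold_hiragana_to_katakana(s: str) -> str:
    # Hiragana U+3041..U+3096 sits exactly 0x60 below the corresponding katakana letters.
    return "".join(chr(ord(c) + 0x60) if "\u3041" <= c <= "\u3096" else c for c in s)

assert fold_hiragana_to_katakana("にほんご") == "ニホンゴ"   # both syllabaries now compare equal
```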
3. String Matching of Syntactic Content in Document Formats and Protocols
In the Web environment, where strings can be encoded in different encodings, using different character sequences, and with variations such as case, it's important to establish a consistent process for evaluating string identity.
This chapter defines the implementation and requirements for string matching in syntactic content.
3.1 The Matching Algorithm
This section defines the algorithm for matching strings. String identity matching MUST be performed as if the following steps were followed:
1. Conversion of the strings to be compared to a common Unicode encoding form [Encoding].
2. Expansion of all character escapes and includes.
Note
The expansion of character escapes and includes is dependent on context, that is, on which syntactic content or programming language is considered to apply when the string matching operation is performed. Consider a search for the string suçon in an XML document containing su&#xE7;on but not suçon. If the search is performed in a plain text editor, the context is plain text (no syntactic content or programming language applies), the &#xE7; character escape is not recognized and hence not expanded, and the search fails. If the search is performed in an XML browser, the context is XML, the character escape (defined by XML) is expanded, and the search succeeds.
An intermediate case would be an XML editor that purposefully provides a view of an XML document with entity references left unexpanded. In that case, a search over that pseudo-XML view will deliberately not expand entities: in that particular context, entity references are not considered includes and need not be expanded.
3. Perform one of the following case foldings, as appropriate:
- Case sensitive: Go to step 4.
- ASCII case folding: map all code points in the range 0x41 to 0x5A (A to Z) to the corresponding code points in the range 0x61 to 0x7A (a to z).
- Unicode case folding: map all code points to their Unicode C+F case fold equivalents. Note that this can change the length of the string.
4. Remove Unicode control characters.
5. Test the resulting sequences of code points bit-by-bit for identity.
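A sketch of these steps (assuming Python) appears below. It describes one possible way of producing the specified observable behavior, not a required implementation: escape and include expansion is format-specific and is represented by a caller-supplied function, and the particular set of control characters removed in step 4 is an illustrative assumption, since it is not enumerated here.

```python
# Illustrative assumption: a small set of invisible/control characters to remove in step 4.
DEFAULT_CONTROLS = frozenset("\u200B\u200C\u200D\u2060\uFEFF")

def ascii_fold(s: str) -> str:
    return "".join(chr(ord(c) + 0x20) if "A" <= c <= "Z" else c for c in s)

def string_identity_match(a: bytes, b: bytes, *, encoding: str = "utf-8",
                          expand=lambda s: s, case_fold: str = "sensitive",
                          controls=DEFAULT_CONTROLS) -> bool:
    def prepare(raw: bytes) -> str:
        s = raw.decode(encoding)          # step 1: convert to a common Unicode form
        s = expand(s)                     # step 2: expand character escapes and includes
        if case_fold == "ascii":          # step 3: case fold, if the vocabulary calls for it
            s = ascii_fold(s)
        elif case_fold == "unicode":
            s = s.casefold()              #         Unicode C+F case folding
        s = "".join(c for c in s if c not in controls)   # step 4: remove control characters
        return s
    return prepare(a) == prepare(b)       # step 5: compare the resulting code point sequences

print(string_identity_match("SPAN".encode(), "span".encode(), case_fold="ascii"))   # True
```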
3.2 Converting to a Common Unicode Form
A normalizing transcoder is a transcoder that performs a conversion from a legacy character encoding to Unicode and ensures that the result is in Unicode Normalization Form C. For most legacy character encodings, it is possible to construct a normalizing transcoder (by using any transcoder followed by a normalizer); it is not possible to do so if the legacy character encoding's repertoire contains characters not represented in Unicode.
Previous versions of this document recommended the use of a normalizing transcoder when mapping from a legacy character encoding to Unicode. Normalizing transcoders are expected to produce only character sequences in Unicode Normalization Form C (NFC), although the resulting character sequence might still be partially de-normalized (for example, if it begins with a combining mark).
It turns out that, while most transcoders used on the Web produce Normalization Form C as their output, several do not. The difference is important if the transcoder is to be round-trip compatible with the source legacy character encoding or consistent with the transcoders used by browsers and other user-agents on the Web. This includes several of the transcoders in [Encoding].
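A normalizing transcoder can be sketched as any transcoder followed by a normalizer (assuming Python); note that, per the requirements below, implementations on the Web generally should not apply this silently:

```python
import unicodedata

def normalizing_transcode(data: bytes, legacy_encoding: str) -> str:
    text = data.decode(legacy_encoding)        # transcode from the legacy encoding to Unicode
    return unicodedata.normalize("NFC", text)  # ensure the result is in Normalization Form C

print(normalizing_transcode(b"\x80", "windows-1252"))   # '€'
```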
[C][I] For content authors, it is RECOMMENDED that content converted from a legacy character encoding be normalized to Unicode Normalization Form C unless the mapping of specific characters interferes with the meaning.
[I] Authoring tools SHOULD provide a means of normalizing resources and warn the user when a given resource is not in Unicode Normalization Form C.
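For example (a sketch assuming Python 3.8 or later), an authoring tool could warn about content that is not in NFC without silently altering it:

```python
import unicodedata

def warn_if_not_nfc(name: str, text: str) -> None:
    if not unicodedata.is_normalized("NFC", text):
        print(f"warning: {name} is not in Unicode Normalization Form C")

warn_if_not_nfc("a.html", "su\u00E7on")    # already NFC: no warning
warn_if_not_nfc("b.html", "suc\u0327on")   # decomposed c + combining cedilla: warning
```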
3.2.1 Choice of Normalization Form
Content authors and applications can choose among many character sequences when inputting or exchanging text, and among several normalization forms when providing text in a normalized form. Which form, then, is most appropriate for content on the Web?
For use on the Web, it is important not to lose compatibility distinctions, which are often important to the content (see Chapter 5 Characters with Compatibility Mappings in Unicode in XML and other Markup Languages [UNICODE-XML] for a discussion). The NFKD and NFKC normalization forms are therefore excluded.
Of the two remaining forms, NFC has the advantage that almost all legacy data (if transcoded trivially, one-to-one, to a Unicode encoding), as well as data created by current software, is already in this form; NFC also has a slight compactness advantage and is a better match to user expectations with respect to the character vs. grapheme issue. This document therefore recommends, when possible, that all content be stored and exchanged in Unicode Normalization Form C (NFC).
3.2.2 Requirements for Resources
These requirements pertain to the authoring and creation of documents and are intended as guidelines for resource authors.
[C] Resources SHOULD be produced, stored, and exchanged in Unicode Normalization Form C (NFC).
Note
In order to be processed correctly, a resource must use a consistent sequence of code points to represent text. While content can be in any normalization form or may use a de-normalized (but valid) Unicode character sequence, inconsistency of representation will cause implementations to treat the differing sequences as "different". The best way to ensure consistent selection, access, extraction, processing, or display is to always use NFC.
[I] Implementations MUST NOT normalize any resource during processing, storage, or exchange except with explicit permission from the user.
Note
The [Encoding] specification includes a number of transcoders that do not produce Unicode text in a normalized form when converting to Unicode from a legacy character encoding. This is necessary to preserve round-trip behavior and other character distinctions. Indeed, many compatibility characters in Unicode exist solely for round-trip conversion from legacy encodings. Earlier versions of this specification recommended or required that implementations use a normalizing transcoder that produced Unicode Normalization Form C (NFC), but, given that this is at odds with how transcoders are actually implemented, this version no longer includes this requirement. Bear in mind that most transcoders produce NFC output and that even those transcoders that do not produce NFC for all characters mainly produce NFC for the preponderance of characters. In particular, there are no commonly-used transcoders that produce decomposed forms where precomposed forms exist or which produce a different combining character sequence from the normalized sequence.
[C] Authors SHOULD NOT include combining marks without a preceding base character in a resource.
There can be exceptions to this. For example, when making a list of characters (such as a list of [Unicode] characters), an author might want to use combining marks without a corresponding base character. However, use of a combining mark without a base character can cause unintentional display problems or, with naive implementations that combine the combining mark with adjacent syntactic content or other natural language content, processing problems. For example, if you were to use a combining mark, such as the character U+0301 COMBINING ACUTE ACCENT, as the start of a "class" attribute value in HTML, the class name might not display properly in your editor.
[S] Specifications of text-based formats and protocols MAY specify that all or part of the textual content of that format or protocol is normalized using Unicode Normalization Form C (NFC).
Specifications are generally discouraged from requiring formats or protocols to store or exchange data in a normalized form unless there are specific, clear reasons why the additional requirement is necessary. As many document formats on the Web do not require normalization, content authors might occasionally rely on denormalized character sequences and a normalization step could negatively affect such content.
Note
Requiring NFC requires additional care on the part of the specification developer, as content on the Web generally is not in a known normalization state. Boundary and error conditions for denormalized content need to be carefully considered and well specified in these cases.
3.2.3 Non-Normalizing Specification Requirements
The following requirements pertain to any specification that specifies explicitly that normalization is not to be applied automatically to content (which SHOULD include all new specifications):
[S] Specifications that do not normalize MUST document or provide a health-warning if canonically equivalent but disjoint Unicode character sequences represent a security issue.
[S][I] Specifications and implementations MUST NOT assume that content is in any particular normalization form.
The normalization form or lack of normalization for any given content has to be considered intentional in these cases.
[I] Implementations MUST NOT alter the normalization form of content being exchanged, read, parsed, or processed except when required to do so as a side-effect of transcoding the content to a Unicode character encoding, as content might depend on the de-normalized representation.
Issue 2
The following requirement was noted by Mati as being problematic. It was not marked with mustard and needs further consideration.
[S] Specifications MUST specify that string matching takes the form of "code point-by-code point" comparison of the Unicode character sequence, or, if a specific Unicode character encoding is specified, code unit-by-code unit comparison of the sequences.
Issue 3
Following requirements added 2013-10-29. Needs discussion of regular expressions.
[S][I] Specifications that define a regular expression syntax MUST provide at least Basic Unicode Level 1 support per [UTS18] and SHOULD provide Extended or Tailored (Levels 2 and 3) support.
3.2.4 Unicode Normalizing Specification Requirements
This section contains requirements for specifications of text-based formats and protocols that define Unicode Normalization as a requirement. New specifications SHOULD NOT require normalization unless special circumstances apply.
[S] Specifications of text-based formats and protocols that, as part of their syntax definition, require that the text be in normalized form MUST define string matching in terms of normalized string comparison and MUST define the normalized form to be NFC.
[S] [I] A normalizing text-processing component which receives suspect text MUST NOT perform any normalization-sensitive operations unless it has first either confirmed through inspection that the text is in normalized form or it has re-normalized the text itself. Private agreements MAY, however, be created within private systems which are not subject to these rules, but any externally observable results MUST be the same as if the rules had been obeyed.
[I] A normalizing text-processing component which modifies text and performs normalization-sensitive operations MUST behave as if normalization took place after each modification, so that any subsequent normalization-sensitive operations always behave as if they were dealing with normalized text.
[S] Specifications of text-based languages and protocols SHOULD define precisely the construct boundaries necessary to obtain a complete definition of full-normalization. These definitions SHOULD include at least the boundaries between syntactic content and character data as well as entity boundaries (if the language has any include mechanism), SHOULD include any other boundary that may create denormalization when instances of the language are processed, but SHOULD NOT include character escapes designed to express arbitrary characters.
[I] Authoring tool implementations for a formal language that does not mandate full-normalization SHOULD either prevent users from creating content with composing characters at the beginning of constructs that may be significant, such as at the beginning of an entity that will be included, immediately after a construct that causes inclusion or immediately after syntactic content, or SHOULD warn users when they do so.
[S] Where operations can produce denormalized output from normalized text input, specifications of API components (functions/methods) that implement these operations MUST define whether normalization is the responsibility of the caller or the callee. Specifications MAY state that performing normalization is optional for some API components; in this case the default SHOULD be that normalization is performed, and an explicit option SHOULD be used to switch normalization off. Specifications SHOULD NOT make the implementation of normalization optional.
[S] Specifications that define a mechanism (for example an API or a defining language) for producing textual data objects SHOULD require that the final output of this mechanism be normalized.
3.3 Expanding Character Escapes and Includes
Most document formats and protocols provide a means for escaping characters or for including external data, including text, in a resource. This is discussed in detail in Section 4.6 of [CHARMOD] as well as above.
When performing matching, it is important to know when to interpret character escapes so that a match succeeds (or fails) appropriately. Normally, escapes, references, and includes are processed or expanded before performing matching, since these syntaxes exist to allow difficult-to-encode sequences to be put into a document conveniently.
When processing the syntax of a document format...
When performing a match on syntactic content...
When performing a match on natural language content...
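The following non-normative sketch illustrates the expand-then-match approach described above. It assumes the escape syntax in question is HTML/XML numeric character references, expanded here with the Python standard library's html.unescape(); the function name match_after_expansion is illustrative only.

```python
# Expand character escapes before matching, then compare code points.
import html

def match_after_expansion(a: str, b: str) -> bool:
    """Expand references first, then compare the resulting strings."""
    return html.unescape(a) == html.unescape(b)

# "&#xE9;" is just another way of writing U+00E9, so it should match.
assert match_after_expansion("r&#xE9;sum&#xE9;", "résumé")
```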
3.4 Handling Case Folding
As described above, one important consideration in string identity matching is whether the comparison is case sensitive or case insensitive.
[S] Case-sensitive matching is RECOMMENDED as the default for new protocols and formats.
However, cases exist in which case-insensitivity is desirable.
Where case-insensitive matching is desired, there are several implementation choices that a formal language needs to consider. If the vocabulary of strings to be compared is limited to the Basic Latin (ASCII) subset of Unicode, ASCII case-insensitive matching MAY be used.
If the vocabulary of strings to be compared is not limited, then ASCII case-insensitive matching MUST NOT be used. Unicode case-insensitive matching MUST be applied, even if the vocabulary does not allow the full range of Unicode.
Unicode case-insensitive matching can take several forms. Unicode defines the "common" (C) case foldings for characters that always have a 1:1 mapping from the character to its case-folded form; this covers the majority of characters that have a case folding. A few characters in Unicode have a 1:many case folding; this 1:many mapping is called the "full" (F) case fold mapping. For compatibility with certain types of implementation, Unicode also defines a "simple" (S) case fold that is always 1:1.
Because the "simple" case-fold mapping removes information that can be important to forming an identity match, the "Common plus Full" (or "Unicode C+F") case fold mapping is RECOMMENDED for Unicode case-insensitive matching.
A vocabulary is considered to be "ASCII-only" if and only if all tokens and identifiers are defined by the specification directly and these identifiers or tokens use only the Basic Latin subset of Unicode. If user-defined identifiers are permitted, the full range of Unicode characters (limited, as appropriate, for security or interchange concerns, see [UTR36]) SHOULD be allowed and Unicode case insensitivity used for identity matching.
ASCII case-insensitive matching MUST only be applied to vocabularies that are restricted to ASCII. Unicode case-insensitivity MUST be used for all other vocabularies.
Note that an ASCII-only vocabulary can exist inside a document format or protocol that allows a larger range of Unicode in identifiers or values.
Issue 5
Insert example from CSS here.
Case sensitive matching is RECOMMENDED as the default for any new protocol or format.
Case-sensitive matching is the easiest to implement and introduces the least potential for confusion, since it generally consists of a comparison of the underlying Unicode code point sequence. Because it is not affected by considerations such as language-specific case mappings, it produces the least surprise for document authors who have included words (such as the Turkish example above) in their syntactic content.
If the vocabulary is not restricted to ASCII or permits user-defined values that use a broader range of Unicode, ASCII case-insensitive matching MUST NOT be required.
[S][I] The Unicode C+F case-fold form is RECOMMENDED for case-insensitive matching in vocabularies. The Unicode C+S form MUST NOT be used for string identity matching on the Web.
Language-sensitive case-insensitive matching in document formats and protocols is NOT RECOMMENDED because language information can be hard to obtain, verify, or manage, and the resulting operations can produce results that frustrate users.
[C] Identifiers SHOULD use consistent case (upper, lower, mixed case) to facilitate matching, even if case-insensitive matching is supported by the format or implementation.
3.4.1 Requirements for Specifications
These requirements pertain to specifications for document formats or programming/scripting languages and their implementations.
[S][I] Specifications and implementations that define string matching as part of the definition of a format, protocol, or formal language (which might include operations such as parsing, matching, tokenizing, etc.) MUST define the criteria and matching forms used. (A non-normative sketch of these forms follows the list below.) These MUST be one of:
- Case-sensitive
- Unicode case-insensitive using Unicode case-folding C+F
- ASCII case-insensitive
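The following non-normative sketch (Python; the function names matches and ascii_fold are illustrative only) contrasts the three permitted matching forms. Note in particular that the ASCII case-insensitive form leaves non-ASCII characters untouched.

```python
# Illustrative dispatch over the three permitted matching forms.

def ascii_fold(s: str) -> str:
    """Lowercase A-Z only; all other characters are left unchanged."""
    return "".join(chr(ord(c) + 0x20) if "A" <= c <= "Z" else c for c in s)

def matches(a: str, b: str, form: str) -> bool:
    if form == "case-sensitive":
        return a == b
    if form == "unicode-ci":   # Unicode C+F case folding
        return a.casefold() == b.casefold()
    if form == "ascii-ci":     # ASCII-only vocabularies
        return ascii_fold(a) == ascii_fold(b)
    raise ValueError(form)

assert matches("DIV", "div", "ascii-ci")
assert not matches("STRASSE", "straße", "ascii-ci")  # non-ASCII untouched
assert matches("STRASSE", "straße", "unicode-ci")
```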
[S] Specifications SHOULD NOT specify case-insensitive comparison of strings.
[S] Specifications that specify case-insensitive comparison for non-ASCII vocabularies SHOULD specify Unicode case-folding C+F.
In some limited cases, locale- or language-specific tailoring might also be appropriate. However, such cases are generally linked to natural language processing operations. Because they produce potentially different results from the generic case folding rules, these should be avoided in formal languages, where predictability is at a premium.
[S] Specifications MAY specify ASCII case-insensitive comparison for portions of a format or protocol that are restricted to an ASCII-only vocabulary.
This requirement applies to formal languages whose keywords are all ASCII and which do not allow user-defined names or identifiers. An example of this is HTML, which defines the use of ASCII case-insensitive comparison for element and attribute names defined by the HTML specification.
[S][I] Specifications and implementations MUST NOT specify ASCII-only case-insensitive matching for values or constructs that permit non-ASCII characters.
3.4.2 Non-Normalizing Specification Requirements
[S][I] For vocabularies and values that are not restricted to Basic Latin (ASCII), case-insensitive matching MUST specify either Unicode C+F string comparison or language-sensitive string comparison.
3.5 Handling Unicode Controls and Invisible Markers
Applications that do string matching SHOULD ignore Unicode formatting controls such as variation selectors, grapheme or word joiners, and other non-semantic controls.
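A non-normative sketch of this behaviour follows. The list of code points is illustrative only; a production implementation would use the complete Unicode Default_Ignorable_Code_Point property rather than this hand-picked subset.

```python
# Strip a few well-known invisible formatting characters before matching.
IGNORABLE = {
    0x034F,                   # COMBINING GRAPHEME JOINER
    0x200C, 0x200D,           # ZERO WIDTH NON-JOINER / ZERO WIDTH JOINER
    0x2060,                   # WORD JOINER
    *range(0xFE00, 0xFE10),   # VARIATION SELECTOR-1 .. -16
}

def strip_ignorable(s: str) -> str:
    return "".join(c for c in s if ord(c) not in IGNORABLE)

# A word joiner inserted between letters should not defeat a match.
assert strip_ignorable("ab\u2060cd") == "abcd"
```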
4. String Searching in Natural Language Content
Many Web implementations and applications have a different sort of string matching requirement from the one described above: the need for users to search documents for particular words or phrases of text. This section addresses the various issues that an implementer might need to consider when implementing natural language text processing on the Web other than that mandated by a formal language or document format.
There are several different kinds of string searching.
When you are using a search engine, you are generally using a form of full text search. Full text search generally breaks natural language text into word segments and may apply complex processing to get at the semantic "root" values of words. For example, if the user searches for "run", you might want to find words like "running", "ran", or "runs" in addition to the actual search term "run". This process, naturally, is sensitive to language, context, and many other aspects of textual variation. It is also beyond the scope of this document.
Another form of string searching, which we'll concern ourselves with here, is sub-string matching or "find" operations. This is the direct searching of the body or "corpus" of a document with the user's input. Find operations can have different options or implementation details, such as whether matching is case sensitive, or whether the feature supports different aspects of a regular expression language or "wildcards".
4.1 Considerations for Matching Natural Language Content
Issue 6
This section was identified as a new area needing documentation as part of the overall rearchitecting of the document. The text here is incomplete and needs further development. Contributions from the community are invited.
Searching content (one example is using the "find" command in your browser) generates different user expectations and thus has different requirements from the absolute identity matching needed by document formats and protocols. Searching text has different contextual needs and often provides different features.
One description of Unicode string searching can be found in Section 8 (Searching and Matching) of [UTS10].
One of the primary considerations for string searching is that, quite often, the user's input is not identical to the way that the text is encoded in the text being searched. Users generally expect matching to be more "promiscuous", particularly when they don't add additional effort to their input. For example, they expect a term entered in lowercase to match uppercase equivalents. Conversely, when the user expends more effort on the input—by using the shift key to produce uppercase or by entering a letter with diacritics instead of just the base letter—they expect their search results to match (only) their more-specific input.
This effect might vary depending on context as well. For example, a person using a physical keyboard may have direct access to accented letters, while a virtual or on-screen keyboard may require extra effort to access and select the same letters.
Consider a document containing these strings: "re-resume", "RE-RESUME", "re-résumé", and "RE-RÉSUMÉ".
In the table below, the user's input (on the left) might be considered a match for the above strings as follows (a non-normative matching sketch follows the table):

| User Input | Matched Strings |
| --- | --- |
| e (lowercase 'e') | "re-resume", "RE-RESUME", "re-résumé", and "RE-RÉSUMÉ" |
| E (uppercase 'E') | "RE-RESUME" and "RE-RÉSUMÉ" |
| é (lowercase 'e' with acute accent) | "re-résumé" and "RE-RÉSUMÉ" |
| É (uppercase 'E' with acute accent) | "RE-RÉSUMÉ" |
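The sketch below is one hypothetical way (Python standard library; the helper names base_letters and loose_contains are illustrative only) to approximate the asymmetric behaviour in the table: a "plain" query matches accented and cased variants, while a more specific query matches only text that preserves that distinction.

```python
import unicodedata

def base_letters(s: str) -> str:
    """Remove diacritics by decomposing and dropping combining marks."""
    return "".join(c for c in unicodedata.normalize("NFD", s)
                   if not unicodedata.combining(c))

def loose_contains(haystack: str, query: str) -> bool:
    h, q = haystack, query
    if query == base_letters(query):   # query carries no diacritics
        h, q = base_letters(h), base_letters(q)
    if query == query.casefold():      # query is "effortless" lowercase
        h, q = h.casefold(), q.casefold()
    return q in h

assert loose_contains("RE-RÉSUMÉ", "e")      # plain query matches everything
assert not loose_contains("re-resume", "é")  # accented query is more specific
```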
In addition to variations of case or the use of accents, Unicode also has an array of canonical equivalents or compatibility characters (as described in the sections above) that might impact string searching.
For example, consider the letter "K". Characters with a compatibility mapping to U+004B LATIN CAPITAL LETTER K include (a short normalization check follows the list):
- Ķ U+0136
- Ǩ U+01E8
- ᴷ U+1D37
- Ḱ U+1E30
- Ḳ U+1E32
- Ḵ U+1E34
- K U+212A
- Ⓚ U+24C0
- ㎅ U+3385
- ㏍ U+33CD
- ㏎ U+33CE
- Ｋ U+FF2B
- (a variety of mathematical symbols such as U+1D40A, U+1D43E, U+1D472, U+1D4A6, U+1D4DA)
- 🄚 U+1F11A
- 🄺 U+1F13A.
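As a non-normative check, several of the characters listed above are folded back to a plain "K" (U+004B) by Unicode normalization; a few decompose to more than one character.

```python
import unicodedata

assert unicodedata.normalize("NFC",  "\u212A") == "K"   # KELVIN SIGN
assert unicodedata.normalize("NFKC", "\uFF2B") == "K"   # FULLWIDTH LATIN CAPITAL LETTER K
assert unicodedata.normalize("NFKC", "\u24C0") == "K"   # CIRCLED LATIN CAPITAL LETTER K
# Some compatibility characters expand to multiple characters:
assert unicodedata.normalize("NFKC", "\u3385") == "KB"  # SQUARE KB
```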
Other differences include Unicode Normalization forms (or lack thereof). There are also ignorable characters (such as the variation selectors), whitespace differences, bidirectional controls, and other code points that can interfere with a match.
Users might also expect certain kinds of equivalence to be applied to matching. For example, a Japanese user might expect that hiragana, katakana, and half-width compatibility katakana equivalents all match each other (regardless of which is used to perform the selection or encoded in the text).
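One piece of this expectation can be met by compatibility normalization: NFKC maps halfwidth compatibility katakana to ordinary katakana, as the non-normative check below shows. Hiragana/katakana equivalence, however, would need an additional mapping beyond normalization.

```python
import unicodedata

# Halfwidth compatibility katakana normalize (NFKC) to ordinary katakana.
assert unicodedata.normalize("NFKC", "ｶﾀｶﾅ") == "カタカナ"
```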
When searching text, the concept of "grapheme boundaries" and "user-perceived characters" can be important. See Section 3 of Character Model for the World Wide Web: Fundamentals [CHARMOD] for a description. For example, if the user has entered a capital "A" into a search box, should the software find the character À (U+00C0 LATIN CAPITAL LETTER A WITH GRAVE)? What about the character "A" followed by U+0300 (COMBINING GRAVE ACCENT)? What about writing systems, such as Devanagari, which use combining marks to suppress or express certain vowels?
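The two encodings of À mentioned above are canonically equivalent, so a find operation that normalizes both the query and the text will treat them alike, as this non-normative check illustrates:

```python
import unicodedata

# "A" + COMBINING GRAVE ACCENT composes to the precomposed À under NFC.
assert unicodedata.normalize("NFC", "A\u0300") == "\u00C0"
assert (unicodedata.normalize("NFC", "A\u0300")
        == unicodedata.normalize("NFC", "\u00C0"))
```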
Issue 7
Issue #78: Point out that the presence or absence of Arabic/Hebrew short vowels can interfere with searching.
5. Changes Since the Last Published Version
The following changes have been made since the Working Draft of 2014-07-15:
- Added this change log.
- Moved the section Unicode Normalization after the section Casefolding and adjusted text appropriately
- Added the example and explanatory text about case matching of the HTML fragment in the section Casefolding
- Added the definitions for "grapheme cluster" and "grapheme" in Terminology and Notation
- Addition of section discussing Unicode controls, including a new requirement.
- Shakespeare -> natural language content; Wildebeest -> resource; namespace -> vocabulary
- Changed order of sections in section on "The String Matching Problem"
- Edited intro and integrated the case folding text from the string matching algorithm into the case folding section.
- Replaced the table in Section 2.2.1 as a first attempt to fix the various examples we borrowed from UAX15.
- Replaced first table in normalization section with a list of examples, addressing existing ednote.
- Extensive changes to incorporate the "standard" styles for International docs.
- Added explanatory text to the compatibility equivalents examples. Added characters to the table to further illustrate each category. Removed the "note" marker around additional explanatory text and edited. Removed the ednote saying this was needed.
- Changes to SOTD and top matter to reflect new i18n publication process.
See the github commit log for more details.
6. Acknowledgements
The W3C Internationalization Working Group and Interest Group, as well as others, provided many comments and suggestions. The Working Group would like to thank: Mati Allouche, Ebrahim Byagowi, John Cowan, Martin Dürst, Behdad Esfahbod, Asmus Freytag, John Klensin, Amir Sarabadani, and all of the CharMod contributors over the many years of this document's development.
The previous version of this document was edited by:
- François Yergeau, Invited Expert (and before at Alis Technologies)
- Martin J. Dürst, (until Dec 2004 while at W3C)
- Richard Ishida, W3C (and before at Xerox)
- Misha Wolf, (until Dec 2002 while at Reuters Ltd.)
- Tex Texin, (until Dec 2004 while an Invited Expert, and before at Progress Software)
A. References
A.1 Normative references
[CHARMOD]
Martin Dürst; François Yergeau; Richard Ishida; Misha Wolf; Tex Texin et al. W3C. Character Model for the World Wide Web 1.0: Fundamentals. 15 February 2005. W3C Recommendation. URL: http://www.w3.org/TR/charmod/
[Encoding]
Anne van Kesteren; Joshua Bell; Addison Phillips. Encoding. URL: http://www.w3.org/TR/encoding/
[ISO10646]
Information Technology - Universal Multiple-Octet Coded Character Set (UCS) - Part 1: Architecture and Basic Multilingual Plane. ISO/IEC 10646-1:1993. The current specification also takes into consideration the first five amendments to ISO/IEC 10646-1:1993. Useful roadmaps show which scripts sit at which numeric ranges.
[RFC2119]
S. Bradner. IETF. Key words for use in RFCs to Indicate Requirement Levels. March 1997. Best Current Practice. URL: https://tools.ietf.org/html/rfc2119
[UAX15]
Mark Davis; Ken Whistler. Unicode Consortium. Unicode Normalization Forms. 31 August 2012. Unicode Standard Annex #15. URL: http://www.unicode.org/reports/tr15
[UAX29]
Mark Davis. Unicode Standard Annex #29: Unicode Text Segmentation. URL: http://www.unicode.org/reports/tr29/
[UTS18]
Mark Davis; Andy Heninger. Unicode Technical Standard #18: Unicode Regular Expressions. URL: http://unicode.org/reports/tr18/
[Unicode]
The Unicode Consortium. The Unicode Standard. URL: http://www.unicode.org/versions/latest/
A.2 Informative references
[CHARREQ]
Martin Dürst. W3C. Requirements for String Identity Matching and String Indexing. 15 September 2009. W3C Note. URL: http://www.w3.org/TR/charreq/
[HTML]
Ian Hickson. WHATWG. HTML Standard. Living Standard. URL: https://html.spec.whatwg.org/multipage/
[UAX11]
Ken Lunde 小林劍. Unicode Standard Annex #11: East Asian Width. URL: http://www.unicode.org/reports/tr11/
[UAX35]
Mark Davis; CLDR committee members. Unicode Consortium. Unicode Locale Data Markup Language (LDML). 15 March 2013. Unicode Standard Annex #35. URL: http://www.unicode.org/reports/tr35/tr35-31/tr35.html
[UNICODE-XML]
Richard Ishida. W3C. Unicode in XML and other Markup Languages. 24 January 2013. W3C Note. URL: http://www.w3.org/TR/unicode-xml/
[UTR36]
Mark Davis; Michel Suignard. Unicode Technical Report #36: Unicode Security Considerations. URL: http://www.unicode.org/reports/tr36/
[UTR50]
Koji Ishii 石井宏治. Unicode Technical Report #50: Unicode Vertical Text Layout. URL: http://www.unicode.org/reports/tr50/
[UTS10]
Mark Davis; Ken Whistler; Markus Scherer. Unicode Technical Standard #10: Unicode Collation Algorithm. URL: http://www.unicode.org/reports/tr10/
[UTS39]
Mark Davis; Michel Suignard. Unicode Technical Standard #39: Unicode Security Mechanisms. URL: http://www.unicode.org/reports/tr39/
[XML10]
Tim Bray; Jean Paoli; Michael Sperberg-McQueen; Eve Maler; François Yergeau et al. W3C. Extensible Markup Language (XML) 1.0 (Fifth Edition). 26 November 2008. W3C Recommendation. URL: http://www.w3.org/TR/xml