Charset detection
Character encoding detection, charset detection, also called code page detection, is the heuristic guessing of the character encoding of a series of bytes that represent text. Its algorithms usually rely on statistical analysis of byte patterns. The method is not foolproof because it depends on statistical data; for example, some versions of Windows mis-detect the ASCII-encoded phrase "Bush hid the facts" as Chinese UTF-16LE. One of the few cases that can be detected reliably is UTF-8, because UTF-8 contains a large proportion of invalid byte sequences, so text in another encoding that uses bytes with the high bit set is extremely unlikely to pass a UTF-8 validity test. Unfortunately, poorly written charset detectors do not run the reliable UTF-8 test first and may decide that UTF-8 is some other encoding. Short Python sketches of the UTF-8, UTF-16, statistical, and byte-order-mark checks described on this page follow the property table below.
Property | Value |
---|---|
dbo:abstract | Character encoding detection, charset detection, or code page detection is the process of heuristically guessing the character encoding of a series of bytes that represent text. The technique is recognised to be unreliable and is only used when specific metadata, such as an HTTP Content-Type: header, is either not available or is assumed to be untrustworthy. The algorithm usually involves statistical analysis of byte patterns, such as the frequency distribution of trigraphs of the various languages encoded in each code page to be detected; such statistical analysis can also be used to perform language detection. This process is not foolproof because it depends on statistical data. In general, incorrect charset detection leads to mojibake. One of the few cases where charset detection works reliably is detecting UTF-8. This is due to the large percentage of invalid byte sequences in UTF-8, so that text in any other encoding that uses bytes with the high bit set is extremely unlikely to pass a UTF-8 validity test. However, badly written charset detection routines do not run the reliable UTF-8 test first, and may decide that UTF-8 is some other encoding. For example, it was common that web sites in UTF-8 containing the name of the German city München were shown as MÃ¼nchen, due to the code deciding it was an ISO-8859 encoding before even testing to see if it was UTF-8. UTF-16 is fairly reliable to detect due to the high number of newlines (U+000A) and spaces (U+0020) that should be found when dividing the data into 16-bit words, and the large number of NUL bytes, all at even or all at odd locations. Common characters must be checked for; relying only on a test that the text is valid UTF-16 fails: the Windows operating system would mis-detect the phrase "Bush hid the facts" (without a newline) in ASCII as Chinese UTF-16LE, since all of its byte pairs map to assigned Unicode characters in UTF-16LE. Charset detection is particularly unreliable in Europe, in an environment of mixed ISO-8859 encodings. These are closely related eight-bit encodings that share an overlap in their lower half with ASCII, and in which all arrangements of bytes are valid. There is no technical way to tell these encodings apart; recognising them relies on identifying language features, such as letter frequencies or spellings. Due to the unreliability of heuristic detection, it is better to properly label datasets with the correct encoding. HTML documents served across the web by HTTP should have their encoding stated out-of-band using the Content-Type: header, for example: Content-Type: text/html;charset=UTF-8 An isolated HTML document, such as one being edited as a file on disk, may imply such a header by a meta tag within the file, such as `<meta http-equiv="Content-Type" content="text/html;charset=UTF-8">`, or with the new meta charset attribute in HTML5: `<meta charset="UTF-8">`. If the document is Unicode, then some UTF encodings explicitly label the document with an embedded initial byte order mark (BOM). (en) 字符编码探测、字符集探测又稱為代码页检测是個启发式猜测代表文字的一系列字节的字符编码。其算法通常依据对字节样式的统计分析。这并不是一个万无一失的方法因为它依赖于统计数据——比如有些Windows版本会误把ASCII编码的"Bush hid the facts"当作中文UTF-16LE。 为数不多的能可靠探测的情况之一是探测UTF-8。这是因为UTF-8中有大量的无效字节序列,所以当其他编码方式使用字节中的高位bit时极不可能通过UTF-8有效性测试。不幸的是不完善的字符集探测程序不优先进行可靠的UTF-8测试于是把UTF-8定为其他编码。 (zh) |
dbo:wikiPageExternalLink | http://chsdet.sourceforge.net/ http://cpdetector.sourceforge.net/usage.shtml http://www.joshisanerd.com/projects/hebci/ https://www-archive.mozilla.org/projects/intl/chardet.html https://web.archive.org/web/20101018031124/http:/jchardet.sourceforge.net/ https://web.archive.org/web/20101217195221/http:/icu-project.org/apiref/icu4c/ucsdet_8h.html https://www.freedesktop.org/wiki/Software/uchardet/ http://msdn.microsoft.com/en-us/library/aa920101.aspx http://www.fas.org/irp/doddir/army/fm34-40-2/appb.pdf https://github.com/errepi/ude |
dbo:wikiPageID | 19263080 (xsd:integer) |
dbo:wikiPageLength | 4871 (xsd:nonNegativeInteger) |
dbo:wikiPageRevisionID | 1046959984 (xsd:integer) |
dbo:wikiPageWikiLink | dbr:Mojibake dbr:Character_encoding dbr:UTF-16 dbr:UTF-16LE dbr:UTF-8 dbr:München dbr:Content_sniffing dbc:Character_encoding dbr:Language_identification dbr:ASCII dbr:HTTP dbr:International_Components_for_Unicode dbr:Heuristic dbr:Digraphs_and_trigraphs dbr:Bush_hid_the_facts dbr:Byte_order_mark dbr:Metadata dbr:Microsoft_Windows dbr:Browser_sniffing dbr:Out-of-band_data dbr:Language_detection dbr:ISO-8859 |
dbp:wikiPageUsesTemplate | dbt:Mono dbt:Reflist dbt:Character_encoding |
dcterms:subject | dbc:Character_encoding |
gold:hypernym | dbr:Process |
rdf:type | dbo:Election |
rdfs:comment | 字符编码探测、字符集探测又稱為代码页检测是個启发式猜测代表文字的一系列字节的字符编码。其算法通常依据对字节样式的统计分析。这并不是一个万无一失的方法因为它依赖于统计数据——比如有些Windows版本会误把ASCII编码的"Bush hid the facts"当作中文UTF-16LE。 为数不多的能可靠探测的情况之一是探测UTF-8。这是因为UTF-8中有大量的无效字节序列,所以当其他编码方式使用字节中的高位bit时极不可能通过UTF-8有效性测试。不幸的是不完善的字符集探测程序不优先进行可靠的UTF-8测试于是把UTF-8定为其他编码。 (zh) Character encoding detection, charset detection, or code page detection is the process of heuristically guessing the character encoding of a series of bytes that represent text. The technique is recognised to be unreliable and is only used when specific metadata, such as an HTTP Content-Type: header, is either not available or is assumed to be untrustworthy. In general, incorrect charset detection leads to mojibake. The encoding can be stated out-of-band with an HTTP header such as Content-Type: text/html;charset=UTF-8; an isolated HTML document, such as one being edited as a file on disk, may imply such a header by a meta tag within the file, such as `<meta charset="UTF-8">`. (en) |
rdfs:label | Charset detection (en) 字符集探测 (zh) |
owl:sameAs | freebase:Charset detection wikidata:Charset detection dbpedia-zh:Charset detection https://global.dbpedia.org/id/4hpNs |
prov:wasDerivedFrom | wikipedia-en:Charset_detection?oldid=1046959984&ns=0 |
foaf:isPrimaryTopicOf | wikipedia-en:Charset_detection |
is dbo:wikiPageRedirects of | dbr:Codepage_sniffing dbr:Character_encoding_detection |
is dbo:wikiPageWikiLink of | dbr:Mojibake dbr:Unicode_and_HTML dbr:SubRip dbr:Codepage_sniffing dbr:Language_identification dbr:Character_encoding_detection dbr:Plain_text dbr:Bush_hid_the_facts dbr:Extended_ASCII |
is rdfs:seeAlso of | dbr:Content_sniffing |
is foaf:primaryTopic of | wikipedia-en:Charset_detection |
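
The abstract above singles out UTF-8 as the one encoding that can be detected reliably, because most byte sequences containing high-bit bytes are invalid UTF-8, and it notes that badly written detectors go wrong by not running that test first. The minimal Python sketch below illustrates the idea; `looks_like_utf8` is a hypothetical helper name and is not taken from any of the libraries linked above.

```python
def looks_like_utf8(data: bytes) -> bool:
    """Strict UTF-8 validity test.

    Because such a large fraction of byte sequences is invalid in UTF-8,
    text in any other encoding that uses bytes with the high bit set is
    extremely unlikely to pass this check.  (Pure ASCII passes too, which
    is harmless: ASCII is a subset of UTF-8.)
    """
    try:
        data.decode("utf-8", errors="strict")
    except UnicodeDecodeError:
        return False
    return True


# Run the reliable UTF-8 test *first*.  Guessing ISO-8859-1 before testing
# UTF-8 is exactly what turns UTF-8 "München" into the mojibake "MÃ¼nchen".
raw = "München".encode("utf-8")
print(looks_like_utf8(raw))       # True
print(raw.decode("iso-8859-1"))   # MÃ¼nchen (what a detector that skips the test shows)
```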
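The abstract also explains why UTF-16 detection should look for common characters (spaces, newlines) and NUL bytes at consistently even or odd offsets rather than relying on validity alone, citing the ASCII phrase "Bush hid the facts" that happens to be valid UTF-16LE. A rough sketch of that heuristic follows; `guess_utf16` and the 50% NUL threshold are assumptions made for illustration, not how Windows or any named library implements it.

```python
from typing import Optional


def guess_utf16(data: bytes) -> Optional[str]:
    """Heuristic UTF-16 check for mostly-ASCII text.

    In UTF-16LE the high (odd-offset) byte of each 16-bit unit of an ASCII
    character is NUL; in UTF-16BE it is the low (even-offset) byte.  Common
    characters such as space (U+0020) or newline (U+000A) should also show
    up once the data is read as 16-bit words.
    """
    if len(data) < 2 or len(data) % 2 != 0:
        return None
    pairs = len(data) // 2
    nul_even = sum(1 for i in range(0, len(data), 2) if data[i] == 0)
    nul_odd = sum(1 for i in range(1, len(data), 2) if data[i] == 0)
    if nul_odd > pairs // 2:      # assumed threshold: NULs in most high bytes
        order = "utf-16-le"
    elif nul_even > pairs // 2:
        order = "utf-16-be"
    else:
        return None
    decoded = data.decode(order, errors="ignore")
    # Demand common characters, not just validity, before accepting the guess.
    return order if (" " in decoded or "\n" in decoded) else None


# Validity alone is not enough: these 18 ASCII bytes also form assigned
# characters in UTF-16LE, which is how some Windows versions mis-detected
# "Bush hid the facts" as Chinese.
ascii_bytes = b"Bush hid the facts"
print(ascii_bytes.decode("utf-16-le"))                         # 畂桳栠摩琠敨映捡獴
print(guess_utf16(ascii_bytes))                                # None (no NUL bytes)
print(guess_utf16("Bush hid the facts".encode("utf-16-le")))   # utf-16-le
```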
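For everything that is not UTF-8 or UTF-16, the abstract describes statistical analysis of byte patterns, such as the frequency distribution of trigraphs in each language and code page to be detected. The sketch below shows the general shape of such a comparison under stated assumptions: the tiny reference snippets, the cosine-similarity scoring, and the function names are placeholders, and a real detector is trained on large corpora.

```python
import math
from collections import Counter
from typing import Dict, List


def trigraphs(text: str) -> Counter:
    """Frequencies of overlapping 3-character sequences."""
    return Counter(text[i:i + 3] for i in range(len(text) - 2))


def similarity(a: Counter, b: Counter) -> float:
    """Cosine similarity between two trigraph frequency vectors."""
    dot = sum(count * b[k] for k, count in a.items())
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0


def detect(data: bytes, profiles: Dict[str, Counter], encodings: List[str]) -> str:
    """Decode under each candidate encoding, score the result against every
    language profile, and return the candidate with the best-scoring match."""
    best, best_score = encodings[0], -1.0
    for enc in encodings:
        try:
            text = data.decode(enc)
        except UnicodeDecodeError:
            continue  # bytes are invalid in this encoding; rule it out
        score = max(similarity(trigraphs(text.lower()), p) for p in profiles.values())
        if score > best_score:
            best, best_score = enc, score
    return best


# Toy profiles from placeholder snippets; a real detector is trained on large
# corpora for every language / code-page pair it claims to recognise.
profiles = {
    "de": trigraphs("der die das und ist nicht ein eine mit für über münchen straße"),
    "en": trigraphs("the and is of to in that it was for with as his on be at by"),
}
sample = "für München und die Straße".encode("iso-8859-1")
print(detect(sample, profiles, ["utf-8", "iso-8859-1"]))   # iso-8859-1
```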
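Finally, the abstract recommends explicit labelling over guessing and notes that some UTF encodings mark the document with an initial byte order mark (BOM). Checking for a BOM is purely mechanical; the sketch below shows it, with `sniff_bom` as a hypothetical name. Absence of a BOM proves nothing about the encoding.

```python
import codecs
from typing import Optional


def sniff_bom(data: bytes) -> Optional[str]:
    """Return the encoding implied by an initial byte order mark, if any."""
    # Check the 4-byte UTF-32 signatures before UTF-16: the UTF-32LE BOM
    # begins with the same two bytes as the UTF-16LE BOM.
    if data.startswith(codecs.BOM_UTF32_LE) or data.startswith(codecs.BOM_UTF32_BE):
        return "utf-32"
    if data.startswith(codecs.BOM_UTF8):
        return "utf-8-sig"
    if data.startswith(codecs.BOM_UTF16_LE) or data.startswith(codecs.BOM_UTF16_BE):
        return "utf-16"
    return None


print(sniff_bom(codecs.BOM_UTF8 + "labelled".encode("utf-8")))   # utf-8-sig
print(sniff_bom("labelled".encode("utf-16")))                    # utf-16 (codec prepends a BOM)
print(sniff_bom(b"no BOM here"))                                 # None
```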