perlunicode(1)
NAME
perlunicode - Unicode support in Perl
DESCRIPTION
Important Caveats
Unicode support is an extensive requirement. While Perl does not imple-
ment the Unicode standard or the accompanying technical reports from
cover to cover, Perl does support many Unicode features.
Input and Output Layers
Perl knows when a filehandle uses Perl's internal Unicode encodings
(UTF-8, or UTF-EBCDIC if in EBCDIC) if the filehandle is opened
with the ":utf8" layer. Other encodings can be converted to Perl's
encoding on input or from Perl's encoding on output by use of the
":encoding(...)" layer. See open.
To indicate that Perl source itself is using a particular encoding,
see encoding.
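For instance, a minimal sketch of the layers in use (the filenames are
hypothetical):

    open(my $in,  "<:encoding(iso-8859-1)", "legacy.txt") or die $!;
    open(my $out, ">:utf8",                 "out.txt")    or die $!;
    print $out $_ while <$in>;   # Latin-1 in, UTF-8 out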
Regular Expressions
The regular expression compiler produces polymorphic opcodes. That
is, the pattern adapts to the data and automatically switches to
the Unicode character scheme when presented with Unicode data--or
instead uses a traditional byte scheme when presented with byte
data.
"use utf8" still needed to enable UTF-8/UTF-EBCDIC in scripts
As a compatibility measure, the "use utf8" pragma must be explic-
itly included to enable recognition of UTF-8 in the Perl scripts
themselves (in string or regular expression literals, or in identi-
fier names) on ASCII-based machines or to recognize UTF-EBCDIC on
EBCDIC-based machines. These are the only times when an explicit
"use utf8" is needed. See utf8.
You can also use the "encoding" pragma to change the default encod-
ing of the data in your script; see encoding.
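As a minimal sketch, a script whose source is saved in UTF-8 might
begin:

    use utf8;              # the source code itself is UTF-8
    my $word = "naïve";    # literal now parsed as characters, not bytes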
Byte and Character Semantics
Beginning with version 5.6, Perl uses logically-wide characters to rep-
resent strings internally.
In future, Perl-level operations will be expected to work with charac-
ters rather than bytes.
However, as an interim compatibility measure, Perl aims to provide a
safe migration path from byte semantics to character semantics for pro-
grams. For operations where Perl can unambiguously decide that the
input data are characters, Perl switches to character semantics. For
operations where this determination cannot be made without additional
information from the user, Perl decides in favor of compatibility and
chooses to use byte semantics.
This behavior preserves compatibility with earlier versions of Perl,
which allowed byte semantics in Perl operations only if none of the
program's inputs were marked as being a source of Unicode character
data. Such data may come from filehandles, from calls to external pro-
grams, from information provided by the system (such as %ENV), or from
literals and constants in the source text.
On Windows platforms, if the "-C" command line switch is used or the
${^WIDE_SYSTEM_CALLS} global flag is set to 1, all system calls will
use the corresponding wide-character APIs. This feature is available
only on Windows to conform to the API standard already established for
that platform--and there are very few non-Windows platforms that have
Unicode-aware APIs.
The "bytes" pragma will always, regardless of platform, force byte
semantics in a particular lexical scope. See bytes.
The "utf8" pragma is primarily a compatibility device that enables
recognition of UTF-(8|EBCDIC) in literals encountered by the parser.
Note that this pragma is only required while Perl defaults to byte
semantics; when character semantics become the default, this pragma may
become a no-op. See utf8.
Unless explicitly stated, Perl operators use character semantics for
Unicode data and byte semantics for non-Unicode data. The decision to
use character semantics is made transparently. If input data comes
from a Unicode source--for example, if a character encoding layer is
added to a filehandle or a literal Unicode string constant appears in a
program--character semantics apply. Otherwise, byte semantics are in
effect. The "bytes" pragma should be used to force byte semantics on
Unicode data.
If strings operating under byte semantics and strings with Unicode
character data are concatenated, the new string will be upgraded to ISO
8859-1 (Latin-1), even if the old Unicode string used EBCDIC. This
translation is done without regard to the system's native 8-bit encod-
ing, so to change this for systems with non-Latin-1 and non-EBCDIC
native encodings use the "encoding" pragma. See encoding.
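A sketch of such an upgrade on concatenation:

    my $byte = "\xE4";          # byte string, 0xE4 in the native encoding
    my $uni  = "\x{263A}";      # Unicode character data
    my $both = $byte . $uni;    # $byte is upgraded: 0xE4 is taken as
                                # Latin-1 LATIN SMALL LETTER A WITH DIAERESIS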
Under character semantics, many operations that formerly operated on
bytes now operate on characters. A character in Perl is logically just
a number ranging from 0 to 2**31 or so. Larger characters may encode
into longer sequences of bytes internally, but this internal detail is
mostly hidden for Perl code. See perluniintro for more.
Effects of Character Semantics
Character semantics have the following effects:
o Strings--including hash keys--and regular expression patterns may
contain characters that have an ordinal value larger than 255.
If you use a Unicode editor to edit your program, Unicode charac-
ters may occur directly within the literal strings in one of the
various Unicode encodings (UTF-8, UTF-EBCDIC, UCS-2, etc.), but
will be recognized as such and converted to Perl's internal repre-
sentation only if the appropriate encoding is specified.
Unicode characters can also be added to a string by using the
"\x{...}" notation. The Unicode code for the desired character, in
hexadecimal, should be placed in the braces. For instance, a smiley
face is "\x{263A}". This encoding scheme only works for characters
with a code of 0x100 or above.
Additionally, if you
use charnames ':full';
you can use the "\N{...}" notation and put the official Unicode
character name within the braces, such as "\N{WHITE SMILING FACE}".
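Both notations denote the same character, as this sketch shows:

    use charnames ':full';
    my $a = "\x{263A}";                 # by code point
    my $b = "\N{WHITE SMILING FACE}";   # by name
    print "same\n" if $a eq $b;         # prints "same"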
o If an appropriate encoding is specified, identifiers within the
Perl script may contain Unicode alphanumeric characters, including
ideographs. Perl does not currently attempt to canonicalize
variable names.
o Regular expressions match characters instead of bytes. "." matches
a character instead of a byte. The "\C" pattern is provided to
force a match of a single byte--a "char" in C, hence "\C".
o Character classes in regular expressions match characters instead
of bytes and match against the character properties specified in
the Unicode properties database. "\w" can be used to match a
Japanese ideograph, for instance.
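A sketch, using a Han ideograph:

    # U+65E5 is a Han ideograph (the character for "sun"/"day")
    print "word character\n" if "\x{65E5}" =~ /\w/;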
o Named Unicode properties, scripts, and block ranges may be used
like character classes via the "\p{}" "matches property" construct
and the "\P{}" negation, "doesn't match property".
For instance, "\p{Lu}" matches any character with the Unicode "Lu"
(Letter, uppercase) property, while "\p{M}" matches any character
with an "M" (mark--accents and such) property. Brackets are not
required for single letter properties, so "\p{M}" is equivalent to
"\pM". Many predefined properties are available, such as "\p{Mir-
rored}" and "\p{Tibetan}".
The official Unicode script and block names have spaces and dashes
as separators, but for convenience you can use dashes, spaces, or
underbars, and case is unimportant. It is recommended, however,
that for consistency you use the following naming: the official
Unicode script, property, or block name (see below for the addi-
tional rules that apply to block names) with whitespace and dashes
removed, and the words "uppercase-first-lowercase-rest". "Latin-1
Supplement" thus becomes "Latin1Supplement".
You can also use negation in both "\p{}" and "\P{}" by introducing
a caret (^) between the first brace and the property name:
"\p{^Tamil}" is equal to "\P{Tamil}".
Here are the basic Unicode General Category properties, followed by
their long form. You can use either; "\p{Lu}" and
"\p{UppercaseLetter}", for instance, are identical.
Short Long
L Letter
Lu UppercaseLetter
Ll LowercaseLetter
Lt TitlecaseLetter
Lm ModifierLetter
Lo OtherLetter
M Mark
Mn NonspacingMark
Mc SpacingMark
Me EnclosingMark
N Number
Nd DecimalNumber
Nl LetterNumber
No OtherNumber
P Punctuation
Pc ConnectorPunctuation
Pd DashPunctuation
Ps OpenPunctuation
Pe ClosePunctuation
Pi InitialPunctuation
(may behave like Ps or Pe depending on usage)
Pf FinalPunctuation
(may behave like Ps or Pe depending on usage)
Po OtherPunctuation
S Symbol
Sm MathSymbol
Sc CurrencySymbol
Sk ModifierSymbol
So OtherSymbol
Z Separator
Zs SpaceSeparator
Zl LineSeparator
Zp ParagraphSeparator
C Other
Cc Control
Cf Format
Cs Surrogate (not usable)
Co PrivateUse
Cn Unassigned
Single-letter properties match all characters in any of the two-
letter sub-properties starting with the same letter. "L&" is a
special case, which is an alias for "Ll", "Lu", and "Lt".
Because Perl hides the need for the user to understand the internal
representation of Unicode characters, there is no need to implement
the somewhat messy concept of surrogates. "Cs" is therefore not
supported.
Because scripts differ in their directionality--Hebrew is written
right to left, for example--Unicode supplies these properties:
Property Meaning
BidiL Left-to-Right
BidiLRE Left-to-Right Embedding
BidiLRO Left-to-Right Override
BidiR Right-to-Left
BidiAL Right-to-Left Arabic
BidiRLE Right-to-Left Embedding
BidiRLO Right-to-Left Override
BidiPDF Pop Directional Format
BidiEN European Number
BidiES European Number Separator
BidiET European Number Terminator
BidiAN Arabic Number
BidiCS Common Number Separator
BidiNSM Non-Spacing Mark
BidiBN Boundary Neutral
BidiB Paragraph Separator
BidiS Segment Separator
BidiWS Whitespace
BidiON Other Neutrals
For example, "\p{BidiR}" matches characters that are normally writ-
ten right to left.
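A small sketch of property matches:

    print "uppercase\n"     if "\x{100}" =~ /\p{Lu}/;     # A WITH MACRON
    print "right-to-left\n" if "\x{5D0}" =~ /\p{BidiR}/;  # HEBREW ALEF
    print "not Tamil\n"     if "A"       =~ /\p{^Tamil}/; # same as \P{Tamil}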
Scripts
The script names which can be used by "\p{...}" and "\P{...}", such as
in "\p{Latin}" or "\p{Cyrillic}", are as follows:
Arabic
Armenian
Bengali
Bopomofo
Buhid
CanadianAboriginal
Cherokee
Cyrillic
Deseret
Devanagari
Ethiopic
Georgian
Gothic
Greek
Gujarati
Gurmukhi
Han
Hangul
Hanunoo
Hebrew
Hiragana
Inherited
Kannada
Katakana
Khmer
Lao
Latin
Malayalam
Mongolian
Myanmar
Ogham
OldItalic
Oriya
Runic
Sinhala
Syriac
Tagalog
Tagbanwa
Tamil
Telugu
Thaana
Thai
Tibetan
Yi
Extended property classes can supplement the basic properties, defined
by the PropList Unicode database:
ASCIIHexDigit
BidiControl
Dash
Deprecated
Diacritic
Extender
GraphemeLink
HexDigit
Hyphen
Ideographic
IDSBinaryOperator
IDSTrinaryOperator
JoinControl
LogicalOrderException
NoncharacterCodePoint
OtherAlphabetic
OtherDefaultIgnorableCodePoint
OtherGraphemeExtend
OtherLowercase
OtherMath
OtherUppercase
QuotationMark
Radical
SoftDotted
TerminalPunctuation
UnifiedIdeograph
WhiteSpace
and there are further derived properties:
Alphabetic Lu + Ll + Lt + Lm + Lo + OtherAlphabetic
Lowercase Ll + OtherLowercase
Uppercase Lu + OtherUppercase
Math Sm + OtherMath
ID_Start Lu + Ll + Lt + Lm + Lo + Nl
ID_Continue ID_Start + Mn + Mc + Nd + Pc
Any Any character
Assigned Any non-Cn character (i.e. synonym for \P{Cn})
Unassigned Synonym for \p{Cn}
Common Any character (or unassigned code point)
not explicitly assigned to a script
For backward compatibility (with Perl 5.6), all properties mentioned so
far may have "Is" prepended to their name, so "\P{IsLu}", for example,
is equal to "\P{Lu}".
Blocks
In addition to scripts, Unicode also defines blocks of characters. The
difference between scripts and blocks is that the concept of scripts is
closer to natural languages, while the concept of blocks is more of an
artificial grouping based on groups of 256 Unicode characters. For
example, the "Latin" script contains letters from many blocks but does
not contain all the characters from those blocks. It does not, for
example, contain digits, because digits are shared across many scripts.
Digits and similar groups, like punctuation, are in a category called
"Common".
For more about scripts, see the UTR #24:
http://www.unicode.org/unicode/reports/tr24/
For more about blocks, see:
http://www.unicode.org/Public/UNIDATA/Blocks.txt
Block names are given with the "In" prefix. For example, the Katakana
block is referenced via "\p{InKatakana}". The "In" prefix may be omit-
ted if there is no naming conflict with a script or any other property,
but it is recommended that "In" always be used for block tests to avoid
confusion.
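For example (a sketch; U+30FC KATAKANA-HIRAGANA PROLONGED SOUND MARK
lies in the Katakana block but belongs to the Common script):

    "\x{30FC}" =~ /\p{InKatakana}/;   # true:  block test
    "\x{30FC}" =~ /\p{Katakana}/;     # false: script test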
These block names are supported:
InAlphabeticPresentationForms
InArabic
InArabicPresentationFormsA
InArabicPresentationFormsB
InArmenian
InArrows
InBasicLatin
InBengali
InBlockElements
InBopomofo
InBopomofoExtended
InBoxDrawing
InBraillePatterns
InBuhid
InByzantineMusicalSymbols
InCJKCompatibility
InCJKCompatibilityForms
InCJKCompatibilityIdeographs
InCJKCompatibilityIdeographsSupplement
InCJKRadicalsSupplement
InCJKSymbolsAndPunctuation
InCJKUnifiedIdeographs
InCJKUnifiedIdeographsExtensionA
InCJKUnifiedIdeographsExtensionB
InCherokee
InCombiningDiacriticalMarks
InCombiningDiacriticalMarksforSymbols
InCombiningHalfMarks
InControlPictures
InCurrencySymbols
InCyrillic
InCyrillicSupplementary
InDeseret
InDevanagari
InDingbats
InEnclosedAlphanumerics
InEnclosedCJKLettersAndMonths
InEthiopic
InGeneralPunctuation
InGeometricShapes
InGeorgian
InGothic
InGreekExtended
InGreekAndCoptic
InGujarati
InGurmukhi
InHalfwidthAndFullwidthForms
InHangulCompatibilityJamo
InHangulJamo
InHangulSyllables
InHanunoo
InHebrew
InHighPrivateUseSurrogates
InHighSurrogates
InHiragana
InIPAExtensions
InIdeographicDescriptionCharacters
InKanbun
InKangxiRadicals
InKannada
InKatakana
InKatakanaPhoneticExtensions
InKhmer
InLao
InLatin1Supplement
InLatinExtendedA
InLatinExtendedAdditional
InLatinExtendedB
InLetterlikeSymbols
InLowSurrogates
InMalayalam
InMathematicalAlphanumericSymbols
InMathematicalOperators
InMiscellaneousMathematicalSymbolsA
InMiscellaneousMathematicalSymbolsB
InMiscellaneousSymbols
InMiscellaneousTechnical
InMongolian
InMusicalSymbols
InMyanmar
InNumberForms
InOgham
InOldItalic
InOpticalCharacterRecognition
InOriya
InPrivateUseArea
InRunic
InSinhala
InSmallFormVariants
InSpacingModifierLetters
InSpecials
InSuperscriptsAndSubscripts
InSupplementalArrowsA
InSupplementalArrowsB
InSupplementalMathematicalOperators
InSupplementaryPrivateUseAreaA
InSupplementaryPrivateUseAreaB
InSyriac
InTagalog
InTagbanwa
InTags
InTamil
InTelugu
InThaana
InThai
InTibetan
InUnifiedCanadianAboriginalSyllabics
InVariationSelectors
InYiRadicals
InYiSyllables
o The special pattern "\X" matches any extended Unicode sequence--"a
combining character sequence" in Standardese--where the first char-
acter is a base character and subsequent characters are mark char-
acters that apply to the base character. "\X" is equivalent to
"(?:\PM\pM*)".
o The "tr///" operator translates characters instead of bytes. Note
that the "tr///CU" functionality has been removed. For similar
functionality see pack('U0', ...) and pack('C0', ...).
o Case translation operators use the Unicode case translation tables
when character input is provided. Note that "uc()", or "\U" in
interpolated strings, translates to uppercase, while "ucfirst", or
"\u" in interpolated strings, translates to titlecase in languages
that make the distinction.
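The uppercase/titlecase distinction can be seen with the DZ digraph
(a sketch):

    my $dz = "\x{01F3}";     # LATIN SMALL LETTER DZ
    print uc($dz);           # "\x{01F1}", LATIN CAPITAL LETTER DZ
    print ucfirst($dz);      # "\x{01F2}", the titlecase form "Dz"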
o Most operators that deal with positions or lengths in a string will
automatically switch to using character positions, including
"chop()", "substr()", "pos()", "index()", "rindex()", "sprintf()",
"write()", and "length()". Operators that specifically do not
switch include "vec()", "pack()", and "unpack()". Operators that
really don't care include "chomp()", operators that treat strings
as a bucket of bits such as "sort()", and operators dealing with
filenames.
o The "pack()"/"unpack()" letters "c" and "C" do not change, since
they are often used for byte-oriented formats. Again, think "char"
in the C language.
There is a new "U" specifier that converts between Unicode charac-
ters and code points.
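A sketch of the "U" specifier:

    my $str    = pack("U*", 0x263A, 0x263B);   # code points to characters
    my @points = unpack("U*", $str);           # (0x263A, 0x263B) again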
o The "chr()" and "ord()" functions work on characters, similar to
"pack("U")" and "unpack("U")", not "pack("C")" and "unpack("C")".
"pack("C")" and "unpack("C")" are methods for emulating byte-ori-
ented "chr()" and "ord()" on Unicode strings. While these methods
reveal the internal encoding of Unicode strings, that is not some-
thing one normally needs to care about at all.
o The bit string operators, "& | ^ ~", can operate on character data.
However, for backward compatibility reasons, one should not use "~"
(the bit complement) on strings that mix characters with ordinal
values less than 256 and greater than 256. Most impor-
tantly, DeMorgan's laws ("~($x|$y) eq ~$x&~$y" and "~($x&$y) eq
~$x|~$y") will not hold. The reason for this mathematical faux pas
is that the complement cannot return both the 8-bit (byte-wide) bit
complement and the full character-wide bit complement.
o lc(), uc(), lcfirst(), and ucfirst() work for the following cases:
o the case mapping is from a single Unicode character to
another single Unicode character, or
o the case mapping is from a single Unicode character to more
than one Unicode character.
The following cases do not yet work:
o the "final sigma" (Greek), and
o anything to with locales (Lithuanian, Turkish, Azeri).
See the Unicode Technical Report #21, Case Mappings, for more
details.
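The one-to-many mapping mentioned above can be seen with the German
sharp s (a sketch; the explicit upgrade ensures character semantics
apply):

    my $sharp = "\x{DF}";    # LATIN SMALL LETTER SHARP S
    utf8::upgrade($sharp);
    print uc($sharp);        # "SS": one character becomes two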
o And finally, "scalar reverse()" reverses by character rather than
by byte.
User-Defined Character Properties
You can define your own character properties by defining subroutines
whose names begin with "In" or "Is". The subroutines must be visible
in the package that uses the properties. The user-defined properties
can be used in the regular expression "\p" and "\P" constructs.
The subroutines must return a specially-formatted string, with one or
more newline-separated lines. Each line must be one of the following:
o Two hexadecimal numbers separated by horizontal whitespace (space
or tabular characters) denoting a range of Unicode code points to
include.
o Something to include, prefixed by "+": a built-in character prop-
erty (prefixed by "utf8::"), to represent all the characters in
that property; two hexadecimal code points for a range; or a single
hexadecimal code point.
o Something to exclude, prefixed by "-": an existing character prop-
erty (prefixed by "utf8::"), for all the characters in that prop-
erty; two hexadecimal code points for a range; or a single hexadec-
imal code point.
o Something to negate, prefixed "!": an existing character property
(prefixed by "utf8::") for all the characters except the characters
in the property; two hexadecimal code points for a range; or a sin-
gle hexadecimal code point.
For example, to define a property that covers both the Japanese syl-
labaries (hiragana and katakana), you can define
sub InKana {
return <<END;
3040\t309F
30A0\t30FF
END
}
Imagine that the here-doc end marker ("END") is at the beginning of
its line, as Perl requires.
Now you can use "\p{InKana}" and "\P{InKana}".
You could also have used the existing block property names:
sub InKana {
return <<'END';
+utf8::InHiragana
+utf8::InKatakana
END
}
Suppose you wanted to match only the allocated characters, not the raw
block ranges: in other words, you want to remove the unassigned code
points:
sub InKana {
return <<'END';
+utf8::InHiragana
+utf8::InKatakana
-utf8::IsCn
END
}
The negation is useful for defining (surprise!) negated classes.
sub InNotKana {
return <<'END';
!utf8::InHiragana
-utf8::InKatakana
+utf8::IsCn
END
}
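Once such subroutines are visible in the current package, the
properties work like any other (a sketch):

    print "kana\n"  if "\x{3042}" =~ /\p{InKana}/;    # HIRAGANA LETTER A
    print "other\n" if "A"        =~ /\p{InNotKana}/;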
Character Encodings for Input and Output
See Encode.
Unicode Regular Expression Support Level
The following list of Unicode support for regular expressions describes
all the features currently supported. The references to "Level N" and
the section numbers refer to the Unicode Technical Report 18, "Unicode
Regular Expression Guidelines".
o Level 1 - Basic Unicode Support
2.1 Hex Notation - done [1]
Named Notation - done [2]
2.2 Categories - done [3][4]
2.3 Subtraction - MISSING [5][6]
2.4 Simple Word Boundaries - done [7]
2.5 Simple Loose Matches - done [8]
2.6 End of Line - MISSING [9][10]
[ 1] \x{...}
[ 2] \N{...}
[ 3] . \p{...} \P{...}
[ 4] now scripts (see UTR#24 Script Names) in addition to blocks
[ 5] have negation
[ 6] can use regular expression look-ahead [a]
or user-defined character properties [b] to emulate subtraction
[ 7] include Letters in word characters
[ 8] note that Perl does Full case-folding in matching, not Simple:
for example U+1F88 is equivalent to U+1F00 U+03B9,
not to U+1F80. This difference matters for certain Greek
capital letters with certain modifiers: the Full case-folding
decomposes the letter, while the Simple case-folding would map
it to a single character.
[ 9] see UTR#13 Unicode Newline Guidelines
[10] should do ^ and $ also on \x{85}, \x{2028} and \x{2029}
(should also affect <>, $., and script line numbers)
(the \x{85}, \x{2028} and \x{2029} do match \s)
[a] You can mimic class subtraction using lookahead. For example,
what TR18 might write as
[{Greek}-[{UNASSIGNED}]]
in Perl can be written as:
(?!\p{Unassigned})\p{InGreekAndCoptic}
(?=\p{Assigned})\p{InGreekAndCoptic}
But in this particular example, you probably really want
\p{Greek}
which will match assigned characters known to be part of the Greek
script.
[b] See "User-Defined Character Properties".
o Level 2 - Extended Unicode Support
3.1 Surrogates - MISSING
3.2 Canonical Equivalents - MISSING [11][12]
3.3 Locale-Independent Graphemes - MISSING [13]
3.4 Locale-Independent Words - MISSING [14]
3.5 Locale-Independent Loose Matches - MISSING [15]
[11] see UTR#15 Unicode Normalization
[12] have Unicode::Normalize but not integrated to regexes
[13] have \X but at this level . should equal that
[14] need three classes, not just \w and \W
[15] see UTR#21 Case Mappings
o Level 3 - Locale-Sensitive Support
4.1 Locale-Dependent Categories - MISSING
4.2 Locale-Dependent Graphemes - MISSING [16][17]
4.3 Locale-Dependent Words - MISSING
4.4 Locale-Dependent Loose Matches - MISSING
4.5 Locale-Dependent Ranges - MISSING
[16] see UTR#10 Unicode Collation Algorithms
[17] have Unicode::Collate but not integrated to regexes
Unicode Encodings
Unicode characters are assigned to code points, which are abstract num-
bers. To use these numbers, various encodings are needed.
o UTF-8
UTF-8 is a variable-length (1 to 6 bytes, current character alloca-
tions require 4 bytes), byte-order independent encoding. For ASCII
(and we really do mean 7-bit ASCII, not another 8-bit encoding),
UTF-8 is transparent.
The following table is from Unicode 3.2.
Code Points 1st Byte 2nd Byte 3rd Byte 4th Byte
U+0000..U+007F 00..7F
U+0080..U+07FF C2..DF 80..BF
U+0800..U+0FFF E0 A0..BF 80..BF
U+1000..U+CFFF E1..EC 80..BF 80..BF
U+D000..U+D7FF ED 80..9F 80..BF
U+D800..U+DFFF ******* ill-formed *******
U+E000..U+FFFF EE..EF 80..BF 80..BF
U+10000..U+3FFFF F0 90..BF 80..BF 80..BF
U+40000..U+FFFFF F1..F3 80..BF 80..BF 80..BF
U+100000..U+10FFFF F4 80..8F 80..BF 80..BF
Note the "A0..BF" in "U+0800..U+0FFF", the "80..9F" in
"U+D000...U+D7FF", the "90..B"F in "U+10000..U+3FFFF", and the
"80...8F" in "U+100000..U+10FFFF". The "gaps" are caused by legal
UTF-8 avoiding non-shortest encodings: it is technically possible
to UTF-8-encode a single code point in different ways, but that is
explicitly forbidden, and the shortest possible encoding should
always be used. So that's what Perl does.
Another way to look at it is via bits:
Code Points 1st Byte 2nd Byte 3rd Byte 4th Byte
0aaaaaaa 0aaaaaaa
00000bbbbbaaaaaa 110bbbbb 10aaaaaa
ccccbbbbbbaaaaaa 1110cccc 10bbbbbb 10aaaaaa
00000dddccccccbbbbbbaaaaaa 11110ddd 10cccccc 10bbbbbb 10aaaaaa
As you can see, the continuation bytes all begin with 10, and the
leading bits of the start byte tell how many bytes there are in the
encoded character.
o UTF-EBCDIC
Like UTF-8 but EBCDIC-safe, in the way that UTF-8 is ASCII-safe.
o UTF-16, UTF-16BE, UTF-16LE, Surrogates, and BOMs (Byte Order Marks)
The following items are mostly for reference and general Unicode
knowledge; Perl doesn't use these constructs internally.
UTF-16 is a 2 or 4 byte encoding. The Unicode code points
"U+0000..U+FFFF" are stored in a single 16-bit unit, and the code
points "U+10000..U+10FFFF" in two 16-bit units. The latter case is
using surrogates, the first 16-bit unit being the high surrogate,
and the second being the low surrogate.
Surrogates are code points set aside to encode the
"U+10000..U+10FFFF" range of Unicode code points in pairs of 16-bit
units. The high surrogates are the range "U+D800..U+DBFF", and the
low surrogates are the range "U+DC00..U+DFFF". The surrogate
encoding is
$hi = ($uni - 0x10000) / 0x400 + 0xD800;
$lo = ($uni - 0x10000) % 0x400 + 0xDC00;
and the decoding is
$uni = 0x10000 + ($hi - 0xD800) * 0x400 + ($lo - 0xDC00);
If you try to generate surrogates (for example by using chr()), you
will get a warning if warnings are turned on, because those code
points are not valid for a Unicode character.
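A worked example of the encoding (a sketch; note that in Perl the
division needs an explicit int(), since "/" is floating-point):

    my $uni = 0x1D11E;                                 # MUSICAL SYMBOL G CLEF
    my $hi  = int(($uni - 0x10000) / 0x400) + 0xD800;  # 0xD834
    my $lo  =     ($uni - 0x10000) % 0x400  + 0xDC00;  # 0xDD1E
    printf "U+%04X U+%04X\n", $hi, $lo;                # U+D834 U+DD1E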
Because of the 16-bitness, UTF-16 is byte-order dependent. UTF-16
itself can be used for in-memory computations, but if storage or
transfer is required either UTF-16BE (big-endian) or UTF-16LE (lit-
tle-endian) encodings must be chosen.
This introduces another problem: what if you just know that your
data is UTF-16, but you don't know which endianness? Byte Order
Marks, or BOMs, are a solution to this. A special character has
been reserved in Unicode to function as a byte order marker: the
character with the code point "U+FEFF" is the BOM.
The trick is that if you read a BOM, you will know the byte order,
since if it was written on a big-endian platform, you will read the
bytes "0xFE 0xFF", but if it was written on a little-endian plat-
form, you will read the bytes "0xFF 0xFE". (And if the originating
platform was writing in UTF-8, you will read the bytes "0xEF 0xBB
0xBF".)
The way this trick works is that the character with the code point
"U+FFFE" is guaranteed not to be a valid Unicode character, so the
sequence of bytes "0xFF 0xFE" is unambiguously "BOM, represented in
little-endian format" and cannot be "U+FFFE, represented in big-
endian format".
o UTF-32, UTF-32BE, UTF-32LE
The UTF-32 family is pretty much like the UTF-16 family, except
that the units are 32-bit, and therefore the surrogate scheme is
not needed. The BOM signatures will be "0x00 0x00 0xFE 0xFF" for
BE and "0xFF 0xFE 0x00 0x00" for LE.
o UCS-2, UCS-4
Encodings defined by the ISO 10646 standard. UCS-2 is a 16-bit
encoding. Unlike UTF-16, UCS-2 is not extensible beyond "U+FFFF",
because it does not use surrogates. UCS-4 is a 32-bit encoding,
functionally identical to UTF-32.
o UTF-7
A seven-bit safe (non-eight-bit) encoding, which is useful if the
transport or storage is not eight-bit safe. Defined by RFC 2152.
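To convert between Perl strings and any of these encodings, use the
Encode module; a sketch:

    use Encode;
    my $be  = encode("UTF-16BE", "\x{263A}");  # 0x26 0x3A, no BOM
    my $bom = encode("UTF-16",   "\x{263A}");  # BOM first: 0xFE 0xFF 0x26 0x3A
    my $str = decode("UTF-16BE", $be);         # back to a Perl string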
Security Implications of Unicode
o Malformed UTF-8
Unfortunately, the specification of UTF-8 leaves some room for
interpretation of how many bytes of encoded output one should gen-
erate from one input Unicode character. Strictly speaking, the
shortest possible sequence of UTF-8 bytes should be generated,
because otherwise there is potential for an input buffer overflow
at the receiving end of a UTF-8 connection. Perl always generates
the shortest length UTF-8, and with warnings on Perl will warn
about non-shortest length UTF-8 along with other malformations,
such as the surrogates, which are not real Unicode code points.
o Regular expressions behave slightly differently between byte data
and character (Unicode) data. For example, the "word character"
character class "\w" will work differently depending on if data is
eight-bit bytes or Unicode.
In the first case, the set of "\w" characters is either small--the
default set of alphabetic characters, digits, and the "_"--or, if
you are using a locale (see perllocale), the "\w" might contain a
few more letters according to your language and country.
In the second case, the "\w" set of characters is much, much
larger. Most importantly, even in the set of the first 256 charac-
ters, it will probably match different characters: unlike most
locales, which are specific to a language and country pair, Unicode
classifies all the characters that are letters somewhere as "\w".
For example, your locale might not think that LATIN SMALL LETTER
ETH is a letter (unless you happen to speak Icelandic), but Unicode
does.
As discussed elsewhere, Perl has one foot (two hooves?) planted in
each of two worlds: the old world of bytes and the new world of
characters, upgrading from bytes to characters when necessary. If
your legacy code does not explicitly use Unicode, no automatic
switch-over to characters should happen. Characters shouldn't get
downgraded to bytes, either. It is possible to accidentally mix
bytes and characters, however (see perluniintro), in which case
"\w" in regular expressions might start behaving differently.
Review your code. Use warnings and the "strict" pragma.
Unicode in Perl on EBCDIC
The way Unicode is handled on EBCDIC platforms is still experimental.
On such platforms, references to UTF-8 encoding in this document and
elsewhere should be read as meaning the UTF-EBCDIC specified in Unicode
Technical Report 16, unless ASCII vs. EBCDIC issues are specifically
discussed. There is no "utfebcdic" pragma or ":utfebcdic" layer;
rather, "utf8" and ":utf8" are reused to mean the platform's "natural"
8-bit encoding of Unicode. See perlebcdic for more discussion of the
issues.
Locales
Usually locale settings and Unicode do not affect each other, but there
are a couple of exceptions:
o If your locale environment variables (LANGUAGE, LC_ALL, LC_CTYPE,
LANG) contain the strings 'UTF-8' or 'UTF8' (case-insensitive
matching), the default encodings of your STDIN, STDOUT, and STDERR,
and of any subsequent file open, are considered to be UTF-8.
o Perl tries really hard to work both with Unicode and the old byte-
oriented world. Most often this is nice, but sometimes Perl's
straddling of the proverbial fence causes problems.
Using Unicode in XS
If you want to handle Perl Unicode in XS extensions, you may find the
following C APIs useful. See perlapi for details.
o "DO_UTF8(sv)" returns true if the "UTF8" flag is on and the bytes
pragma is not in effect. "SvUTF8(sv)" returns true if the "UTF8"
flag is on; the bytes pragma is ignored. The "UTF8" flag being on
does not mean that there are any characters of code points greater
than 255 (or 127) in the scalar or that there are even any charac-
ters in the scalar. What the "UTF8" flag means is that the
sequence of octets in the representation of the scalar is the
sequence of UTF-8 encoded code points of the characters of a
string. The "UTF8" flag being off means that each octet in this
representation encodes a single character with code point 0..255
within the string. Perl's Unicode model is not to use UTF-8 until
it is absolutely necessary.
o "uvuni_to_utf8(buf, chr") writes a Unicode character code point
into a buffer encoding the code point as UTF-8, and returns a
pointer pointing after the UTF-8 bytes.
o "utf8_to_uvuni(buf, lenp)" reads UTF-8 encoded bytes from a buffer
and returns the Unicode character code point and, optionally, the
length of the UTF-8 byte sequence.
o "utf8_length(start, end)" returns the length of the UTF-8 encoded
buffer in characters. "sv_len_utf8(sv)" returns the length of the
UTF-8 encoded scalar.
o "sv_utf8_upgrade(sv)" converts the string of the scalar to its
UTF-8 encoded form. "sv_utf8_downgrade(sv)" does the opposite, if
possible. "sv_utf8_encode(sv)" is like sv_utf8_upgrade except that
it does not set the "UTF8" flag. "sv_utf8_decode()" does the oppo-
site of "sv_utf8_encode()". Note that none of these are to be used
as general-purpose encoding or decoding interfaces: "use Encode"
for that. "sv_utf8_upgrade()" is affected by the encoding pragma
but "sv_utf8_downgrade()" is not (since the encoding pragma is
designed to be a one-way street).
o "is_utf8_char(s)" returns true if the pointer points to a valid UTF-8
character.
o "is_utf8_string(buf, len)" returns true if "len" bytes of the
buffer are valid UTF-8.
o "UTF8SKIP(buf)" will return the number of bytes in the UTF-8
encoded character in the buffer. "UNISKIP(chr)" will return the
number of bytes required to UTF-8-encode the Unicode character code
point. "UTF8SKIP()" is useful for example for iterating over the
characters of a UTF-8 encoded buffer; "UNISKIP()" is useful, for
example, in computing the size required for a UTF-8 encoded buffer.
o "utf8_distance(a, b)" will tell the distance in characters between
the two pointers pointing to the same UTF-8 encoded buffer.
o "utf8_hop(s, off)" will return a pointer to an UTF-8 encoded buffer
that is "off" (positive or negative) Unicode characters displaced
from the UTF-8 buffer "s". Be careful not to overstep the buffer:
"utf8_hop()" will merrily run off the end or the beginning of the
buffer if told to do so.
o "pv_uni_display(dsv, spv, len, pvlim, flags)" and "sv_uni_dis-
play(dsv, ssv, pvlim, flags)" are useful for debugging the output
of Unicode strings and scalars. By default they are useful only
for debugging--they display all characters as hexadecimal code
points--but with the flags "UNI_DISPLAY_ISPRINT", "UNI_DIS-
PLAY_BACKSLASH", and "UNI_DISPLAY_QQ" you can make the output more
readable.
o "ibcmp_utf8(s1, pe1, u1, l1, u1, s2, pe2, l2, u2)" can be used to
compare two strings case-insensitively in Unicode. For case-sensi-
tive comparisons you can just use "memEQ()" and "memNE()" as usual.
For more information, see perlapi, and utf8.c and utf8.h in the Perl
source code distribution.
BUGS
Interaction with Locales
Use of locales with Unicode data may lead to odd results. Currently,
Perl attempts to attach 8-bit locale info to characters in the range
0..255, but this technique is demonstrably incorrect for locales that
use characters above that range when mapped into Unicode. Perl's Uni-
code support will also tend to run slower. Use of locales with Unicode
is discouraged.
Interaction with Extensions
When Perl exchanges data with an extension, the extension should be
able to understand the UTF-8 flag and act accordingly. If the extension
doesn't know about the flag, it's likely that the extension will return
incorrectly-flagged data.
So if you're working with Unicode data, consult the documentation of
every module you're using to see whether there are any issues with
Unicode data exchange. If the documentation does not talk about
Unicode at all, sus-
pect the worst and probably look at the source to learn how the module
is implemented. Modules written completely in Perl shouldn't cause
problems. Modules that directly or indirectly access code written in
other programming languages are at risk.
For affected functions, the simple strategy to avoid data corruption is
to always make the encoding of the exchanged data explicit. Choose an
encoding that you know the extension can handle. Convert arguments
passed to the extensions to that encoding and convert results back from
that encoding. Write wrapper functions that do the conversions for you,
so you can later change the functions when the extension catches up.
To provide an example, let's say the popular Foo::Bar::escape_html
function doesn't deal with Unicode data yet. The wrapper function would
convert the argument to raw UTF-8 and convert the result back to Perl's
internal representation like so:
use Encode;   # provides encode_utf8() and decode_utf8()
sub my_escape_html ($) {
my($what) = shift;
return unless defined $what;
Encode::decode_utf8(Foo::Bar::escape_html(Encode::encode_utf8($what)));
}
Sometimes, when the extension does not convert data but just stores and
retrieves them, you will be in a position to use the otherwise danger-
ous Encode::_utf8_on() function. Let's say the popular "Foo::Bar"
extension, written in C, provides a "param" method that lets you store
and retrieve data according to these prototypes:
$self->param($name, $value); # set a scalar
$value = $self->param($name); # retrieve a scalar
If it does not yet provide support for any encoding, one could write a
derived class with such a "param" method:
sub param {
my($self,$name,$value) = @_;
utf8::upgrade($name); # make sure it is UTF-8 encoded
if (defined $value) {
utf8::upgrade($value); # make sure it is UTF-8 encoded
return $self->SUPER::param($name,$value);
} else {
my $ret = $self->SUPER::param($name);
Encode::_utf8_on($ret); # we know it is UTF-8 encoded
return $ret;
}
}
Some extensions provide filters on data entry/exit points, such as
DB_File::filter_store_key and family. Look out for such filters in the
documentation of your extensions; they can make the transition to Uni-
code data much easier.
Speed
Some functions are slower when working on UTF-8 encoded strings than on
byte encoded strings. All functions that need to hop over characters
such as length(), substr() or index() can work much faster when the
underlying data are byte-encoded. Witness the following benchmark:
% perl -e '
use Benchmark;
use strict;
our $l = 10000;
our $u = our $b = "x" x $l;
substr($u,0,1) = "\x{100}";
timethese(-2,{
LENGTH_B => q{ length($b) },
LENGTH_U => q{ length($u) },
SUBSTR_B => q{ substr($b, $l/4, $l/2) },
SUBSTR_U => q{ substr($u, $l/4, $l/2) },
});
'
Benchmark: running LENGTH_B, LENGTH_U, SUBSTR_B, SUBSTR_U for at least 2 CPU seconds...
LENGTH_B: 2 wallclock secs ( 2.36 usr + 0.00 sys = 2.36 CPU) @ 5649983.05/s (n=13333960)
LENGTH_U: 2 wallclock secs ( 2.11 usr + 0.00 sys = 2.11 CPU) @ 12155.45/s (n=25648)
SUBSTR_B: 3 wallclock secs ( 2.16 usr + 0.00 sys = 2.16 CPU) @ 374480.09/s (n=808877)
SUBSTR_U: 2 wallclock secs ( 2.11 usr + 0.00 sys = 2.11 CPU) @ 6791.00/s (n=14329)
The numbers show an incredible slowness on long UTF-8 strings. You
should carefully avoid using these functions in tight loops. If you
want to iterate over characters, the superior coding technique would
split the characters into an array instead of using substr, as the fol-
lowing benchmark shows:
% perl -e '
use Benchmark;
use strict;
our $l = 10000;
our $u = our $b = "x" x $l;
substr($u,0,1) = "\x{100}";
timethese(-5,{
SPLIT_B => q{ for my $c (split //, $b){} },
SPLIT_U => q{ for my $c (split //, $u){} },
SUBSTR_B => q{ for my $i (0..length($b)-1){my $c = substr($b,$i,1);} },
SUBSTR_U => q{ for my $i (0..length($u)-1){my $c = substr($u,$i,1);} },
});
'
Benchmark: running SPLIT_B, SPLIT_U, SUBSTR_B, SUBSTR_U for at least 5 CPU seconds...
SPLIT_B: 6 wallclock secs ( 5.29 usr + 0.00 sys = 5.29 CPU) @ 56.14/s (n=297)
SPLIT_U: 5 wallclock secs ( 5.17 usr + 0.01 sys = 5.18 CPU) @ 55.21/s (n=286)
SUBSTR_B: 5 wallclock secs ( 5.34 usr + 0.00 sys = 5.34 CPU) @ 123.22/s (n=658)
SUBSTR_U: 7 wallclock secs ( 6.20 usr + 0.00 sys = 6.20 CPU) @ 0.81/s (n=5)
Even though the algorithm based on "substr()" is faster than "split()"
for byte-encoded data, it pales in comparison to the speed of "split()"
when used with UTF-8 data.
SEE ALSO
perluniintro, encoding, Encode, open, utf8, bytes, perlretut,
"${^WIDE_SYSTEM_CALLS}" in perlvar
perl v5.8.0 2002-06-08 PERLUNICODE(1)