{ $values { "str" "a string" } { "upper" "a string" } }\r
{ $description "Converts an ASCII string to upper case." } ;\r
\r
-ARTICLE: "ascii" "ASCII character classes"\r
-"The " { $vocab-link "ascii" } " vocabulary implements traditional ASCII character classes:"\r
+ARTICLE: "ascii" "ASCII"\r
+"The " { $vocab-link "ascii" } " vocabulary implements support for the legacy ASCII character set. Most applications should use " { $link "unicode" } " instead."\r
+$nl\r
+"ASCII character classes:"\r
{ $subsection blank? }\r
{ $subsection letter? }\r
{ $subsection LETTER? }\r
@@ -67,11 +69,10 @@ ARTICLE: "ascii" "ASCII character classes"
{ $subsection control? }\r
{ $subsection quotable? }\r
{ $subsection ascii? }\r
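+$nl
+"For example, to test whether a code point is a lower-cased ASCII letter:"\r
+{ $example "USING: ascii prettyprint ;" "CHAR: a letter? ." "t" }\r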
-"ASCII case conversion is also implemented:"\r
+"ASCII case conversion:"\r
{ $subsection ch>lower }\r
{ $subsection ch>upper }\r
{ $subsection >lower }\r
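+$nl
+"For example:"\r
+{ $example "USING: ascii io ;" "\"HELLO\" >lower print" "hello" }\r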
-{ $subsection >upper }\r
-"Modern applications should use Unicode 5.1 instead (" { $vocab-link "unicode.categories" } ")." ;\r
! See http://factorcode.org/license.txt for BSD license.
USING: help.markup help.syntax kernel ;
IN: unicode.categories
HELP: LETTER
-{ $class-description "The class of upper cased letters" } ;
+{ $class-description "The class of upper cased letters." } ;
HELP: Letter
-{ $class-description "The class of letters" } ;
+{ $class-description "The class of letters." } ;
HELP: alpha
-{ $class-description "The class of code points which are alphanumeric" } ;
+{ $class-description "The class of alphanumeric characters." } ;
HELP: blank
-{ $class-description "The class of code points which are whitespace" } ;
+{ $class-description "The class of whitespace characters." } ;
HELP: character
-{ $class-description "The class of numbers which are pre-defined Unicode code points" } ;
+{ $class-description "The class of pre-defined Unicode code points." } ;
HELP: control
-{ $class-description "The class of control characters" } ;
+{ $class-description "The class of control characters." } ;
HELP: digit
-{ $class-description "The class of code coints which are digits" } ;
+{ $class-description "The class of digits." } ;
HELP: letter
-{ $class-description "The class of code points which are lower-cased letters" } ;
+{ $class-description "The class of lower-cased letters." } ;
HELP: printable
-{ $class-description "The class of characters which are printable, as opposed to being control or formatting characters" } ;
+{ $class-description "The class of characters which are printable, as opposed to being control or formatting characters." } ;
HELP: uncased
-{ $class-description "The class of letters which don't have a case" } ;
+{ $class-description "The class of letters which don't have a case." } ;
ARTICLE: "unicode.categories" "Character classes"
-{ $vocab-link "unicode.categories" } " is a vocabulary which provides predicates for determining if a code point has a particular property, for example being a lower cased letter. These should be used in preference to the " { $vocab-link "ascii" } " equivalents in most cases. Below are links to classes of characters, but note that each of these also has a predicate defined, which is usually more useful."
+"The " { $vocab-link "unicode.categories" } " vocabulary implements predicates for determining if a code point has a particular property, for example being a lower cased letter. These should be used in preference to the " { $vocab-link "ascii" } " equivalents in most cases. Each character class has an associated predicate word."
-"The " { $vocab-link "unicode.normalize" "unicode.normalize" } " vocabulary defines words for normalizing Unicode strings. In Unicode, it is often possible to have multiple sequences of characters which really represent exactly the same thing. For example, to represent e with an acute accent above, there are two possible strings: \"e\\u000301\" (the e character, followed by the combining acute accent character) and \"\\u0000e9\" (a single character, e with an acute accent). There are four normalization forms: NFD, NFC, NFKD, and NFKC. Basically, in NFD and NFKD, everything is expanded, whereas in NFC and NFKC, everything is contracted. In NFKD and NFKC, more things are expanded and contracted. This is a process which loses some information, so it should be done only with care. Most of the world uses NFC to communicate, but for many purposes, NFD/NFKD is easier to process. For more information, see Unicode Standard Annex #15 and section 3 of the Unicode standard."
+"The " { $vocab-link "unicode.normalize" "unicode.normalize" } " vocabulary defines words for normalizing Unicode strings."
+$nl
+"In Unicode, it is often possible to have multiple sequences of characters which really represent exactly the same thing. For example, to represent e with an acute accent above, there are two possible strings: " { $snippet "\"e\\u000301\"" } " (the e character, followed by the combining acute accent character) and " { $snippet "\"\\u0000e9\"" } " (a single character, e with an acute accent)."
+$nl
+"There are four normalization forms: NFD, NFC, NFKD, and NFKC. In NFD and NFKD, characters are decomposed (expanded), whereas in NFC and NFKC they are composed (contracted). NFKD and NFKC additionally apply compatibility mappings, decomposing and composing more characters. This loses some information, so it should be done only with care."
+$nl
+"Most of the world uses NFC to communicate, but for many purposes, NFD/NFKD is easier to process. For more information, see Unicode Standard Annex #15 and section 3 of the Unicode standard."
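+$nl
+"For example, NFD decomposition expands a precomposed character into its base character followed by a combining mark, yielding a longer string:"
+{ $example "USING: prettyprint sequences unicode.normalize ;" "\"\\u0000e9\" nfd length ." "2" }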
-"Unicode is a set of characters, or " { $emphasis "code points" } " covering what's used in most world writing systems. Any Factor string can hold any of these code points transparently; a factor string is a sequence of Unicode code points. Unicode is accompanied by several standard algorithms for common operations like encoding in files, capitalizing a string, finding the boundaries between words, etc. When a programmer is faced with a string manipulation problem, where the string represents human language, a Unicode algorithm is often much better than the naive one. This is not in terms of efficiency, but rather internationalization. Even English text that remains in ASCII is better served by the Unicode collation algorithm than a naive algorithm. The Unicode algorithms implemented here are:"
+"The " { $vocab-link "unicode" } " vocabulary and its sub-vocabularies implement support for the Unicode 5.1 character set."
+$nl
+"The Unicode character set contains most of the world's writing systems. Unicode is intended as a replacement for, and is a superset of, such legacy character sets as ASCII, Latin1, MacRoman, and so on. Unicode characters are called " { $emphasis "code points" } "; Factor's " { $link "strings" } " are sequences of code points."
+$nl
+"The Unicode character set is accompanied by several standard algorithms for common operations like encoding text in files, capitalizing a string, finding the boundaries between words, and so on."
+$nl
+"The Unicode algorithms implemented by the " { $vocab-link "unicode" } " vocabulary are:"
"Since strings are sequences, basic string manipulation can be performed using sequence operations (" { $link "sequences" } "). More advanced functionality can be found in other vocabularies, including but not limited to:"
{ $list
- { { $vocab-link "ascii" } " - traditional ASCII character classes" }