Fast, Consistent Tokenization of Natural Language Text
The tokenizers in this package share a consistent interface. Each function takes a character vector of any length, a list in which each element is a character vector of length one, or a data.frame that adheres to the tif corpus format; each element (or row) is treated as a text. The function returns a list of the same length as the input, where each element contains the tokens generated from the corresponding text. If the input character vector or list is named, the names are preserved so they can serve as document identifiers. For a tif-formatted data.frame, the doc_id field is used as the element names in the returned token list.
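For instance, a tif-formatted data.frame can be passed in directly. A minimal sketch (the two-document corpus here is invented for illustration):

```r
library(tokenizers)

# A tif-style corpus: a data.frame with doc_id and text columns.
corpus <- data.frame(
  doc_id = c("doc1", "doc2"),
  text   = c("One fish, two fish.", "Red fish, blue fish."),
  stringsAsFactors = FALSE
)

tokens <- tokenize_words(corpus)
names(tokens)  # the doc_id field becomes the element names
tokens$doc1    # word tokens for the first document
```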
```r
library(magrittr)
library(tokenizers)

james <- paste0(
  "The question thus becomes a verbal one\n",
  "again; and our knowledge of all these early stages of thought and feeling\n",
  "is in any case so conjectural and imperfect that farther discussion would\n",
  "not be worth while.\n",
  "\n",
  "Religion, therefore, as I now ask you arbitrarily to take it, shall mean\n",
  "for us _the feelings, acts, and experiences of individual men in their\n",
  "solitude, so far as they apprehend themselves to stand in relation to\n",
  "whatever they may consider the divine_. Since the relation may be either\n",
  "moral, physical, or ritual, it is evident that out of religion in the\n",
  "sense in which we take it, theologies, philosophies, and ecclesiastical\n",
  "organizations may secondarily grow.\n"
)
names(james) <- "varieties"

tokenize_characters(james)[[1]] %>% head(50)
#> [1] "t" "h" "e" "q" "u" "e" "s" "t" "i" "o" "n" "t" "h" "u" "s" "b" "e" "c" "o"
#> [20] "m" "e" "s" "a" "v" "e" "r" "b" "a" "l" "o" "n" "e" "a" "g" "a" "i" "n" "a"
#> [39] "n" "d" "o" "u" "r" "k" "n" "o" "w" "l" "e" "d"

tokenize_character_shingles(james)[[1]] %>% head(20)
#> [1] "the" "heq" "equ" "que" "ues" "est" "sti" "tio" "ion" "ont" "nth" "thu"
#> [13] "hus" "usb" "sbe" "bec" "eco" "com" "ome" "mes"

tokenize_words(james)[[1]] %>% head(10)
#> [1] "the" "question" "thus" "becomes" "a" "verbal"
#> [7] "one" "again" "and" "our"

tokenize_word_stems(james)[[1]] %>% head(10)
#> [1] "the" "question" "thus" "becom" "a" "verbal"
#> [7] "one" "again" "and" "our"

tokenize_sentences(james)
#> $varieties
#> [1] "The question thus becomes a verbal one again; and our knowledge of all these early stages of thought and feeling is in any case so conjectural and imperfect that farther discussion would not be worth while."
#> [2] "Religion, therefore, as I now ask you arbitrarily to take it, shall mean for us _the feelings, acts, and experiences of individual men in their solitude, so far as they apprehend themselves to stand in relation to whatever they may consider the divine_."
#> [3] "Since the relation may be either moral, physical, or ritual, it is evident that out of religion in the sense in which we take it, theologies, philosophies, and ecclesiastical organizations may secondarily grow."

tokenize_paragraphs(james)
#> $varieties
#> [1] "The question thus becomes a verbal one again; and our knowledge of all these early stages of thought and feeling is in any case so conjectural and imperfect that farther discussion would not be worth while."
#> [2] "Religion, therefore, as I now ask you arbitrarily to take it, shall mean for us _the feelings, acts, and experiences of individual men in their solitude, so far as they apprehend themselves to stand in relation to whatever they may consider the divine_. Since the relation may be either moral, physical, or ritual, it is evident that out of religion in the sense in which we take it, theologies, philosophies, and ecclesiastical organizations may secondarily grow."

tokenize_ngrams(james, n = 5, n_min = 2)[[1]] %>% head(10)
#> [1] "the question" "the question thus"
#> [3] "the question thus becomes" "the question thus becomes a"
#> [5] "question thus" "question thus becomes"
#> [7] "question thus becomes a" "question thus becomes a verbal"
#> [9] "thus becomes" "thus becomes a"

tokenize_skip_ngrams(james, n = 5, k = 2)[[1]] %>% head(10)
#> [1] "the" "the question" "the thus"
#> [4] "the becomes" "the question thus" "the question becomes"
#> [7] "the question a" "the thus becomes" "the thus a"
#> [10] "the thus verbal"

tokenize_ptb(james)[[1]] %>% head(10)
#> [1] "The" "question" "thus" "becomes" "a" "verbal"
#> [7] "one" "again" ";" "and"

tokenize_lines(james)[[1]] %>% head(5)
#> [1] "The question thus becomes a verbal one"
#> [2] "again; and our knowledge of all these early stages of thought and feeling"
#> [3] "is in any case so conjectural and imperfect that farther discussion would"
#> [4] "not be worth while."
#> [5] "Religion, therefore, as I now ask you arbitrarily to take it, shall mean"
```
The package also contains functions to count words, characters, and sentences; these follow the same interface.
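As a quick sketch of the counting functions (the short sample text here is invented for illustration):

```r
library(tokenizers)

sample_text <- c(
  varieties = "Religion shall mean the feelings, acts, and experiences of individual men. Out of religion, theologies may grow."
)

count_words(sample_text)       # number of word tokens, named by input
count_characters(sample_text)  # number of characters
count_sentences(sample_text)   # number of sentences
```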
The [chunk_text()](reference/chunk%5Ftext.html)
function splits a document into smaller chunks, each with the same number of words.
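For example, a repetitive fifty-word text split into ten-word chunks (a minimal sketch, assuming the chunk_size argument gives the number of words per chunk):

```r
library(tokenizers)

# Fifty words: a ten-word sentence repeated five times.
long_text <- paste(rep("all work and no play makes jack a dull boy", 5),
                   collapse = " ")

chunks <- chunk_text(long_text, chunk_size = 10)
length(chunks)  # five chunks of ten words each
chunks[[1]]
```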