| Function | Description |
|---|---|
| as_tokens | Create a list of tokens |
| bind_lr | Bind the importance of bigrams |
| bind_tf_idf2 | Bind the term frequency and inverse document frequency |
| collapse_tokens | Collapse sequences of tokens by condition |
| dictionary_info | Get dictionary information |
| gbs_tokenize | Tokenize sentences using 'MeCab' |
| get_dict_features | Get a dictionary's features |
| is_blank | Check if scalars are blank |
| lex_density | Calculate lexical density |
| mute_tokens | Mute tokens by condition |
| pack | Pack a prettified data.frame of tokens |
| prettify | Prettify tokenized output |
| tokenize | Tokenize sentences using 'MeCab' |
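
A typical workflow chains several of the functions above: tokenize raw sentences with 'MeCab', prettify the feature strings into columns, then either pack the tokens back into one row per document or weight them with bind_tf_idf2. The sketch below is illustrative only; the sample data frame, its column names, and the argument choices are assumptions, not part of the reference above.

```r
# Minimal sketch, assuming the package above is loaded and a 'MeCab'
# dictionary is available on the system. Sample text and column names
# (doc_id, text) are hypothetical.
df <- data.frame(
  doc_id = c("doc1", "doc2"),
  text = c("今日は晴れです。", "明日は雨が降るでしょう。")
)

toks <- tokenize(df, text, doc_id)  # one row per token, 'MeCab' features attached
toks <- prettify(toks)              # split the raw feature string into named columns

packed <- pack(toks)                # collapse back to one space-joined row per document
```

From the prettified token table, bind_tf_idf2 can then attach term frequency and inverse document frequency columns for downstream weighting, and mute_tokens or collapse_tokens can filter or merge token sequences by a condition before packing.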