BytePairEncoding.jl
Pure Julia implementation of the Byte Pair Encoding (BPE) method.
BytePairEncoding.BPELearner
BytePairEncoding.bbpe2tiktoken
BytePairEncoding.count_words
BytePairEncoding.gpt2_codemap
BytePairEncoding.gpt2_tokenizer
BytePairEncoding.load_gpt2
BytePairEncoding.load_tiktoken
BytePairEncoding.tiktoken2bbpe
BytePairEncoding.BPELearner — Type

    BPELearner(tokenization::AbstractTokenization; min_freq = 10, endsym = "</w>", sepsym = nothing)

Construct a learner from a tokenization that contains a `BPETokenization` with a `NoBPE` inside.

    (bper::BPELearner)(word_counts, n_merge)

Calling the learner on a `word_counts` dictionary (created by `count_words`) generates a new tokenization where `NoBPE` is replaced with the learned `BPE`.
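A minimal construction sketch; composing `BPETokenization` directly around `NoBPE`, as below, is an assumption based on the description above, and the keyword values are just the documented defaults:

    using BytePairEncoding

    # Assumed composition: a tokenization whose BPE slot holds the `NoBPE`
    # placeholder, to be replaced once the merges are learned.
    bper = BPELearner(BPETokenization(NoBPE()); min_freq = 10, endsym = "</w>")

The full learning loop (counting words, then calling the learner) is sketched under `count_words` below.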
BytePairEncoding.bbpe2tiktoken — Function

    bbpe2tiktoken(tkr)

Convert a gpt2-like byte-level tokenizer (with `bpe::BPE`) to a tiktoken tokenizer (with `bpe::TikToken`). If there is a `CodeNormalizer` in the tokenizer, it will be removed accordingly.

See also: `tiktoken2bbpe`
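A usage sketch combining this with `load_gpt2` (documented below); pairing the two this way is an assumption, but both calls are as documented on this page:

    using BytePairEncoding

    gpt2 = load_gpt2()          # byte-level tokenizer carrying bpe::BPE
    ttkr = bbpe2tiktoken(gpt2)  # tiktoken-style tokenizer carrying bpe::TikToken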
BytePairEncoding.count_words — Method

    count_words(bper::BPELearner, files::AbstractVector)

Given a list of files, where each line of a file is treated as a (possibly multi-sentence) document, tokenize those files and count the occurrence of each word token.
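A sketch of the full learning loop built from the calls documented above; the file paths and the merge count are hypothetical, and the learner construction repeats the assumption noted under `BPELearner`:

    using BytePairEncoding

    bper = BPELearner(BPETokenization(NoBPE()))       # assumed composition
    files = ["corpus/part1.txt", "corpus/part2.txt"]  # hypothetical paths
    word_counts = count_words(bper, files)
    new_tokenization = bper(word_counts, 30000)       # 30000 merges (hypothetical)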
BytePairEncoding.gpt2_codemap — Method

The codemap used by OpenAI GPT-2.
BytePairEncoding.gpt2_tokenizer — Method

The tokenizer used by OpenAI GPT-2.
BytePairEncoding.load_gpt2 — Method

    load_gpt2()

Load the GPT-2 tokenizer.
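A usage sketch; calling the loaded tokenizer on a `TextEncodeBase.Sentence` and extracting strings with `getvalue` assumes the tokenizer follows the TextEncodeBase interface, which this page does not spell out:

    using BytePairEncoding, TextEncodeBase

    tkr = load_gpt2()
    # Assumed TextEncodeBase call pattern: wrap the text in a Sentence stage,
    # then pull the token strings out of the returned token stages.
    tokens = tkr(TextEncodeBase.Sentence("Hello world"))
    map(TextEncodeBase.getvalue, tokens)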
BytePairEncoding.load_tiktoken — Method

    load_tiktoken(name)

Load a tiktoken tokenizer. `name` can be "cl100k_base", "p50k_base", "p50k_edit", "r50k_base", or "gpt2".
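For example, loading two of the listed encodings:

    using BytePairEncoding

    cl100k = load_tiktoken("cl100k_base")
    r50k = load_tiktoken("r50k_base")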
BytePairEncoding.tiktoken2bbpe — Function

    tiktoken2bbpe(tkr, codemap::Union{CodeMap, Nothing} = nothing)

Convert a tiktoken tokenizer (with `bpe::TikToken`) to a gpt2-like byte-level tokenizer (with `bpe::BPE`). If `codemap` is provided, it will add the corresponding `CodeNormalizer` to the tokenizer.

See also: `bbpe2tiktoken`
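A sketch pairing this with `gpt2_codemap` from above; treating `gpt2_codemap` as a zero-argument call, and as the appropriate codemap for the "gpt2" encoding, are both assumptions:

    using BytePairEncoding

    ttkr = load_tiktoken("gpt2")
    # gpt2_codemap() as a zero-argument call is an assumption; its docstring
    # above only says it is the codemap used by OpenAI GPT-2.
    bbpe = tiktoken2bbpe(ttkr, gpt2_codemap())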