GPTF-8: A tokenizer-based character encoding

Published on November 7, 2025 7:47 AM GMT

There are two steps to any byte-based character encoding.

The first and much more interesting step is the translation from written language -- in all its chaotic glory -- to a fixed inventory of "characters", from which any string can be built up as a sequence. In the modern day, this is almost always delegated to Unicode, a huge list of characters with slots (codepoints) numbered from 0 to 1,114,111 (or 0x10FFFF in hexadecimal), of which 159,801 are currently assigned actual characters. These characters include Latin letters, typographical symbols, Cyrillic, Greek, hanzi/kanji/hanja, emoji, hieroglyphs, and a bewildering variety of control codes and other special-purpose characters.
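
For concreteness, here is a minimal Python sketch of what a codepoint is -- the specific characters are arbitrary examples chosen for illustration, not taken from the post:

```python
# Unicode assigns each character an integer codepoint in the range 0..0x10FFFF.
for ch in ["A", "ж", "漢", "😀"]:
    cp = ord(ch)                        # look up the character's codepoint
    print(f"{ch} -> U+{cp:04X} ({cp})")

print(0x10FFFF)                         # 1114111, the top of the codepoint range
```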

The boring step is assigning each codepoint a concrete sequence of bytes.
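
As a minimal sketch of that second step, here is UTF-8 -- the familiar byte encoding for Unicode -- doing that assignment in Python (the example is illustrative, not from the post):

```python
# UTF-8 maps each codepoint to a sequence of one to four bytes.
for ch in ["A", "é", "漢", "😀"]:
    encoded = ch.encode("utf-8")                   # codepoint -> bytes
    print(f"U+{ord(ch):04X} -> {list(encoded)}")   # e.g. U+00E9 -> [195, 169]
```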
