Tokens
A token is the smallest unit of text that a large language model (LLM) processes. Depending on how the model breaks down text, a token can be a character, a whole word, or part of a word. The specific form a token takes depends on the tokenization method used by the selected model.
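For example, the same sentence can split into different tokens under different tokenizers. The sketch below uses OpenAI's tiktoken library to illustrate this (an assumption for illustration; the source names no specific tokenizer, and other providers ship their own):

```python
# Minimal sketch of subword tokenization, assuming tiktoken is installed
# (pip install tiktoken). Splits shown here are specific to this encoding.
import tiktoken

# cl100k_base is one of the encodings tiktoken provides.
enc = tiktoken.get_encoding("cl100k_base")

text = "Tokenization splits text into subword units."
token_ids = enc.encode(text)  # list of integer token IDs

# Decode each ID individually to see how the text was split:
# a token may be a whole word, part of a word, or punctuation.
pieces = [enc.decode([tid]) for tid in token_ids]
print(pieces)
print(len(token_ids), "tokens")
```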
The total number of tokens for a single API call or interaction includes both the tokens sent as input to the LLM and the tokens the LLM generates in response.
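A rough sketch of that sum, assuming the input and output text are both available locally (a hypothetical setup; in practice most provider APIs also report these counts in the response, and chat-style requests add a small amount of message-formatting overhead, so API-reported totals may differ slightly):

```python
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")

prompt = "Summarize the following article in one sentence: ..."
completion = "The article argues that tokenization affects model cost."

input_tokens = len(enc.encode(prompt))     # tokens sent to the model
output_tokens = len(enc.encode(completion))  # tokens the model generated

# The total for the interaction is the sum of input and output tokens.
total_tokens = input_tokens + output_tokens
print(input_tokens, output_tokens, total_tokens)
```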