It's important to make sure that the text you paste in fits inside the model's context window. Otherwise, by the time the model has ingested your instruction, it may have forgotten what you asked at the top, or information at the bottom will be cut off. Llama models use SentencePiece, but any online tokenizer will give you a rough estimate of how many tokens your input contains.
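If you just need a quick sanity check before pasting, you don't even need a real tokenizer. A common rule of thumb is that English text averages roughly four characters per token; the sketch below uses that heuristic (an assumption, not an exact count — actual tokenizer output varies by model and language) to flag inputs that are likely to overflow a given context window:

```python
def estimate_tokens(text: str, chars_per_token: float = 4.0) -> int:
    """Rough token estimate using the ~4-characters-per-token heuristic
    for English text. Real tokenizers (SentencePiece, BPE, etc.) will
    differ, so treat this as a ballpark figure only."""
    return max(1, round(len(text) / chars_per_token))

def likely_fits(text: str, context_window: int = 4096, reserve: int = 512) -> bool:
    """Check whether text probably fits in the context window, keeping
    `reserve` tokens free for the model's reply. The window size and
    reserve here are illustrative defaults, not any specific model's."""
    return estimate_tokens(text) <= context_window - reserve

prompt = "Summarize the following document in three bullet points."
print(estimate_tokens(prompt), likely_fits(prompt))
```

For an exact count you'd load the model's own tokenizer (e.g. via the `transformers` library) instead, but the heuristic is usually close enough to tell you whether you're about to paste in far too much.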