i18n: en-US correction of num_keep parameter description

Correction of the num_keep parameter description.
The current description's example states that the "last" x tokens will be retained, which is wrong; it should say the "first" x tokens.

Tokens to Keep on Context Refresh (num_keep): Retains part of the previous conversation when the n_ctx limit is reached. A new prompt is constructed from the first n_keep tokens of the original prompt plus the second half of the output, freeing space for more conversation. Example: keeping the first 50 tokens helps the model remember the main topic when the context is refreshed.
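The refresh behavior described above can be sketched as follows. This is an illustrative simulation, not the actual open-webui or llama.cpp implementation; the function name `refresh_context` and the exact split of the remaining budget are assumptions made for the example.

```python
def refresh_context(tokens, n_ctx, n_keep):
    """Sketch of a num_keep-style context refresh (hypothetical helper):
    when the context exceeds n_ctx, keep the FIRST n_keep tokens and
    roughly the second half of the remaining budget from the most
    recent output, discarding the middle."""
    if len(tokens) <= n_ctx:
        return tokens                      # context not full yet; nothing to do
    head = tokens[:n_keep]                 # first n_keep tokens are preserved
    tail_budget = (n_ctx - n_keep) // 2    # assumed split: half of the rest
    tail = tokens[-tail_budget:]           # most recent tokens
    return head + tail

# Example: context limit of 10 tokens, keep the first 2
tokens = list(range(12))
print(refresh_context(tokens, n_ctx=10, n_keep=2))
# → [0, 1, 8, 9, 10, 11]  (topic-setting head kept, middle dropped)
```

Note that the preserved tokens come from the start of the prompt, which is exactly why the translated string must say "first", not "last".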

https://github.com/open-webui/open-webui/discussions/3794#discussioncomment-12691428
_00_ 2025-06-19 14:18:15 +02:00
parent a196b9dc26
commit f939646364

@@ -1237,7 +1237,7 @@
 	"This is an experimental feature, it may not function as expected and is subject to change at any time.": "",
 	"This model is not publicly available. Please select another model.": "",
 	"This option controls how long the model will stay loaded into memory following the request (default: 5m)": "",
-	"This option controls how many tokens are preserved when refreshing the context. For example, if set to 2, the last 2 tokens of the conversation context will be retained. Preserving context can help maintain the continuity of a conversation, but it may reduce the ability to respond to new topics.": "",
+	"This option controls how many first tokens are preserved when refreshing the context. For example, if set to 2, the first 2 tokens of the conversation context will be retained. Preserving context can help maintain the continuity of a conversation, but it may reduce the ability to respond to new topics.": "",
 	"This option enables or disables the use of the reasoning feature in Ollama, which allows the model to think before generating a response. When enabled, the model can take a moment to process the conversation context and generate a more thoughtful response.": "",
 	"This option sets the maximum number of tokens the model can generate in its response. Increasing this limit allows the model to provide longer answers, but it may also increase the likelihood of unhelpful or irrelevant content being generated.": "",
 	"This option will delete all existing files in the collection and replace them with newly uploaded files.": "",