mirror of
https://github.com/open-webui/open-webui
synced 2025-03-04 19:38:54 +00:00
Update zh-CN translation
1. Remove the stray closing quote character ("”") from "-1 表示无限制,正整数表示具体限制”".
2. Keep "Token" as-is rather than translating it as "标记".
3. Translate "Max Tokens (num_predict)" as "最大Token数量 (num_predict)".
4. In "Enter Jupyter Token", translate "Token" as "令牌", matching "JWT Token": "JWT 令牌" (line 582).
5. Translate "TLS" (Transport Layer Security) as "传输层安全协议".
6. Translate "Tokens To Keep On Context Refresh (num_keep)" as "在语境刷新时需保留的 Token 数量".
7. Normalize "token" to "Token" throughout the Chinese translation.
This commit is contained in:
parent
6fedd72e39
commit
1332a0d381
@@ -1,5 +1,5 @@
 {
-	"-1 for no limit, or a positive integer for a specific limit": "-1 表示无限制,正整数表示具体限制”",
+	"-1 for no limit, or a positive integer for a specific limit": "-1 表示无限制,正整数表示具体限制",
 	"'s', 'm', 'h', 'd', 'w' or '-1' for no expiration.": "'s', 'm', 'h', 'd', 'w' 或 '-1' 表示无过期时间。",
 	"(e.g. `sh webui.sh --api --api-auth username_password`)": "(例如 `sh webui.sh --api --api-auth username_password`)",
 	"(e.g. `sh webui.sh --api`)": "(例如 `sh webui.sh --api`)",
@@ -63,7 +63,7 @@
 	"Allow Voice Interruption in Call": "允许通话中的打断语音",
 	"Allowed Endpoints": "允许的端点",
 	"Already have an account?": "已经拥有账号了?",
-	"Alternative to the top_p, and aims to ensure a balance of quality and variety. The parameter p represents the minimum probability for a token to be considered, relative to the probability of the most likely token. For example, with p=0.05 and the most likely token having a probability of 0.9, logits with a value less than 0.045 are filtered out. (Default: 0.0)": "top_p的替代方法,目标是在质量和多样性之间取得平衡。参数p表示一个token相对于最有可能的token所需的最低概率。比如,当p=0.05且最有可能的token概率为0.9时,概率低于0.045的logits会被排除。(默认值:0.0)",
+	"Alternative to the top_p, and aims to ensure a balance of quality and variety. The parameter p represents the minimum probability for a token to be considered, relative to the probability of the most likely token. For example, with p=0.05 and the most likely token having a probability of 0.9, logits with a value less than 0.045 are filtered out. (Default: 0.0)": "top_p的替代方法,目标是在质量和多样性之间取得平衡。参数p表示一个Token相对于最有可能的Token所需的最低概率。比如,当p=0.05且最有可能的Token概率为0.9时,概率低于0.045的logits会被排除。(默认值:0.0)",
 	"Always": "保持",
 	"Amazing": "很棒",
 	"an assistant": "一个助手",
@@ -380,7 +380,7 @@
 	"Enter Image Size (e.g. 512x512)": "输入图像分辨率 (例如:512x512)",
 	"Enter Jina API Key": "输入 Jina API 密钥",
 	"Enter Jupyter Password": "输入 Jupyter 密码",
-	"Enter Jupyter Token": "输入 Jupyter Token",
+	"Enter Jupyter Token": "输入 Jupyter 令牌",
 	"Enter Jupyter URL": "输入 Jupyter URL",
 	"Enter Kagi Search API Key": "输入 Kagi Search API 密钥",
 	"Enter language codes": "输入语言代码",
@@ -629,7 +629,7 @@
 	"Manage OpenAI API Connections": "管理OpenAI API连接",
 	"Manage Pipelines": "管理 Pipeline",
 	"March": "三月",
-	"Max Tokens (num_predict)": "最多 Token (num_predict)",
+	"Max Tokens (num_predict)": "最大Token数量 (num_predict)",
 	"Max Upload Count": "最大上传数量",
 	"Max Upload Size": "最大上传大小",
 	"Maximum of 3 models can be downloaded simultaneously. Please try again later.": "最多可以同时下载 3 个模型,请稍后重试。",
@@ -910,14 +910,14 @@
 	"Set the number of worker threads used for computation. This option controls how many threads are used to process incoming requests concurrently. Increasing this value can improve performance under high concurrency workloads but may also consume more CPU resources.": "设置用于计算的工作线程数量。该选项可控制并发处理传入请求的线程数量。增加该值可以提高高并发工作负载下的性能,但也可能消耗更多的 CPU 资源。",
 	"Set Voice": "设置音色",
 	"Set whisper model": "设置 whisper 模型",
-	"Sets a flat bias against tokens that have appeared at least once. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 0.9) will be more lenient. At 0, it is disabled. (Default: 0)": "这个设置项用于调整对重复 tokens 的抑制强度。当某个 token 至少出现过一次后,系统会通过 flat bias 参数施加惩罚力度:数值越大(如 1.5),抑制重复的效果越强烈;数值较小(如 0.9)则相对宽容。当设为 0 时,系统会完全关闭这个重复抑制功能(默认值为 0)。",
-	"Sets a scaling bias against tokens to penalize repetitions, based on how many times they have appeared. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 0.9) will be more lenient. At 0, it is disabled. (Default: 1.1)": "这个参数用于通过 scaling bias 机制抑制重复内容:当某些 tokens 重复出现时,系统会根据它们已出现的次数自动施加惩罚。数值越大(如 1.5)惩罚力度越强,能更有效减少重复;数值较小(如 0.9)则允许更多重复。当设为 0 时完全关闭该功能,默认值设置为 1.1 保持适度抑制。",
+	"Sets a flat bias against tokens that have appeared at least once. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 0.9) will be more lenient. At 0, it is disabled. (Default: 0)": "这个设置项用于调整对重复 Token 的抑制强度。当某个 Token 至少出现过一次后,系统会通过 flat bias 参数施加惩罚力度:数值越大(如 1.5),抑制重复的效果越强烈;数值较小(如 0.9)则相对宽容。当设为 0 时,系统会完全关闭这个重复抑制功能(默认值为 0)。",
+	"Sets a scaling bias against tokens to penalize repetitions, based on how many times they have appeared. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 0.9) will be more lenient. At 0, it is disabled. (Default: 1.1)": "这个参数用于通过 scaling bias 机制抑制重复内容:当某些 Token 重复出现时,系统会根据它们已出现的次数自动施加惩罚。数值越大(如 1.5)惩罚力度越强,能更有效减少重复;数值较小(如 0.9)则允许更多重复。当设为 0 时完全关闭该功能,默认值设置为 1.1 保持适度抑制。",
 	"Sets how far back for the model to look back to prevent repetition. (Default: 64, 0 = disabled, -1 = num_ctx)": "设置模型回溯多远以防止重复。(默认值:64,0 = 禁用,-1 = num_ctx)",
 	"Sets the random number seed to use for generation. Setting this to a specific number will make the model generate the same text for the same prompt. (Default: random)": "设置 random number seed 可以控制模型生成文本的随机起点。如果指定一个具体数字,当输入相同的提示语时,模型每次都会生成完全相同的文本内容(默认是随机选取 seed)。",
 	"Sets the size of the context window used to generate the next token. (Default: 2048)": "设置用于生成下一个 Token 的上下文大小。(默认值:2048)",
 	"Sets the stop sequences to use. When this pattern is encountered, the LLM will stop generating text and return. Multiple stop patterns may be set by specifying multiple separate stop parameters in a modelfile.": "设置要使用的停止序列。遇到这种模式时,大语言模型将停止生成文本并返回。可以通过在模型文件中指定多个单独的停止参数来设置多个停止模式。",
 	"Settings": "设置",
-	"Settings saved successfully!": "设置已保存",
+	"Settings saved successfully!": "设置已成功保存!",
 	"Share": "分享",
 	"Share Chat": "分享对话",
 	"Share to Open WebUI Community": "分享到 OpenWebUI 社区",
@@ -956,7 +956,7 @@
 	"System Prompt": "系统提示词 (System Prompt)",
 	"Tags Generation": "标签生成",
 	"Tags Generation Prompt": "标签生成提示词",
-	"Tail free sampling is used to reduce the impact of less probable tokens from the output. A higher value (e.g., 2.0) will reduce the impact more, while a value of 1.0 disables this setting. (default: 1)": "Tail free sampling 用于减少输出中可能性较低的标记的影响。数值越大(如 2.0),影响就越小,而数值为 1.0 则会禁用此设置。(默认值:1)",
+	"Tail free sampling is used to reduce the impact of less probable tokens from the output. A higher value (e.g., 2.0) will reduce the impact more, while a value of 1.0 disables this setting. (default: 1)": "Tail free sampling 用于减少输出中可能性较低的Token的影响。数值越大(如 2.0),影响就越小,而数值为 1.0 则会禁用此设置。(默认值:1)",
 	"Tap to interrupt": "点击以中断",
 	"Tasks": "任务",
 	"Tavily API Key": "Tavily API 密钥",
@@ -985,7 +985,7 @@
 	"This action cannot be undone. Do you wish to continue?": "此操作无法撤销。是否确认继续?",
 	"This ensures that your valuable conversations are securely saved to your backend database. Thank you!": "这将确保您的宝贵对话被安全地保存到后台数据库中。感谢!",
 	"This is an experimental feature, it may not function as expected and is subject to change at any time.": "这是一个实验功能,可能不会如预期那样工作,而且可能随时发生变化。",
-	"This option controls how many tokens are preserved when refreshing the context. For example, if set to 2, the last 2 tokens of the conversation context will be retained. Preserving context can help maintain the continuity of a conversation, but it may reduce the ability to respond to new topics. (Default: 24)": "该选项控制刷新上下文时保留多少标记。例如,如果设置为 2,就会保留对话上下文的最后 2 个标记。保留上下文有助于保持对话的连续性,但可能会降低回复新话题的能力。(默认值:24)",
+	"This option controls how many tokens are preserved when refreshing the context. For example, if set to 2, the last 2 tokens of the conversation context will be retained. Preserving context can help maintain the continuity of a conversation, but it may reduce the ability to respond to new topics. (Default: 24)": "该选项控制刷新上下文时保留多少Token。例如,如果设置为 2,就会保留对话上下文的最后 2 个Token。保留上下文有助于保持对话的连续性,但可能会降低回复新话题的能力。(默认值:24)",
 	"This option sets the maximum number of tokens the model can generate in its response. Increasing this limit allows the model to provide longer answers, but it may also increase the likelihood of unhelpful or irrelevant content being generated. (Default: 128)": "此选项设置了模型在回答中可以生成的最大 Token 数。增加这个限制可以让模型提供更长的答案,但也可能增加生成无用或不相关内容的可能性。 (默认值:128)",
 	"This option will delete all existing files in the collection and replace them with newly uploaded files.": "此选项将会删除文件集中所有文件,并用新上传的文件替换。",
 	"This response was generated by \"{{model}}\"": "此回复由 \"{{model}}\" 生成",
@@ -1007,7 +1007,7 @@
 	"Title cannot be an empty string.": "标题不能为空。",
 	"Title Generation": "标题生成",
 	"Title Generation Prompt": "用于自动生成标题的提示词",
-	"TLS": "TLS",
+	"TLS": "传输层安全协议",
 	"To access the available model names for downloading,": "要访问可下载的模型名称,",
 	"To access the GGUF models available for downloading,": "要访问可下载的 GGUF 模型,",
 	"To access the WebUI, please reach out to the administrator. Admins can manage user statuses from the Admin Panel.": "请联系管理员以访问。管理员可以在后台管理面板中管理用户状态。",
@@ -1022,7 +1022,7 @@
 	"Toggle settings": "切换设置",
 	"Toggle sidebar": "切换侧边栏",
 	"Token": "Token",
-	"Tokens To Keep On Context Refresh (num_keep)": "在语境刷新时需保留的 Tokens",
+	"Tokens To Keep On Context Refresh (num_keep)": "在语境刷新时需保留的 Token 数量",
 	"Too verbose": "过于冗长",
 	"Tool created successfully": "工具创建成功",
 	"Tool deleted successfully": "工具删除成功",
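The commit's consistency rules (keep "Token" untranslated, avoid "标记") could be spot-checked mechanically across the locale file. Below is a minimal sketch of such a check; the `find_inconsistencies` helper and the in-memory sample are hypothetical stand-ins for loading the actual zh-CN JSON, whose path is not fixed here:

```python
import re

def find_inconsistencies(translations: dict) -> list:
    """Flag zh-CN entries where 'Token' was rendered as '标记'
    or left as lowercase 'token' in the translated value."""
    issues = []
    for key, value in translations.items():
        # Rule 2/7: 'Token' in a source key should not become '标记'.
        if "token" in key.lower() and "标记" in value:
            issues.append((key, "translates 'Token' as '标记'"))
        # Rule 7: lowercase 'token' left untouched in the value.
        if re.search(r"\btoken\b", value):
            issues.append((key, "contains lowercase 'token'"))
    return issues

# Small in-memory sample standing in for the real locale JSON.
sample = {
    "Max Tokens (num_predict)": "最多标记 (num_predict)",  # old, inconsistent
    "Token": "Token",                                      # already consistent
}
print(find_inconsistencies(sample))
```

Running this against the full `zh-CN` translation dictionary (loaded with `json.load`) would surface any entries the commit missed.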