i18n: Update zh-CN

qingchun 2025-06-14 02:34:08 +08:00 committed by GitHub
parent 4978dd9085
commit 652e1a6b19
No known key found for this signature in database
GPG Key ID: B5690EEEBB952194


@@ -209,8 +209,8 @@
"Clone Chat": "克隆对话",
"Clone of {{TITLE}}": "{{TITLE}} 的副本",
"Close": "关闭",
-"Close modal": "",
-"Close settings modal": "",
+"Close modal": "关闭弹窗",
+"Close settings modal": "关闭设置弹窗",
"Code execution": "代码执行",
"Code Execution": "代码执行",
"Code Execution Engine": "代码执行引擎",
@@ -910,7 +910,7 @@
"Oops! You're using an unsupported method (frontend only). Please serve the WebUI from the backend.": "你正在使用不被支持的方法(仅运行前端服务)。需要后端提供 WebUI 服务。",
"Open file": "打开文件",
"Open in full screen": "全屏打开",
-"Open modal to configure connection": "",
+"Open modal to configure connection": "打开外部连接配置弹窗",
"Open new chat": "打开新对话",
"Open WebUI can use tools provided by any OpenAPI server.": "Open WebUI 可使用任何 OpenAPI 服务器提供的工具。",
"Open WebUI uses faster-whisper internally.": "Open WebUI 使用内置 faster-whisper",
@@ -1195,7 +1195,7 @@
"The batch size determines how many text requests are processed together at once. A higher batch size can increase the performance and speed of the model, but it also requires more memory.": "批处理大小决定了一次可以处理多少个文本请求。更高的批处理大小可以提高模型的性能和速度,但也需要更多内存。",
"The developers behind this plugin are passionate volunteers from the community. If you find this plugin helpful, please consider contributing to its development.": "本插件的开发者是社区中充满热情的志愿者。如果此插件有帮助到您,请考虑为开发贡献一份力量。",
"The evaluation leaderboard is based on the Elo rating system and is updated in real-time.": "排行榜基于 Elo 评级系统并实时更新。",
-"The format to return a response in. Format can be json or a JSON schema.": "",
+"The format to return a response in. Format can be json or a JSON schema.": "响应返回格式。可为 json 或 JSON schema。",
"The language of the input audio. Supplying the input language in ISO-639-1 (e.g. en) format will improve accuracy and latency. Leave blank to automatically detect the language.": "输入音频的语言。以 ISO-639-1 格式(例如:en)指定输入语言可提高准确性和响应速度。留空则自动检测语言。",
"The LDAP attribute that maps to the mail that users use to sign in.": "映射到用户登录时使用的邮箱的 LDAP 属性。",
"The LDAP attribute that maps to the username that users use to sign in.": "映射到用户登录时使用的用户名的 LDAP 属性。",
@@ -1210,13 +1210,13 @@
"This action cannot be undone. Do you wish to continue?": "此操作无法撤销。你确认要继续吗?",
"This channel was created on {{createdAt}}. This is the very beginning of the {{channelName}} channel.": "此频道创建于{{createdAt}},这里是{{channelName}}频道的开始",
"This chat won't appear in history and your messages will not be saved.": "此对话不会出现在历史记录中,且您的消息不会被保存",
-"This chat wont appear in history and your messages will not be saved.": "",
+"This chat wont appear in history and your messages will not be saved.": "此对话不会出现在历史记录中,且您的消息不会被保存",
"This ensures that your valuable conversations are securely saved to your backend database. Thank you!": "这将确保您的宝贵对话被安全地保存到后台数据库中。感谢!",
"This is an experimental feature, it may not function as expected and is subject to change at any time.": "这是一个实验性功能,可能不会如预期那样工作,而且可能随时发生变化。",
"This model is not publicly available. Please select another model.": "此模型未公开。请选择其他模型",
-"This option controls how long the model will stay loaded into memory following the request (default: 5m)": "",
+"This option controls how long the model will stay loaded into memory following the request (default: 5m)": "此选项控制模型在请求后在内存中保持加载状态的时长(默认:5 分钟)",
"This option controls how many tokens are preserved when refreshing the context. For example, if set to 2, the last 2 tokens of the conversation context will be retained. Preserving context can help maintain the continuity of a conversation, but it may reduce the ability to respond to new topics.": "此选项控制刷新上下文时保留多少 Token。例如如果设置为 2则将保留对话上下文的最后 2 个 Token。保留上下文有助于保持对话的连续性但可能会降低响应新主题的能力。",
-"This option enables or disables the use of the reasoning feature in Ollama, which allows the model to think before generating a response. When enabled, the model can take a moment to process the conversation context and generate a more thoughtful response.": "",
+"This option enables or disables the use of the reasoning feature in Ollama, which allows the model to think before generating a response. When enabled, the model can take a moment to process the conversation context and generate a more thoughtful response.": "此选项用于启用或禁用 Ollama 的推理功能,该功能允许模型在生成响应前进行思考。启用后,模型需要花些时间处理对话上下文,从而生成更缜密的回复。",
"This option sets the maximum number of tokens the model can generate in its response. Increasing this limit allows the model to provide longer answers, but it may also increase the likelihood of unhelpful or irrelevant content being generated.": "此项用于设置模型在其响应中可以生成的最大 Token 数。增加此限制可让模型提供更长的答案,但也可能增加生成无用或不相关内容的可能性。",
"This option will delete all existing files in the collection and replace them with newly uploaded files.": "此选项将会删除文件集中所有文件,并用新上传的文件替换。",
"This response was generated by \"{{model}}\"": "此回复由 \"{{model}}\" 生成",
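The pairs in this file map English source keys to zh-CN values, with `{{...}}` placeholders (such as `{{TITLE}}` and `{{model}}` above) filled in at render time, and empty values falling back to the English key. A minimal sketch of that lookup behavior, assuming an i18next-style store — the `MESSAGES` dict and `t` helper here are hypothetical illustrations, not Open WebUI's actual API:

```python
import re

# Hypothetical in-memory store mirroring a few entries from this diff.
# An empty or missing value means "untranslated".
MESSAGES = {
    "Clone of {{TITLE}}": "{{TITLE}} 的副本",
    "Close settings modal": "关闭设置弹窗",
    "Close modal": "",
}

def t(key, variables=None):
    """Look up a translation by its English source key and substitute
    {{placeholder}} variables; fall back to the key itself when the
    translation is empty or missing."""
    template = MESSAGES.get(key) or key
    if variables:
        template = re.sub(
            r"\{\{(\w+)\}\}",
            lambda m: str(variables.get(m.group(1), m.group(0))),
            template,
        )
    return template

print(t("Clone of {{TITLE}}", {"TITLE": "周报"}))  # → 周报 的副本
print(t("Close modal"))                           # → Close modal (fallback)
```

The fallback-to-key behavior is why the empty `""` values filled in by this commit matter: before it, those UI strings rendered in English for zh-CN users.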